Smoke valid args for binary ufunc tests #18
Closed
Changes from all commits (22 commits)
08cc8ed  ENH: introduce scalar type hierarchy (ev-br)
fd4a4d9  TST: undo (some) test skips of tests with scalars (ev-br)
5b4d716  ENH: add np.issubdtype checker to mimic numpy (ev-br)
c5f4949  ENH: introduce scalar type hierarchy (ev-br)
e1fa959  TST: undo (some) test skips of tests with scalars (ev-br)
5c2e6f9  ENH: add np.issubdtype checker to mimic numpy (ev-br)
7622143  MAINT: adapt assert_equal, assert_array_equal (ev-br)
0a391da  TST: fix test_scalar_ctors from numpy (ev-br)
1c8900e  MAINT: test_scalar_methods from numpy (ev-br)
43a894d  MAINT: numpy-vendored tests get through the collection stage (ev-br)
5c9adde  MAINT: multiple assorted fixes to make numpy tests pass (ev-br)
c0d5113  BUG: np.asarray(arr) returns arr, not a copy (ev-br)
09ce7e0  BUG: fix import in test_ufunc_basic (ev-br)
0b5e9a7  API: add tests to stipulate equivalence of array scalars and 0D arrays (ev-br)
5be93f1  TST: test_numerictypes: remove definitely unsupported things (ev-br)
a07fab6  BUG: fix the scalar type hierarchy, so that issubdtype works (ev-br)
eec3bba  ENH: add dtype.itemsize, rm a bunch of tests of timedelta, dtype(str) … (ev-br)
adf9c73  ENH: dtypes pickle/unpickle (ev-br)
2d7d932  TST: test_dtype from NumPy passes (with skips/fails, of course) (ev-br)
6993215  ENH: add iinfo, finfo (ev-br)
2830ada  MAINT: update .gitignore (ev-br)
91b3cc6  Rudimentary autogen binary ufuncs input type fix (honno)
File: .gitignore

@@ -1,7 +1,5 @@
-__pycache__/*
-autogen/__pycache__
-torch_np/__pycache__/*
-torch_np/tests/__pycache__/*
-torch_np/tests/numpy_tests/core/__pycache__/*
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
 .coverage
File: torch_np/__init__.py

@@ -1,12 +1,15 @@
 from ._dtypes import *
+from ._scalar_types import *
 from ._wrapper import *
-from . import testing
+#from . import testing

 from ._unary_ufuncs import *
 from ._binary_ufuncs import *
 from ._ndarray import can_cast, result_type, newaxis
 from ._util import AxisError

+from ._getlimits import iinfo, finfo
+from ._getlimits import errstate

 inf = float('inf')
 nan = float('nan')
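For orientation, the re-exports above are what assemble the public torch_np namespace. A hypothetical smoke test of that wiring (a sketch, not from the PR; it assumes the starred imports export NumPy-style names such as int64):

```python
# Hypothetical usage of the assembled torch_np namespace.
# Assumes `np.int64` is provided by the starred imports above.
import torch_np as np

print(np.inf, np.nan)              # module-level constants defined above
print(np.iinfo(np.int64).max)      # re-exported from ._getlimits
with np.errstate(all="ignore"):    # currently a no-op stub (see _getlimits)
    pass
```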
File: torch_np/_getlimits.py (new file)

@@ -0,0 +1,19 @@
+import torch
+from . import _dtypes
+
+def finfo(dtyp):
+    torch_dtype = _dtypes.torch_dtype_from(dtyp)
+    return torch.finfo(torch_dtype)
+
+
+def iinfo(dtyp):
+    torch_dtype = _dtypes.torch_dtype_from(dtyp)
+    return torch.iinfo(torch_dtype)
+
+
+import contextlib
+
+# FIXME: this is only a stub
+@contextlib.contextmanager
+def errstate(*args, **kwds):
+    yield
File: torch_np/_ndarray.py

@@ -110,6 +110,16 @@ def __neq__(self, other):
     def __gt__(self, other):
         return asarray(self._tensor > asarray(other).get())

+    def __lt__(self, other):
+        return asarray(self._tensor < asarray(other).get())
+
+    def __ge__(self, other):
+        return asarray(self._tensor >= asarray(other).get())
+
+    def __le__(self, other):
+        return asarray(self._tensor <= asarray(other).get())
+
+
     def __bool__(self):
         try:
             return bool(self._tensor)

Review comment (on the added comparison methods): NB: this will need to be redone similar to gh-17
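The new comparison dunders all follow one pattern: coerce the other operand, run the torch comparison, and re-wrap the boolean result via asarray. The torch half of that round trip looks like this (a standalone sketch; the ndarray wrapper itself is elided):

```python
import torch

a = torch.tensor([1, 2, 3])
b = torch.tensor([2, 2, 2])

# Element-wise comparisons on tensors return bool tensors, which the
# dunders above then re-wrap into torch_np ndarrays via asarray(...).
print(a < b)     # tensor([ True, False, False])
print(a >= b)    # tensor([False,  True,  True])
```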
@@ -131,6 +141,15 @@ def __hash__(self):

     def __float__(self):
         return float(self._tensor)

+    # XXX : are single-element ndarrays scalars?
+    def is_integer(self):
+        if self.shape == ():
+            if _dtypes.is_integer(self.dtype):
+                return True
+            return self._tensor.item().is_integer()
+        else:
+            return False
+

     ### sequence ###
     def __len__(self):
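is_integer mirrors float.is_integer for 0-d arrays: integer dtypes are trivially integral, float 0-d arrays defer to the scalar's own is_integer, and anything with a non-empty shape reports False. The scalar behaviour it leans on is standard Python and torch, so it can be checked directly:

```python
import torch

# float.is_integer is what the 0-d branch ultimately calls via .item()
print((5.0).is_integer())        # True
print((5.5).is_integer())        # False

# .item() on a 0-d float tensor yields a Python float
t = torch.tensor(5.0)
print(t.item().is_integer())     # True
```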
@@ -162,6 +181,15 @@ def __truediv__(self, other):

         other_tensor = asarray(other).get()
         return asarray(self._tensor.__truediv__(other_tensor))

+    def __or__(self, other):
+        other_tensor = asarray(other).get()
+        return asarray(self._tensor.__or__(other_tensor))
+
+    def __ior__(self, other):
+        other_tensor = asarray(other).get()
+        return asarray(self._tensor.__ior__(other_tensor))
+

     def __invert__(self):
         return asarray(self._tensor.__invert__())
@@ -307,7 +335,8 @@ def sum(self, axis=None, dtype=None, out=None, keepdims=NoValue,

     ### indexing ###
     def __getitem__(self, *args, **kwds):
-        return ndarray._from_tensor_and_base(self._tensor.__getitem__(*args, **kwds), self)
+        t_args = _helpers.to_tensors(*args)
+        return ndarray._from_tensor_and_base(self._tensor.__getitem__(*t_args, **kwds), self)

     def __setitem__(self, index, value):
         value = asarray(value).get()
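The __getitem__ fix routes index arguments through _helpers.to_tensors before handing them to torch, since torch.Tensor.__getitem__ does not understand the wrapper type. A hedged sketch of what such a helper plausibly does (the real _helpers.to_tensors is not shown in this diff):

```python
import torch

# Hypothetical stand-in for _helpers.to_tensors: unwrap any torch_np
# ndarray-like argument to its backing tensor, pass everything else through.
def to_tensors(*args):
    return tuple(
        arg.get() if hasattr(arg, "get") else arg
        for arg in args
    )

t = torch.arange(10)
idx = torch.tensor([1, 3, 5])     # a wrapped index would unwrap to this
print(t[to_tensors(idx)[0]])      # tensor([1, 3, 5])
```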
@@ -320,6 +349,8 @@ def asarray(a, dtype=None, order=None, *, like=None):

         raise NotImplementedError

     if isinstance(a, ndarray):
+        if dtype is not None and dtype != a.dtype:
+            a = a.astype(dtype)
         return a

     if isinstance(a, (list, tuple)):
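This matches NumPy's contract: asarray hands back the input array itself when no conversion is needed, and only produces a new array when the requested dtype differs. The NumPy behaviour being mimicked:

```python
import numpy as np

a = np.array([1, 2, 3])

# No dtype change: asarray must return the very same object, not a copy.
print(np.asarray(a) is a)          # True

# dtype change: a converted array comes back instead.
b = np.asarray(a, dtype=np.float64)
print(b is a, b.dtype)             # False float64
```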
@@ -356,6 +387,10 @@ def array(object, dtype=None, *, copy=True, order='K', subok=False, ndmin=0,

     if isinstance(object, ndarray):
         result = object._tensor
+
+        if dtype != object.dtype:
+            torch_dtype = _dtypes.torch_dtype_from(dtype)
+            result = result.to(torch_dtype)
     else:
         torch_dtype = _dtypes.torch_dtype_from(dtype)
         result = torch.as_tensor(object, dtype=torch_dtype)
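Both branches lean on documented torch behaviour: Tensor.to returns the tensor itself when the dtype already matches, and a converted tensor otherwise, while torch.as_tensor builds a tensor from Python data, reusing memory where it can. A quick check of that (plain torch, no torch_np assumptions):

```python
import torch

data = [1.0, 2.0, 3.0]
t = torch.as_tensor(data)          # float32 under the default dtype

# Same dtype: .to returns self. Different dtype: a converted tensor,
# mirroring the dtype handling in the ndarray branch above.
print(t.to(torch.float32) is t)    # True
print(t.to(torch.float64).dtype)   # torch.float64
```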
Review comment: Not sure why it is better than raising? (Uhm, that was me. Meaning this code is not right, long term.)