MAINT: capitalization nits #171


Merged: 1 commit, Mar 20, 2025
4 changes: 2 additions & 2 deletions src/array_api_extra/_lib/_at.py
@@ -187,7 +187,7 @@ class at:  # pylint: disable=invalid-name  # numpydoc ignore=PR02

>>> x = x.at[1].add(2)

-    If x is a read-only numpy array, they are the same as::
+    If x is a read-only NumPy array, they are the same as::

>>> x = x.copy()
>>> x[1] += 2
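The docstring change above sits in an example showing that, for a read-only NumPy array, an `x.at[1].add(2)` update reduces to copy-then-modify. A minimal plain-NumPy sketch of that equivalence (no array_api_extra needed):

```python
import numpy as np

x = np.arange(3.0)
x.flags.writeable = False  # simulate a read-only array, as e.g. JAX returns

# An in-place update fails on a read-only array...
try:
    x[1] += 2
except ValueError:
    pass  # "assignment destination is read-only"

# ...so the update must go through a copy, exactly as the docstring shows.
y = x.copy()
y[1] += 2
print(y)  # [0. 3. 2.]
```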
@@ -430,7 +430,7 @@ def min(
"""Apply ``x[idx] = minimum(x[idx], y)`` and return the updated array."""
# On Dask, this function runs on the chunks, so we need to determine the
# namespace that Dask is wrapping.
-    # Note that da.minimum _incidentally_ works on numpy, cupy, and sparse
+    # Note that da.minimum _incidentally_ works on NumPy, CuPy, and sparse
# thanks to all these meta-namespaces implementing the __array_ufunc__
# interface, but there's no guarantee that it will work for other
# wrapped libraries in the future.
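The comment explains that `da.minimum` happens to work on other backends because those libraries implement the `__array_ufunc__` protocol. A toy illustration of the protocol itself, using a hypothetical `Logged` wrapper (Dask not required):

```python
import numpy as np

class Logged:
    """Tiny wrapper whose __array_ufunc__ intercepts NumPy ufuncs."""
    def __init__(self, data):
        self.data = np.asarray(data)

    def __array_ufunc__(self, ufunc, method, *inputs, **kwargs):
        # Unwrap any Logged inputs, apply the ufunc, and re-wrap the result.
        raw = [i.data if isinstance(i, Logged) else i for i in inputs]
        return Logged(getattr(ufunc, method)(*raw, **kwargs))

a = Logged([3, 1, 4])
out = np.minimum(a, 2)  # NumPy dispatches to Logged.__array_ufunc__
print(type(out).__name__, out.data)  # Logged [2 1 2]
```

This is the same hook that lets NumPy ufuncs (and, incidentally, `da.minimum`) operate on foreign array types without knowing about them in advance.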
4 changes: 2 additions & 2 deletions src/array_api_extra/_lib/_funcs.py
@@ -260,7 +260,7 @@ def broadcast_shapes(*shapes: tuple[float | None, ...]) -> tuple[int | None, ...
(4, 2, 3)
"""
if not shapes:
-        return ()  # Match numpy output
+        return ()  # Match NumPy output

ndim = max(len(shape) for shape in shapes)
out: list[int | None] = []
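For reference, NumPy's own `broadcast_shapes` exhibits the behaviour this early return matches:

```python
import numpy as np

# No inputs broadcast to the empty (scalar) shape.
empty = np.broadcast_shapes()
print(empty)  # ()

# Regular broadcasting, matching the docstring example above.
shapes = np.broadcast_shapes((4, 1, 3), (2, 3))
print(shapes)  # (4, 2, 3)
```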
@@ -538,7 +538,7 @@ def isclose(
a_inexact = xp.isdtype(a.dtype, ("real floating", "complex floating"))
b_inexact = xp.isdtype(b.dtype, ("real floating", "complex floating"))
if a_inexact or b_inexact:
-        # prevent warnings on numpy and dask on inf - inf
+        # prevent warnings on NumPy and Dask on inf - inf
mxp = meta_namespace(a, b, xp=xp)
out = apply_where(
xp.isinf(a) | xp.isinf(b),
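The masking idea behind the `apply_where` call can be sketched in plain NumPy: subtracting only the finite lanes means `inf - inf` is never evaluated, so no RuntimeWarning fires. This is a rough sketch of the idea, not the actual `apply_where` implementation:

```python
import numpy as np

a = np.array([1.0, np.inf])
b = np.array([1.0, np.inf])

# Naive subtraction evaluates inf - inf -> nan (and warns by default),
# so the inf lane wrongly compares as not-close.
with np.errstate(invalid="ignore"):  # silence the warning for the demo
    naive = np.abs(a - b) <= 1e-8    # [ True False]

# Split on the inf mask first: the inf lanes are compared with ==,
# and subtraction only ever touches finite values.
mask = np.isinf(a) | np.isinf(b)
out = np.empty(a.shape, dtype=bool)
out[mask] = a[mask] == b[mask]
out[~mask] = np.abs(a[~mask] - b[~mask]) <= 1e-8
print(out)  # [ True  True]
```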
8 changes: 4 additions & 4 deletions src/array_api_extra/_lib/_lazy.py
@@ -86,7 +86,7 @@ def lazy_apply(  # type: ignore[valid-type]  # numpydoc ignore=GL07,SA04
One or more Array API compliant arrays, Python scalars, or None's.

If `as_numpy=True`, you need to be able to apply :func:`numpy.asarray` to
-        non-None args to convert them to numpy; read notes below about specific
+        non-None args to convert them to NumPy; read notes below about specific
backends.
shape : tuple[int | None, ...] | Sequence[tuple[int | None, ...]], optional
Output shape or sequence of output shapes, one for each output of `func`.
@@ -97,7 +97,7 @@ def lazy_apply(  # type: ignore[valid-type]  # numpydoc ignore=GL07,SA04
Default: infer the result type(s) from the input arrays.
as_numpy : bool, optional
If True, convert the input arrays to NumPy before passing them to `func`.
-        This is particularly useful to make numpy-only functions, e.g. written in Cython
+        This is particularly useful to make NumPy-only functions, e.g. written in Cython
or Numba, work transparently with array API-compliant arrays.
Default: False.
xp : array_namespace, optional
@@ -143,8 +143,8 @@ def lazy_apply(  # type: ignore[valid-type]  # numpydoc ignore=GL07,SA04
<https://sparse.pydata.org/en/stable/operations.html#package-configuration>`_.

Dask
-        This allows applying eager functions to dask arrays.
-        The dask graph won't be computed.
+        This allows applying eager functions to Dask arrays.
+        The Dask graph won't be computed.

`lazy_apply` doesn't know if `func` reduces along any axes; also, shape
changes are non-trivial in chunked Dask arrays. For these reasons, all inputs
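Conceptually, `as_numpy=True` coerces each non-None argument with `numpy.asarray` before calling `func`. A hand-rolled sketch with a hypothetical NumPy-only kernel (this is an illustration of the idea, not the actual `lazy_apply` code path):

```python
import numpy as np

def numpy_only_kernel(a: np.ndarray) -> np.ndarray:
    """Stand-in for a Cython/Numba routine that only accepts np.ndarray."""
    return np.cumsum(a)

def call_as_numpy(func, *args):
    # Roughly what as_numpy=True does: coerce every non-None arg to NumPy.
    converted = [np.asarray(a) if a is not None else None for a in args]
    return func(*converted)

res = call_as_numpy(numpy_only_kernel, [1, 2, 3])
print(res)  # [1 3 6]
```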
4 changes: 2 additions & 2 deletions src/array_api_extra/testing.py
@@ -63,12 +63,12 @@ def lazy_xp_function(  # type: ignore[explicit-any]
Number of times `func` is allowed to internally materialize the Dask graph. This
is typically triggered by ``bool()``, ``float()``, or ``np.asarray()``.

-        Set to 1 if you are aware that `func` converts the input parameters to numpy and
+        Set to 1 if you are aware that `func` converts the input parameters to NumPy and
want to let it do so at least for the time being, knowing that it is going to be
extremely detrimental for performance.

If a test needs values higher than 1 to pass, it is a canary that the conversion
-        to numpy/bool/float is happening multiple times, which translates to multiple
+        to NumPy/bool/float is happening multiple times, which translates to multiple
computations of the whole graph. Short of making the function fully lazy, you
should at least add explicit calls to ``np.asarray()`` early in the function.
*Note:* the counter of `allow_dask_compute` resets after each call to `func`, so
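The "canary" behaviour can be mimicked with a toy lazy value that counts its own materializations; a count above 1 is exactly the signal the docstring describes. A hypothetical sketch, with no Dask involved:

```python
class CountingLazy:
    """Toy lazy value that counts how many times it is materialized."""
    def __init__(self, value):
        self.value = value
        self.computes = 0

    def __float__(self):  # float(x) forces a "computation"
        self.computes += 1
        return float(self.value)

def leaky(x):
    # Each float() here stands in for one graph materialization --
    # the pattern allow_dask_compute is designed to catch.
    if float(x) < 0:
        raise ValueError("negative")
    return float(x) * 2

x = CountingLazy(3)
y = leaky(x)
print(x.computes)  # 2 -- this function would need allow_dask_compute=2
```

The fix suggested above applies here too: materialize once early (one `float()` call, stored in a local) instead of twice.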
2 changes: 1 addition & 1 deletion tests/conftest.py
@@ -55,7 +55,7 @@ class NumPyReadOnly:
"""
Variant of array_api_compat.numpy producing read-only arrays.

-    Read-only numpy arrays fail on `__iadd__` etc., whereas read-only libraries such as
+    Read-only NumPy arrays fail on `__iadd__` etc., whereas read-only libraries such as
JAX and Sparse simply don't define those methods, which makes calls to `+=` fall
back to `__add__`.

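The `__iadd__` distinction is easy to demonstrate: NumPy defines the in-place operator and refuses it on read-only data, while a wrapper without `__iadd__` makes `+=` silently fall back to `__add__`. A sketch with a hypothetical `NoInplace` class standing in for JAX/Sparse-style arrays:

```python
import numpy as np

x = np.arange(3)
x.flags.writeable = False

# ndarray defines __iadd__, so += attempts an in-place update and fails:
refused = False
try:
    x += 1
except ValueError:   # "output array is read-only"
    refused = True

# A type with no __iadd__ makes += fall back to __add__, like JAX/Sparse:
class NoInplace:
    def __init__(self, data):
        self.data = data
    def __add__(self, other):
        return NoInplace(self.data + other)  # returns a new object

y = NoInplace(np.arange(3))
y += 1             # silently rebinds y to a fresh NoInplace
print(y.data)      # [1 2 3]
```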
2 changes: 1 addition & 1 deletion tests/test_at.py
@@ -318,7 +318,7 @@ def test_gh134(xp: ModuleType, bool_mask: bool, copy: bool | None):
"""
x = xp.zeros(1)

-    # In numpy, we have a writeable np.ndarray in input and a read-only np.generic in
+    # In NumPy, we have a writeable np.ndarray in input and a read-only np.generic in
# output. As both are Arrays, this behaviour is Array API compliant.
# In Dask, we have a writeable da.Array on both sides, and if you call __setitem__
# on it all seems fine, but when you compute() your graph is corrupted.
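One way to see the ndarray-in, `np.generic`-out distinction mentioned in the comment is scalar indexing in plain NumPy, which hands back an immutable 0-D scalar:

```python
import numpy as np

x = np.zeros(1)
elem = x[0]  # scalar indexing returns np.float64, an np.generic

print(isinstance(x, np.ndarray))      # True  -- writeable input
print(isinstance(elem, np.generic))   # True  -- immutable scalar output
```

Both count as "Arrays" for the Array API, which is why the behaviour described above is compliant.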
8 changes: 4 additions & 4 deletions tests/test_lazy.py
@@ -47,7 +47,7 @@ def f(x: Array) -> Array:
assert xp2 is xp

y = xp2.broadcast_to(xp2.astype(x + 1, getattr(xp2, dtype)), shape)
-        return xp2.asarray(y, copy=True)  # Torch: ensure writeable numpy array
+        return xp2.asarray(y, copy=True)  # PyTorch: ensure writeable NumPy array

x = xp.asarray([1, 2], dtype=xp.int16)
expect = xp.broadcast_to(xp.astype(x + 1, getattr(xp, dtype)), shape)
@@ -74,7 +74,7 @@ def f(x: Array) -> tuple[Array, Array]:
xp2 = array_namespace(x)
y = x + xp2.asarray(2, dtype=xp2.int8) # Sparse: bad dtype propagation
z = xp2.broadcast_to(xp2.astype(x + 1, xp2.int16), (3, 2))
-        z = xp2.asarray(z, copy=True)  # Torch: ensure writeable numpy array
+        z = xp2.asarray(z, copy=True)  # PyTorch: ensure writeable NumPy array
return y, z

x = xp.asarray([1, 2], dtype=xp.int8)
@@ -166,8 +166,8 @@ def f(x: Array) -> Array:


def test_lazy_apply_dask_non_numpy_meta(da: ModuleType):
-    """Test dask wrapping around a meta-namespace other than numpy."""
-    # At the moment of writing, of all Array API namespaces cupy is
+    """Test Dask wrapping around a meta-namespace other than numpy."""
+    # At the moment of writing, of all Array API namespaces CuPy is
# the only one that Dask supports.
# For this reason, we can only test as_numpy=False since
# np.asarray(cp.Array) is blocked by the transfer guard.
18 changes: 9 additions & 9 deletions tests/test_testing.py
@@ -74,7 +74,7 @@ def test_assert_close_tolerance(xp: ModuleType):
@param_assert_equal_close
@pytest.mark.xfail_xp_backend(Backend.SPARSE, reason="index by sparse array")
def test_assert_close_equal_none_shape(xp: ModuleType, func: Callable[..., None]): # type: ignore[explicit-any]
-    """On dask and other lazy backends, test that a shape with NaN's or None's
+    """On Dask and other lazy backends, test that a shape with NaN's or None's
can be compared to a real shape.
"""
a = xp.asarray([1, 2])
@@ -99,18 +99,18 @@ def test_assert_close_equal_none_shape(xp: ModuleType, func: Callable[..., None]


def good_lazy(x: Array) -> Array:
-    """A function that behaves well in dask and jax.jit"""
+    """A function that behaves well in Dask and jax.jit"""
return x * 2.0


def non_materializable(x: Array) -> Array:
"""
This function materializes the input array, so it will fail when wrapped in jax.jit
-    and it will trigger an expensive computation in dask.
+    and it will trigger an expensive computation in Dask.
"""
xp = array_namespace(x)
# Crashes inside jax.jit
-    # On dask, this triggers two computations of the whole graph
+    # On Dask, this triggers two computations of the whole graph
if xp.any(x < 0.0) or xp.any(x > 10.0):
msg = "Values must be in the [0, 10] range"
raise ValueError(msg)
@@ -217,20 +217,20 @@ def test_lazy_xp_function_static_params(xp: ModuleType, func: Callable[..., Arra
erf = None


-@pytest.mark.filterwarnings("ignore:__array_wrap__:DeprecationWarning")  # torch
+@pytest.mark.filterwarnings("ignore:__array_wrap__:DeprecationWarning")  # PyTorch
def test_lazy_xp_function_cython_ufuncs(xp: ModuleType, library: Backend):
pytest.importorskip("scipy")
assert erf is not None
x = xp.asarray([6.0, 7.0])
if library in (Backend.ARRAY_API_STRICT, Backend.JAX):
-        # array-api-strict arrays are auto-converted to numpy
+        # array-api-strict arrays are auto-converted to NumPy
# which results in an assertion error for mismatched namespaces
-        # eager jax arrays are auto-converted to numpy in eager jax
+        # eager JAX arrays are auto-converted to NumPy in eager JAX
# and fail in jax.jit (which lazy_xp_function tests here)
with pytest.raises((TypeError, AssertionError)):
xp_assert_equal(cast(Array, erf(x)), xp.asarray([1.0, 1.0]))
else:
-        # cupy, dask and sparse define __array_ufunc__ and dispatch accordingly
+        # CuPy, Dask and sparse define __array_ufunc__ and dispatch accordingly
# note that when sparse reduces to scalar it returns a np.generic, which
# would make xp_assert_equal fail.
xp_assert_equal(cast(Array, erf(x)), xp.asarray([1.0, 1.0]))
@@ -271,7 +271,7 @@ def test_lazy_xp_function_eagerly_raises(da: ModuleType):

def f(x: Array) -> Array:
xp = array_namespace(x)
-        # Crash in jax.jit and trigger compute() on dask
+        # Crash in jax.jit and trigger compute() on Dask
if not xp.all(x):
msg = "Values must be non-zero"
raise ValueError(msg)