forked from pydata/xarray

Commit 4489394 (merge, 2 parents: 279ff1d + b74f80c)

Merge remote-tracking branch 'upstream/master' into fix/plot-broadcast

* upstream/master:
  format indexing.rst code with black (pydata#3511)
  add missing pint integration tests (pydata#3508)
  DOC: update bottleneck repo url (pydata#3507)
  add drop_sel, drop_vars, map to api.rst (pydata#3506)
  remove syntax warning (pydata#3505)
  Dataset.map, GroupBy.map, Resample.map (pydata#3459)
  tests for datasets with units (pydata#3447)
  fix pandas-dev tests (pydata#3491)
  unpin pseudonetcdf (pydata#3496)
  whatsnew corrections (pydata#3494)
  drop_vars; deprecate drop for variables (pydata#3475)
  uamiv test using only raw uamiv variables (pydata#3485)
  Optimize dask array equality checks. (pydata#3453)
  Propagate indexes in DataArray binary operations. (pydata#3481)
  python 3.8 tests (pydata#3477)

37 files changed: +2703 −417 lines

azure-pipelines.yml (+2)

@@ -18,6 +18,8 @@ jobs:
       conda_env: py36
     py37:
       conda_env: py37
+    py38:
+      conda_env: py38
     py37-upstream-dev:
       conda_env: py37
       upstream_dev: true

ci/azure/install.yml (+1 −1)

@@ -16,7 +16,7 @@ steps:
       --pre \
       --upgrade \
       matplotlib \
-      pandas=0.26.0.dev0+628.g03c1a3db2 \ # FIXME https://github.com/pydata/xarray/issues/3440
+      pandas \
       scipy
       # numpy \ # FIXME https://github.com/pydata/xarray/issues/3409
     pip install \

ci/requirements/py36.yml (+1 −1)

@@ -29,7 +29,7 @@ dependencies:
   - pandas
   - pint
   - pip
-  - pseudonetcdf<3.1 # FIXME https://github.com/pydata/xarray/issues/3409
+  - pseudonetcdf
   - pydap
   - pynio
   - pytest

ci/requirements/py37-windows.yml (+1 −1)

@@ -29,7 +29,7 @@ dependencies:
   - pandas
   - pint
   - pip
-  - pseudonetcdf<3.1 # FIXME https://github.com/pydata/xarray/issues/3409
+  - pseudonetcdf
   - pydap
   # - pynio # Not available on Windows
   - pytest

ci/requirements/py37.yml (+1 −1)

@@ -29,7 +29,7 @@ dependencies:
   - pandas
   - pint
   - pip
-  - pseudonetcdf<3.1 # FIXME https://github.com/pydata/xarray/issues/3409
+  - pseudonetcdf
   - pydap
   - pynio
   - pytest

ci/requirements/py38.yml (+15, new file)

@@ -0,0 +1,15 @@
+name: xarray-tests
+channels:
+  - conda-forge
+dependencies:
+  - python=3.8
+  - pip
+  - pip:
+    - coveralls
+    - dask
+    - distributed
+    - numpy
+    - pandas
+    - pytest
+    - pytest-cov
+    - pytest-env

doc/api.rst (+8 −6)

@@ -94,7 +94,7 @@ Dataset contents
    Dataset.rename_dims
    Dataset.swap_dims
    Dataset.expand_dims
-   Dataset.drop
+   Dataset.drop_vars
    Dataset.drop_dims
    Dataset.set_coords
    Dataset.reset_coords
@@ -118,6 +118,7 @@ Indexing
    Dataset.loc
    Dataset.isel
    Dataset.sel
+   Dataset.drop_sel
    Dataset.head
    Dataset.tail
    Dataset.thin
@@ -154,7 +155,7 @@ Computation
 .. autosummary::
    :toctree: generated/

-   Dataset.apply
+   Dataset.map
    Dataset.reduce
    Dataset.groupby
    Dataset.groupby_bins
@@ -263,7 +264,7 @@ DataArray contents
    DataArray.rename
    DataArray.swap_dims
    DataArray.expand_dims
-   DataArray.drop
+   DataArray.drop_vars
    DataArray.reset_coords
    DataArray.copy

@@ -283,6 +284,7 @@ Indexing
    DataArray.loc
    DataArray.isel
    DataArray.sel
+   DataArray.drop_sel
    DataArray.head
    DataArray.tail
    DataArray.thin
@@ -542,10 +544,10 @@ GroupBy objects
    :toctree: generated/

    core.groupby.DataArrayGroupBy
-   core.groupby.DataArrayGroupBy.apply
+   core.groupby.DataArrayGroupBy.map
    core.groupby.DataArrayGroupBy.reduce
    core.groupby.DatasetGroupBy
-   core.groupby.DatasetGroupBy.apply
+   core.groupby.DatasetGroupBy.map
    core.groupby.DatasetGroupBy.reduce

 Rolling objects
@@ -566,7 +568,7 @@ Resample objects
 ================

 Resample objects also implement the GroupBy interface
-(methods like ``apply()``, ``reduce()``, ``mean()``, ``sum()``, etc.).
+(methods like ``map()``, ``reduce()``, ``mean()``, ``sum()``, etc.).

 .. autosummary::
    :toctree: generated/
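The ``drop_sel`` entries added to the API reference above can be illustrated with a small example. This is not part of the diff; it is a minimal sketch with a hypothetical dataset, assuming xarray ≥ 0.14.1 (where ``drop_sel`` was introduced):

```python
import xarray as xr

# Hypothetical dataset, for illustration only.
ds = xr.Dataset({"v": ("x", [1, 2, 3])}, coords={"x": [10, 20, 30]})

# sel selects index labels; drop_sel is its complement and removes them.
kept = ds.sel(x=[10, 30])
dropped = ds.drop_sel(x=20)

assert kept.equals(dropped)
```

Both calls return a new ``Dataset`` with the label ``x=20`` gone; the original ``ds`` is unchanged.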

doc/computation.rst (+3 −3)

@@ -183,7 +183,7 @@ a value when aggregating:

 Note that rolling window aggregations are faster and use less memory when bottleneck_ is installed. This only applies to numpy-backed xarray objects.

-.. _bottleneck: https://github.com/kwgoodman/bottleneck/
+.. _bottleneck: https://github.com/pydata/bottleneck/

 We can also manually iterate through ``Rolling`` objects:

@@ -462,13 +462,13 @@ Datasets support most of the same methods found on data arrays:
     abs(ds)

 Datasets also support NumPy ufuncs (requires NumPy v1.13 or newer), or
-alternatively you can use :py:meth:`~xarray.Dataset.apply` to apply a function
+alternatively you can use :py:meth:`~xarray.Dataset.map` to map a function
 to each variable in a dataset:

 .. ipython:: python

     np.sin(ds)
-    ds.apply(np.sin)
+    ds.map(np.sin)

 Datasets also use looping over variables for *broadcasting* in binary
 arithmetic. You can do arithmetic between any ``DataArray`` and a dataset:
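The ``ds.apply(np.sin)`` → ``ds.map(np.sin)`` rename above can be checked end to end. This sketch is not part of the diff; the dataset and variable names are hypothetical, and it assumes xarray ≥ 0.14.1 where ``Dataset.map`` exists:

```python
import numpy as np
import xarray as xr

# Hypothetical two-variable dataset, for illustration only.
ds = xr.Dataset({"a": ("x", [0.0, 1.0, 2.0]), "b": ("x", [3.0, 4.0, 5.0])})

# A NumPy ufunc acts elementwise on every variable directly...
via_ufunc = np.sin(ds)
# ...and Dataset.map applies an arbitrary function to each variable.
via_map = ds.map(np.sin)

assert via_ufunc.equals(via_map)
```

``map`` is the more general tool: it accepts any function of a ``DataArray``, not only ufuncs.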

doc/dask.rst (+1 −1)

@@ -292,7 +292,7 @@ For the best performance when using Dask's multi-threaded scheduler, wrap a
 function that already releases the global interpreter lock, which fortunately
 already includes most NumPy and Scipy functions. Here we show an example
 using NumPy operations and a fast function from
-`bottleneck <https://github.com/kwgoodman/bottleneck>`__, which
+`bottleneck <https://github.com/pydata/bottleneck>`__, which
 we use to calculate `Spearman's rank-correlation coefficient <https://en.wikipedia.org/wiki/Spearman%27s_rank_correlation_coefficient>`__:

 .. code-block:: python

doc/data-structures.rst (+2 −2)

@@ -393,14 +393,14 @@ methods (like pandas) for transforming datasets into new objects.

 For removing variables, you can select and drop an explicit list of
 variables by indexing with a list of names or using the
-:py:meth:`~xarray.Dataset.drop` methods to return a new ``Dataset``. These
+:py:meth:`~xarray.Dataset.drop_vars` methods to return a new ``Dataset``. These
 operations keep around coordinates:

 .. ipython:: python

     ds[['temperature']]
     ds[['temperature', 'temperature_double']]
-    ds.drop('temperature')
+    ds.drop_vars('temperature')

 To remove a dimension, you can use :py:meth:`~xarray.Dataset.drop_dims` method.
 Any variables using that dimension are dropped:
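The ``drop`` → ``drop_vars`` change above can be sketched with a self-contained example. Not part of the diff; the dataset here is hypothetical, and it assumes xarray ≥ 0.14.1 where ``drop_vars`` exists:

```python
import xarray as xr

# Hypothetical dataset; variable names for illustration only.
ds = xr.Dataset(
    {"temperature": ("x", [11.0, 12.0]), "humidity": ("x", [0.3, 0.4])},
    coords={"x": [10, 20]},
)

# drop_vars returns a new Dataset without the named variable;
# coordinates are kept around.
smaller = ds.drop_vars("temperature")
assert set(smaller.data_vars) == {"humidity"}
assert "x" in smaller.coords
```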

doc/groupby.rst (+8 −7)

@@ -35,10 +35,11 @@ Let's create a simple example dataset:

 .. ipython:: python

-    ds = xr.Dataset({'foo': (('x', 'y'), np.random.rand(4, 3))},
-                    coords={'x': [10, 20, 30, 40],
-                            'letters': ('x', list('abba'))})
-    arr = ds['foo']
+    ds = xr.Dataset(
+        {"foo": (("x", "y"), np.random.rand(4, 3))},
+        coords={"x": [10, 20, 30, 40], "letters": ("x", list("abba"))},
+    )
+    arr = ds["foo"]
     ds

 If we groupby the name of a variable or coordinate in a dataset (we can also
@@ -93,15 +94,15 @@ Apply
 ~~~~~

 To apply a function to each group, you can use the flexible
-:py:meth:`~xarray.DatasetGroupBy.apply` method. The resulting objects are automatically
+:py:meth:`~xarray.DatasetGroupBy.map` method. The resulting objects are automatically
 concatenated back together along the group axis:

 .. ipython:: python

     def standardize(x):
         return (x - x.mean()) / x.std()

-    arr.groupby('letters').apply(standardize)
+    arr.groupby('letters').map(standardize)

 GroupBy objects also have a :py:meth:`~xarray.DatasetGroupBy.reduce` method and
 methods like :py:meth:`~xarray.DatasetGroupBy.mean` as shortcuts for applying an
@@ -202,7 +203,7 @@ __ http://cfconventions.org/cf-conventions/v1.6.0/cf-conventions.html#_two_dimen
         dims=['ny','nx'])
     da
     da.groupby('lon').sum(...)
-    da.groupby('lon').apply(lambda x: x - x.mean(), shortcut=False)
+    da.groupby('lon').map(lambda x: x - x.mean(), shortcut=False)

 Because multidimensional groups have the ability to generate a very large
 number of bins, coarse-binning via :py:meth:`~xarray.Dataset.groupby_bins`
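The renamed ``GroupBy.map`` used in the doc hunk above can be exercised as a complete script. Not part of the diff; a minimal sketch reusing the same example data shape as the docs (values are random), assuming xarray ≥ 0.14.1:

```python
import numpy as np
import xarray as xr

# Same layout as the documentation example above; values are random.
ds = xr.Dataset(
    {"foo": (("x", "y"), np.random.rand(4, 3))},
    coords={"x": [10, 20, 30, 40], "letters": ("x", list("abba"))},
)
arr = ds["foo"]

def standardize(x):
    return (x - x.mean()) / x.std()

# map applies the function per group and concatenates the results
# back along the grouped dimension, preserving the original shape.
out = arr.groupby("letters").map(standardize)
assert out.shape == arr.shape
```

Within each letter group the standardized values have mean zero, which is a quick sanity check that the per-group application worked.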

doc/howdoi.rst (+1 −1)

@@ -44,7 +44,7 @@ How do I ...
    * - convert a possibly irregularly sampled timeseries to a regularly sampled timeseries
      - :py:meth:`DataArray.resample`, :py:meth:`Dataset.resample` (see :ref:`resampling` for more)
    * - apply a function on all data variables in a Dataset
-     - :py:meth:`Dataset.apply`
+     - :py:meth:`Dataset.map`
    * - write xarray objects with complex values to a netCDF file
      - :py:func:`Dataset.to_netcdf`, :py:func:`DataArray.to_netcdf` specifying ``engine="h5netcdf", invalid_netcdf=True``
    * - make xarray objects look like other xarray objects
