forked from pydata/xarray

Commit 9706b5a (2 parents: d430ae0 + 8d09879)

Merge branch 'master' into fix/plot-broadcast

* master: (24 commits)
  - Tweaks to release instructions (pydata#3555)
  - Clarify conda environments for new contributors (pydata#3551)
  - Revert to dev version 0.14.1 whatsnew (pydata#3547)
  - sparse option to reindex and unstack (pydata#3542)
  - Silence sphinx warnings (pydata#3516)
  - Numpy 1.18 support (pydata#3537)
  - tweak whats-new. (pydata#3540)
  - small simplification of rename from pydata#3532 (pydata#3539)
  - Added fill_value for unstack (pydata#3541)
  - Add DatasetGroupBy.quantile (pydata#3527)
  - ensure rename does not change index type (pydata#3532)
  - Leave empty slot when not using accessors
  - interpolate_na: Add max_gap support. (pydata#3302)
  - units & deprecation merge (pydata#3530)
  - Fix set_index when an existing dimension becomes a level (pydata#3520)
  - add Variable._replace (pydata#3528)
  - Tests for module-level functions with units (pydata#3493)
  - Harmonize `FillValue` and `missing_value` during encoding and decoding steps (pydata#3502)
  - FUNDING.yml (pydata#3523)
  - ...


43 files changed: +2208 -430 lines

.github/FUNDING.yml (+2)

@@ -0,0 +1,2 @@
+github: numfocus
+custom: http://numfocus.org/donate-to-xarray

HOW_TO_RELEASE renamed to HOW_TO_RELEASE.md (+40 -11)

@@ -1,9 +1,11 @@
-How to issue an xarray release in 15 easy steps
+How to issue an xarray release in 14 easy steps
 
 Time required: about an hour.
 
 1. Ensure your master branch is synced to upstream:
-     git pull upstream master
+   ```
+   git pull upstream master
+   ```
 2. Look over whats-new.rst and the docs. Make sure "What's New" is complete
    (check the date!) and consider adding a brief summary note describing the
    release at the top.
@@ -12,37 +14,53 @@ Time required: about an hour.
    - Function/method references should include links to the API docs.
    - Sometimes notes get added in the wrong section of whats-new, typically
      due to a bad merge. Check for these before a release by using git diff,
-     e.g., ``git diff v0.X.Y whats-new.rst`` where 0.X.Y is the previous
+     e.g., `git diff v0.X.Y whats-new.rst` where 0.X.Y is the previous
      release.
 3. If you have any doubts, run the full test suite one final time!
-     py.test
+   ```
+   pytest
+   ```
 4. On the master branch, commit the release in git:
+   ```
    git commit -a -m 'Release v0.X.Y'
+   ```
 5. Tag the release:
+   ```
    git tag -a v0.X.Y -m 'v0.X.Y'
+   ```
 6. Build source and binary wheels for pypi:
+   ```
    git clean -xdf  # this deletes all uncommited changes!
    python setup.py bdist_wheel sdist
+   ```
 7. Use twine to register and upload the release on pypi. Be careful, you can't
    take this back!
+   ```
    twine upload dist/xarray-0.X.Y*
+   ```
    You will need to be listed as a package owner at
    https://pypi.python.org/pypi/xarray for this to work.
 8. Push your changes to master:
+   ```
    git push upstream master
    git push upstream --tags
+   ```
 9. Update the stable branch (used by ReadTheDocs) and switch back to master:
+   ```
    git checkout stable
    git rebase master
    git push upstream stable
    git checkout master
-   It's OK to force push to 'stable' if necessary.
-   We also update the stable branch with `git cherrypick` for documentation
-   only fixes that apply the current released version.
+   ```
+   It's OK to force push to 'stable' if necessary. (We also update the stable
+   branch with `git cherrypick` for documentation only fixes that apply the
+   current released version.)
 10. Add a section for the next release (v.X.(Y+1)) to doc/whats-new.rst.
 11. Commit your changes and push to master again:
-      git commit -a -m 'Revert to dev version'
+    ```
+    git commit -a -m 'New whatsnew section'
     git push upstream master
+    ```
     You're done pushing to master!
 12. Issue the release on GitHub. Click on "Draft a new release" at
     https://github.com/pydata/xarray/releases. Type in the version number, but
@@ -53,11 +71,22 @@ Time required: about an hour.
 14. Issue the release announcement! For bug fix releases, I usually only email
     [email protected]. For major/feature releases, I will email a broader
     list (no more than once every 3-6 months):
-
-
-
+
+
+
+
+
+
     Google search will turn up examples of prior release announcements (look for
     "ANN xarray").
+    You can get a list of contributors with:
+    ```
+    git log "$(git tag --sort="v:refname" | sed -n 'x;$p').." --format="%aN" | sort -u
+    ```
+    or by replacing `v0.X.Y` with the _previous_ release in:
+    ```
+    git log v0.X.Y.. --format="%aN" | sort -u
+    ```
 
 Note on version numbering:
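The contributor-list command added in the new step 14 combines `git tag --sort="v:refname"` (version-sorted tags) with `sed -n 'x;$p'`, which prints the second-to-last line, i.e. the previous release tag. A pure-Python sketch of that selection step, using a hypothetical tag list (not part of the release instructions themselves):

```python
# Sketch of the previous-tag selection done by
#   git tag --sort="v:refname" | sed -n 'x;$p'
# The sed program keeps one line of lookbehind in the hold space and, at
# end of input, prints the second-to-last line: the release *before* the
# tag that was just created.

tags = ["v0.12.0", "v0.13.0", "v0.14.0", "v0.14.1"]  # hypothetical, version-sorted

previous_release = tags[-2]  # second-to-last entry
print(previous_release)      # v0.14.0
```

With that tag in hand, `git log <previous_release>.. --format="%aN" | sort -u` lists everyone who authored a commit since the prior release.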

ci/azure/install.yml (+1 -1)

@@ -16,9 +16,9 @@ steps:
       --pre \
       --upgrade \
       matplotlib \
+      numpy \
       pandas \
       scipy
-      # numpy \  # FIXME https://github.com/pydata/xarray/issues/3409
     pip install \
       --no-deps \
       --upgrade \

ci/requirements/py36.yml (+1 -1)

@@ -25,7 +25,7 @@ dependencies:
   - nc-time-axis
   - netcdf4
   - numba
-  - numpy<1.18  # FIXME https://github.com/pydata/xarray/issues/3409
+  - numpy
   - pandas
   - pint
   - pip

ci/requirements/py37.yml (+1 -1)

@@ -25,7 +25,7 @@ dependencies:
   - nc-time-axis
   - netcdf4
   - numba
-  - numpy<1.18  # FIXME https://github.com/pydata/xarray/issues/3409
+  - numpy
   - pandas
   - pint
   - pip

doc/README.rst (+2)

@@ -1,3 +1,5 @@
+:orphan:
+
 xarray
 ------
 

doc/api-hidden.rst (+5)

@@ -2,6 +2,8 @@
 .. This extra page is a work around for sphinx not having any support for
 .. hiding an autosummary table.
 
+:orphan:
+
 .. currentmodule:: xarray
 
 .. autosummary::
@@ -30,9 +32,11 @@
    core.groupby.DatasetGroupBy.first
    core.groupby.DatasetGroupBy.last
    core.groupby.DatasetGroupBy.fillna
+   core.groupby.DatasetGroupBy.quantile
    core.groupby.DatasetGroupBy.where
 
    Dataset.argsort
+   Dataset.astype
    Dataset.clip
    Dataset.conj
    Dataset.conjugate
@@ -71,6 +75,7 @@
    core.groupby.DataArrayGroupBy.first
    core.groupby.DataArrayGroupBy.last
    core.groupby.DataArrayGroupBy.fillna
+   core.groupby.DataArrayGroupBy.quantile
    core.groupby.DataArrayGroupBy.where
 
    DataArray.argsort

doc/combining.rst (+3 -3)

@@ -255,11 +255,11 @@ Combining along multiple dimensions
 ``combine_nested``.
 
 For combining many objects along multiple dimensions xarray provides
-:py:func:`~xarray.combine_nested`` and :py:func:`~xarray.combine_by_coords`. These
+:py:func:`~xarray.combine_nested` and :py:func:`~xarray.combine_by_coords`. These
 functions use a combination of ``concat`` and ``merge`` across different
 variables to combine many objects into one.
 
-:py:func:`~xarray.combine_nested`` requires specifying the order in which the
+:py:func:`~xarray.combine_nested` requires specifying the order in which the
 objects should be combined, while :py:func:`~xarray.combine_by_coords` attempts to
 infer this ordering automatically from the coordinates in the data.
 
@@ -310,4 +310,4 @@ These functions can be used by :py:func:`~xarray.open_mfdataset` to open many
 files as one dataset. The particular function used is specified by setting the
 argument ``'combine'`` to ``'by_coords'`` or ``'nested'``. This is useful for
 situations where your data is split across many files in multiple locations,
-which have some known relationship between one another.
+which have some known relationship between one another.

doc/computation.rst (+6 -3)

@@ -95,6 +95,9 @@ for filling missing values via 1D interpolation.
 Note that xarray slightly diverges from the pandas ``interpolate`` syntax by
 providing the ``use_coordinate`` keyword which facilitates a clear specification
 of which values to use as the index in the interpolation.
+xarray also provides the ``max_gap`` keyword argument to limit the interpolation to
+data gaps of length ``max_gap`` or smaller. See :py:meth:`~xarray.DataArray.interpolate_na`
+for more.
 
 Aggregation
 ===========
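The ``max_gap`` behaviour documented in the hunk above can be sketched in plain Python. This is a hand-rolled illustration of the idea only, not xarray's implementation: NaN runs are filled by linear interpolation when, and only when, the run is no longer than ``max_gap``.

```python
import math

def interpolate_small_gaps(values, max_gap):
    """Linearly fill interior NaN runs of length <= max_gap; leave longer runs alone."""
    out = list(values)
    n = len(out)
    i = 0
    while i < n:
        if math.isnan(out[i]):
            start = i
            while i < n and math.isnan(out[i]):
                i += 1
            gap = i - start  # the NaN run occupies [start, i)
            # Only fill interior gaps that are short enough; runs touching
            # either end of the data have no anchor on one side.
            if 0 < start and i < n and gap <= max_gap:
                left, right = out[start - 1], out[i]
                for k in range(start, i):
                    frac = (k - start + 1) / (gap + 1)
                    out[k] = left + frac * (right - left)
        else:
            i += 1
    return out

nan = float("nan")
data = [0.0, nan, 2.0, nan, nan, nan, 6.0]
filled = interpolate_small_gaps(data, max_gap=1)
print(filled)  # the single-NaN gap becomes 1.0; the 3-NaN run stays NaN
```

The real :py:meth:`~xarray.DataArray.interpolate_na` measures gap length along the coordinate, not by element count, so this sketch only captures the shape of the idea.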
@@ -322,8 +325,8 @@ Broadcasting by dimension name
 ``DataArray`` objects are automatically align themselves ("broadcasting" in
 the numpy parlance) by dimension name instead of axis order. With xarray, you
 do not need to transpose arrays or insert dimensions of length 1 to get array
-operations to work, as commonly done in numpy with :py:func:`np.reshape` or
-:py:const:`np.newaxis`.
+operations to work, as commonly done in numpy with :py:func:`numpy.reshape` or
+:py:data:`numpy.newaxis`.
 
 This is best illustrated by a few examples. Consider two one-dimensional
 arrays with different sizes aligned along different dimensions:
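To see the numpy side of the contrast drawn in that paragraph, here is a minimal numpy-only sketch (the array names are made up): combining a size-3 and a size-4 array requires inserting a length-1 axis by hand, which is exactly the bookkeeping xarray's named dimensions remove.

```python
import numpy as np

# Two hypothetical 1-D arrays living on different logical dimensions.
x = np.arange(3)  # imagine a dimension "x"
y = np.arange(4)  # imagine a dimension "y"

# Plain numpy: shapes (3,) and (4,) do not broadcast against each other,
# so a length-1 axis must be inserted manually with np.newaxis first.
outer = x[:, np.newaxis] + y[np.newaxis, :]
print(outer.shape)  # (3, 4)
```

With xarray ``DataArray`` objects carrying dimension names, the equivalent ``+`` broadcasts by name with no ``newaxis`` or ``reshape`` needed.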
@@ -563,7 +566,7 @@ to set ``axis=-1``. As an example, here is how we would wrap
 Because ``apply_ufunc`` follows a standard convention for ufuncs, it plays
 nicely with tools for building vectorized functions, like
-:func:`numpy.broadcast_arrays` and :func:`numpy.vectorize`. For high performance
+:py:func:`numpy.broadcast_arrays` and :py:class:`numpy.vectorize`. For high performance
 needs, consider using Numba's :doc:`vectorize and guvectorize <numba:user/vectorize>`.
 
 In addition to wrapping functions, ``apply_ufunc`` can automatically parallelize

doc/conf.py (+7 -5)

@@ -340,9 +340,11 @@
 # Example configuration for intersphinx: refer to the Python standard library.
 intersphinx_mapping = {
     "python": ("https://docs.python.org/3/", None),
-    "pandas": ("https://pandas.pydata.org/pandas-docs/stable/", None),
-    "iris": ("http://scitools.org.uk/iris/docs/latest/", None),
-    "numpy": ("https://docs.scipy.org/doc/numpy/", None),
-    "numba": ("https://numba.pydata.org/numba-doc/latest/", None),
-    "matplotlib": ("https://matplotlib.org/", None),
+    "pandas": ("https://pandas.pydata.org/pandas-docs/stable", None),
+    "iris": ("https://scitools.org.uk/iris/docs/latest", None),
+    "numpy": ("https://docs.scipy.org/doc/numpy", None),
+    "scipy": ("https://docs.scipy.org/doc/scipy/reference", None),
+    "numba": ("https://numba.pydata.org/numba-doc/latest", None),
+    "matplotlib": ("https://matplotlib.org", None),
+    "dask": ("https://docs.dask.org/en/latest", None),
 }
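For each entry in ``intersphinx_mapping``, Sphinx downloads an ``objects.inv`` inventory from the base URL, which is why the trailing-slash normalization in this hunk is harmless. A small illustrative sketch of how the inventory URL is derived from a mapping entry (not code from this commit):

```python
# Sphinx's intersphinx extension fetches <base>/objects.inv for each
# project in intersphinx_mapping; None means "use the default inventory name".
intersphinx_mapping = {
    "numpy": ("https://docs.scipy.org/doc/numpy", None),
    "dask": ("https://docs.dask.org/en/latest", None),
}

inventory_urls = {
    name: base.rstrip("/") + "/objects.inv"
    for name, (base, _inv) in intersphinx_mapping.items()
}
print(inventory_urls["numpy"])  # https://docs.scipy.org/doc/numpy/objects.inv
```

Adding the ``scipy`` and ``dask`` entries is what lets roles like ``:py:func:`dask.array.map_blocks``` (used in doc/dask.rst below) resolve to external API docs.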

doc/contributing.rst (+3 -1)

@@ -151,7 +151,9 @@ We'll now kick off a two-step process:
 .. code-block:: none
 
     # Create and activate the build environment
-    conda env create -f ci/requirements/py36.yml
+    # This is for Linux and MacOS. On Windows, use py37-windows.yml instead.
+    conda env create -f ci/requirements/py37.yml
+
     conda activate xarray-tests
 
     # or with older versions of Anaconda:

doc/dask.rst (+1 -1)

@@ -285,7 +285,7 @@ automate `embarrassingly parallel
 <https://en.wikipedia.org/wiki/Embarrassingly_parallel>`__ "map" type operations
 where a function written for processing NumPy arrays should be repeatedly
 applied to xarray objects containing Dask arrays. It works similarly to
-:py:func:`dask.array.map_blocks` and :py:func:`dask.array.atop`, but without
+:py:func:`dask.array.map_blocks` and :py:func:`dask.array.blockwise`, but without
 requiring an intermediate layer of abstraction.
 
 For the best performance when using Dask's multi-threaded scheduler, wrap a

doc/data-structures.rst (+3 -3)

@@ -45,7 +45,7 @@ Creating a DataArray
 The :py:class:`~xarray.DataArray` constructor takes:
 
 - ``data``: a multi-dimensional array of values (e.g., a numpy ndarray,
-  :py:class:`~pandas.Series`, :py:class:`~pandas.DataFrame` or :py:class:`~pandas.Panel`)
+  :py:class:`~pandas.Series`, :py:class:`~pandas.DataFrame` or ``pandas.Panel``)
 - ``coords``: a list or dictionary of coordinates. If a list, it should be a
   list of tuples where the first element is the dimension name and the second
   element is the corresponding coordinate array_like object.
@@ -125,7 +125,7 @@ As a dictionary with coords across multiple dimensions:
 If you create a ``DataArray`` by supplying a pandas
 :py:class:`~pandas.Series`, :py:class:`~pandas.DataFrame` or
-:py:class:`~pandas.Panel`, any non-specified arguments in the
+``pandas.Panel``, any non-specified arguments in the
 ``DataArray`` constructor will be filled in from the pandas object:
 
 .. ipython:: python
@@ -301,7 +301,7 @@ names, and its data is aligned to any existing dimensions.
 You can also create an dataset from:
 
-- A :py:class:`pandas.DataFrame` or :py:class:`pandas.Panel` along its columns and items
+- A :py:class:`pandas.DataFrame` or ``pandas.Panel`` along its columns and items
   respectively, by passing it into the :py:class:`~xarray.Dataset` directly
 - A :py:class:`pandas.DataFrame` with :py:meth:`Dataset.from_dataframe <xarray.Dataset.from_dataframe>`,
   which will additionally handle MultiIndexes See :ref:`pandas`

doc/pandas.rst (+1 -1)

@@ -112,7 +112,7 @@ automatically stacking them into a ``MultiIndex``.
 :py:meth:`DataArray.to_pandas() <xarray.DataArray.to_pandas>` is a shortcut that
 lets you convert a DataArray directly into a pandas object with the same
 dimensionality (i.e., a 1D array is converted to a :py:class:`~pandas.Series`,
-2D to :py:class:`~pandas.DataFrame` and 3D to :py:class:`~pandas.Panel`):
+2D to :py:class:`~pandas.DataFrame` and 3D to ``pandas.Panel``):
 
 .. ipython:: python
