
Commit dc559ea

keewis authored and dcherian committed
Silence sphinx warnings (#3516)
* silence sphinx warnings
* silence more sphinx warnings
* fix some references
* fix the docstrings of Dataset reduce methods
* mark the orphaned files as such
* silence some nit-picky warnings
* convert all references to xray to double backtick quoted text
* silence more warnings in whats-new.rst
* require a whatsnew format of Name <https://github.com/user>
* rename the second cf conventions link
* silence more sphinx warnings
* get interpolate_na docstrings in sync with master
* fix sphinx warnings for interpolate_na docstrings
* update references to old documentation sections
* cut the link to h5netcdf.File
* use the correct reference types for numpy
* update the reference to atop (dask renamed it to blockwise)
* rewrite numpy docstrings
* guard against non-str documentation
* pass name to skip_signature
* remove links to pandas.Panel
* convince sphinx to create pages astype and groupby().quantile
* more warnings
1 parent 45fd0e6 commit dc559ea

20 files changed, +229 −159 lines

doc/README.rst (+2)

@@ -1,3 +1,5 @@
+:orphan:
+
 xarray
 ------

doc/api-hidden.rst (+5)

@@ -2,6 +2,8 @@
 .. This extra page is a work around for sphinx not having any support for
 .. hiding an autosummary table.

+:orphan:
+
 .. currentmodule:: xarray

 .. autosummary::
@@ -30,9 +32,11 @@
    core.groupby.DatasetGroupBy.first
    core.groupby.DatasetGroupBy.last
    core.groupby.DatasetGroupBy.fillna
+   core.groupby.DatasetGroupBy.quantile
    core.groupby.DatasetGroupBy.where

    Dataset.argsort
+   Dataset.astype
    Dataset.clip
    Dataset.conj
    Dataset.conjugate
@@ -71,6 +75,7 @@
    core.groupby.DataArrayGroupBy.first
    core.groupby.DataArrayGroupBy.last
    core.groupby.DataArrayGroupBy.fillna
+   core.groupby.DataArrayGroupBy.quantile
    core.groupby.DataArrayGroupBy.where

    DataArray.argsort
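
The two autosummary entries added here point at methods that already exist on Dataset and the groupby objects but previously had no generated page. A minimal sketch of what those pages cover, with a made-up dataset purely for illustration:

    import numpy as np
    import pandas as pd
    import xarray as xr

    # hypothetical dataset: one variable along a daily time axis
    ds = xr.Dataset(
        {"t": ("time", np.random.randn(365))},
        coords={"time": pd.date_range("2000-01-01", periods=365)},
    )

    # Dataset.astype casts every data variable to the requested dtype
    ds32 = ds.astype(np.float32)

    # DatasetGroupBy.quantile computes one quantile value per group
    monthly_p90 = ds.groupby("time.month").quantile(0.9)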

doc/combining.rst (+3 −3)

@@ -255,11 +255,11 @@ Combining along multiple dimensions
 ``combine_nested``.

 For combining many objects along multiple dimensions xarray provides
-:py:func:`~xarray.combine_nested`` and :py:func:`~xarray.combine_by_coords`. These
+:py:func:`~xarray.combine_nested` and :py:func:`~xarray.combine_by_coords`. These
 functions use a combination of ``concat`` and ``merge`` across different
 variables to combine many objects into one.

-:py:func:`~xarray.combine_nested`` requires specifying the order in which the
+:py:func:`~xarray.combine_nested` requires specifying the order in which the
 objects should be combined, while :py:func:`~xarray.combine_by_coords` attempts to
 infer this ordering automatically from the coordinates in the data.

@@ -310,4 +310,4 @@ These functions can be used by :py:func:`~xarray.open_mfdataset` to open many
 files as one dataset. The particular function used is specified by setting the
 argument ``'combine'`` to ``'by_coords'`` or ``'nested'``. This is useful for
 situations where your data is split across many files in multiple locations,
-which have some known relationship between one another.
\ No newline at end of file
+which have some known relationship between one another.
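
For context, a brief sketch of the two combining paths the corrected references describe; the file names and the 2x2 grid layout are invented for illustration:

    import xarray as xr

    # combine_nested: the nesting of the list states the order along each dimension
    ds = xr.combine_nested(
        [
            [xr.open_dataset("x0y0.nc"), xr.open_dataset("x0y1.nc")],
            [xr.open_dataset("x1y0.nc"), xr.open_dataset("x1y1.nc")],
        ],
        concat_dim=["x", "y"],
    )

    # combine_by_coords via open_mfdataset: the ordering is inferred from coordinates
    ds = xr.open_mfdataset("part-*.nc", combine="by_coords")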

doc/computation.rst (+3 −3)

@@ -325,8 +325,8 @@ Broadcasting by dimension name
 ``DataArray`` objects are automatically align themselves ("broadcasting" in
 the numpy parlance) by dimension name instead of axis order. With xarray, you
 do not need to transpose arrays or insert dimensions of length 1 to get array
-operations to work, as commonly done in numpy with :py:func:`np.reshape` or
-:py:const:`np.newaxis`.
+operations to work, as commonly done in numpy with :py:func:`numpy.reshape` or
+:py:data:`numpy.newaxis`.

 This is best illustrated by a few examples. Consider two one-dimensional
 arrays with different sizes aligned along different dimensions:
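
As a quick illustration of the broadcasting behaviour this paragraph describes (the values are arbitrary), no reshape or numpy.newaxis is needed when the operands have different dimension names:

    import xarray as xr

    a = xr.DataArray([1, 2, 3], dims="x")
    b = xr.DataArray([10, 20], dims="y")

    # broadcast by dimension name: result has dims ("x", "y") and shape (3, 2)
    c = a * b
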
@@ -566,7 +566,7 @@ to set ``axis=-1``. As an example, here is how we would wrap

 Because ``apply_ufunc`` follows a standard convention for ufuncs, it plays
 nicely with tools for building vectorized functions, like
-:func:`numpy.broadcast_arrays` and :func:`numpy.vectorize`. For high performance
+:py:func:`numpy.broadcast_arrays` and :py:class:`numpy.vectorize`. For high performance
 needs, consider using Numba's :doc:`vectorize and guvectorize <numba:user/vectorize>`.

 In addition to wrapping functions, ``apply_ufunc`` can automatically parallelize
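
A minimal sketch of pairing numpy.vectorize with apply_ufunc, as the corrected references suggest; the clamp function here is made up:

    import numpy as np
    import xarray as xr

    def clamp(x, lo, hi):
        # plain scalar function, written without xarray in mind
        return min(max(x, lo), hi)

    da = xr.DataArray(np.random.randn(4, 3), dims=("x", "y"))

    # numpy.vectorize turns the scalar function into a broadcasting callable;
    # apply_ufunc applies it while preserving dimensions and coordinates
    clamped = xr.apply_ufunc(np.vectorize(clamp), da, -1.0, 1.0)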

doc/dask.rst (+1 −1)

@@ -285,7 +285,7 @@ automate `embarrassingly parallel
 <https://en.wikipedia.org/wiki/Embarrassingly_parallel>`__ "map" type operations
 where a function written for processing NumPy arrays should be repeatedly
 applied to xarray objects containing Dask arrays. It works similarly to
-:py:func:`dask.array.map_blocks` and :py:func:`dask.array.atop`, but without
+:py:func:`dask.array.map_blocks` and :py:func:`dask.array.blockwise`, but without
 requiring an intermediate layer of abstraction.

 For the best performance when using Dask's multi-threaded scheduler, wrap a
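
A hedged sketch of the map-type pattern this section refers to, applying a plain NumPy function chunkwise over a dask-backed DataArray (requires dask to be installed; the chunking and the function are illustrative only):

    import numpy as np
    import xarray as xr

    da = xr.DataArray(np.random.randn(1000, 1000), dims=("x", "y")).chunk({"x": 100})

    def double(block):
        # operates on ordinary NumPy arrays, one chunk at a time
        return block * 2

    # dask="parallelized" maps the function over the chunks lazily
    result = xr.apply_ufunc(double, da, dask="parallelized", output_dtypes=[da.dtype])
    result.compute()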

doc/data-structures.rst (+3 −3)

@@ -45,7 +45,7 @@ Creating a DataArray
 The :py:class:`~xarray.DataArray` constructor takes:

 - ``data``: a multi-dimensional array of values (e.g., a numpy ndarray,
-  :py:class:`~pandas.Series`, :py:class:`~pandas.DataFrame` or :py:class:`~pandas.Panel`)
+  :py:class:`~pandas.Series`, :py:class:`~pandas.DataFrame` or ``pandas.Panel``)
 - ``coords``: a list or dictionary of coordinates. If a list, it should be a
   list of tuples where the first element is the dimension name and the second
   element is the corresponding coordinate array_like object.
@@ -125,7 +125,7 @@ As a dictionary with coords across multiple dimensions:

 If you create a ``DataArray`` by supplying a pandas
 :py:class:`~pandas.Series`, :py:class:`~pandas.DataFrame` or
-:py:class:`~pandas.Panel`, any non-specified arguments in the
+``pandas.Panel``, any non-specified arguments in the
 ``DataArray`` constructor will be filled in from the pandas object:

 .. ipython:: python
@@ -301,7 +301,7 @@ names, and its data is aligned to any existing dimensions.

 You can also create an dataset from:

-- A :py:class:`pandas.DataFrame` or :py:class:`pandas.Panel` along its columns and items
+- A :py:class:`pandas.DataFrame` or ``pandas.Panel`` along its columns and items
   respectively, by passing it into the :py:class:`~xarray.Dataset` directly
 - A :py:class:`pandas.DataFrame` with :py:meth:`Dataset.from_dataframe <xarray.Dataset.from_dataframe>`,
   which will additionally handle MultiIndexes See :ref:`pandas`
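
A short sketch of the constructor behaviour described in these hunks; the DataFrame contents and the index/columns names are assumptions for illustration:

    import pandas as pd
    import xarray as xr

    df = pd.DataFrame(
        {"a": [1.0, 2.0], "b": [3.0, 4.0]},
        index=pd.Index(["r0", "r1"], name="row"),
    )
    df.columns.name = "col"

    # non-specified arguments are filled in from the pandas object:
    # the index and columns become dimension coordinates
    da = xr.DataArray(df)               # dims ("row", "col")
    ds = xr.Dataset.from_dataframe(df)  # one data variable per column, dim "row"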

doc/pandas.rst (+1 −1)

@@ -112,7 +112,7 @@ automatically stacking them into a ``MultiIndex``.
 :py:meth:`DataArray.to_pandas() <xarray.DataArray.to_pandas>` is a shortcut that
 lets you convert a DataArray directly into a pandas object with the same
 dimensionality (i.e., a 1D array is converted to a :py:class:`~pandas.Series`,
-2D to :py:class:`~pandas.DataFrame` and 3D to :py:class:`~pandas.Panel`):
+2D to :py:class:`~pandas.DataFrame` and 3D to ``pandas.Panel``):

 .. ipython:: python

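To round this out, a brief sketch of the dimensionality mapping the corrected line talks about (values invented):

    import numpy as np
    import xarray as xr

    s = xr.DataArray([1, 2, 3], dims="x").to_pandas()          # pandas.Series
    df = xr.DataArray(np.eye(2), dims=("x", "y")).to_pandas()  # pandas.DataFrame
    # a 3D DataArray used to map to pandas.Panel, which has since been removed from pandas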