Commit 0f91f05

Enable running sphinx-build on Windows (#6237)

1 parent 555a70e commit 0f91f05

File tree: 8 files changed, +53 -32 lines

.gitignore (+2 -2)

@@ -5,8 +5,9 @@ __pycache__
 .hypothesis/
 
 # temp files from docs build
+doc/*.nc
 doc/auto_gallery
-doc/example.nc
+doc/rasm.zarr
 doc/savefig
 
 # C extensions
@@ -72,4 +73,3 @@ xarray/tests/data/*.grib.*.idx
 Icon*
 
 .ipynb_checkpoints
-doc/rasm.zarr

doc/conf.py (+2 -2)

@@ -28,9 +28,9 @@
 print("python exec:", sys.executable)
 print("sys.path:", sys.path)
 
-if "conda" in sys.executable:
+if "CONDA_DEFAULT_ENV" in os.environ or "conda" in sys.executable:
     print("conda environment:")
-    subprocess.run(["conda", "list"])
+    subprocess.run([os.environ.get("CONDA_EXE", "conda"), "list"])
 else:
     print("pip environment:")
     subprocess.run([sys.executable, "-m", "pip", "list"])
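The conf.py change is small enough to sketch in isolation: detecting a conda environment via `CONDA_DEFAULT_ENV` works on Windows, where the interpreter path often lacks the substring "conda", and falling back to `CONDA_EXE` locates conda even when it is not on `PATH`. The sketch below is mine, not part of the commit — the function name `environment_listing_command` is a hypothetical wrapper; `doc/conf.py` inlines the equivalent `if`/`else` directly:

```python
import os
import sys


def environment_listing_command() -> list:
    """Pick the command that lists the installed packages.

    CONDA_DEFAULT_ENV is set by ``conda activate`` on every platform, so
    checking it is more reliable than inspecting sys.executable, which on
    Windows may not contain "conda". CONDA_EXE, when set, holds the absolute
    path of the conda executable, letting the command run even if plain
    ``conda`` is not on PATH.
    """
    if "CONDA_DEFAULT_ENV" in os.environ or "conda" in sys.executable:
        return [os.environ.get("CONDA_EXE", "conda"), "list"]
    return [sys.executable, "-m", "pip", "list"]
```

In conf.py the chosen command is then passed to `subprocess.run` to log the build environment.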

doc/getting-started-guide/quick-overview.rst (+3 -1)

@@ -215,13 +215,15 @@ You can directly read and write xarray objects to disk using :py:meth:`~xarray.D
 .. ipython:: python
 
     ds.to_netcdf("example.nc")
-    xr.open_dataset("example.nc")
+    reopened = xr.open_dataset("example.nc")
+    reopened
 
 .. ipython:: python
     :suppress:
 
     import os
 
+    reopened.close()
     os.remove("example.nc")
 
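The bind-then-close choreography above is the heart of the commit: Windows, unlike POSIX systems, refuses to delete a file while any handle to it remains open, so each reopened dataset must be named, closed, and only then removed. A minimal stdlib sketch of the same discipline (no xarray required; the path and contents are placeholders):

```python
import os
import tempfile

# Create a scratch file, as the docs build does with "example.nc".
path = os.path.join(tempfile.mkdtemp(), "example.nc")
with open(path, "wb") as f:
    f.write(b"\x00" * 8)

# Reopen it, keeping a named handle -- the analogue of
# ``reopened = xr.open_dataset(...)`` in the diff above.
handle = open(path, "rb")

# Close the handle explicitly before deleting. On Windows, os.remove()
# raises PermissionError while the handle is still open; closing first
# makes the cleanup portable.
handle.close()
os.remove(path)
```

The same reasoning drives the `reopened.close()` lines added throughout the other documentation pages in this commit.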

doc/internals/zarr-encoding-spec.rst (+7)

@@ -63,3 +63,10 @@ re-open it directly with Zarr:
     print(os.listdir("rasm.zarr"))
     print(zgroup.tree())
     dict(zgroup["Tair"].attrs)
+
+.. ipython:: python
+    :suppress:
+
+    import shutil
+
+    shutil.rmtree("rasm.zarr")
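This cleanup uses `shutil.rmtree` rather than `os.remove` because a Zarr directory store is a directory tree, not a single file. A small stdlib illustration (the layout here is a stand-in, not a faithful Zarr store):

```python
import os
import shutil
import tempfile

# Mimic the shape of a Zarr directory store: a directory containing
# metadata files and per-variable subdirectories.
store = os.path.join(tempfile.mkdtemp(), "rasm.zarr")
os.makedirs(os.path.join(store, "Tair"))
with open(os.path.join(store, ".zgroup"), "w") as f:
    f.write("{}")

# os.remove() only deletes regular files and fails on a directory.
try:
    os.remove(store)
except OSError:
    pass

# shutil.rmtree() deletes the whole tree in one call.
shutil.rmtree(store)
```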

doc/user-guide/dask.rst (+13 -13)

@@ -55,6 +55,8 @@ argument to :py:func:`~xarray.open_dataset` or using the
 .. ipython:: python
     :suppress:
 
+    import os
+
     import numpy as np
     import pandas as pd
     import xarray as xr
@@ -129,6 +131,11 @@ will return a ``dask.delayed`` object that can be computed later.
     with ProgressBar():
         results = delayed_obj.compute()
 
+.. ipython:: python
+    :suppress:
+
+    os.remove("manipulated-example-data.nc")  # Was not opened.
+
 .. note::
 
     When using Dask's distributed scheduler to write NETCDF4 files,
@@ -147,13 +154,6 @@ A dataset can also be converted to a Dask DataFrame using :py:meth:`~xarray.Data
 
 Dask DataFrames do not support multi-indexes so the coordinate variables from the dataset are included as columns in the Dask DataFrame.
 
-.. ipython:: python
-    :suppress:
-
-    import os
-
-    os.remove("example-data.nc")
-    os.remove("manipulated-example-data.nc")
 
 Using Dask with xarray
 ----------------------
@@ -210,7 +210,7 @@ Dask arrays using the :py:meth:`~xarray.Dataset.persist` method:
 
 .. ipython:: python
 
-    ds = ds.persist()
+    persisted = ds.persist()
 
 :py:meth:`~xarray.Dataset.persist` is particularly useful when using a
 distributed cluster because the data will be loaded into distributed memory
@@ -232,11 +232,6 @@ chunk size depends both on your data and on the operations you want to perform.
 With xarray, both converting data to a Dask arrays and converting the chunk
 sizes of Dask arrays is done with the :py:meth:`~xarray.Dataset.chunk` method:
 
-.. ipython:: python
-    :suppress:
-
-    ds = ds.chunk({"time": 10})
-
 .. ipython:: python
 
     rechunked = ds.chunk({"latitude": 100, "longitude": 100})
@@ -508,6 +503,11 @@ Notice that the 0-shaped sizes were not printed to screen. Since ``template`` ha
     expected = ds + 10 + 10
     mapped.identical(expected)
 
+.. ipython:: python
+    :suppress:
+
+    ds.close()  # Closes "example-data.nc".
+    os.remove("example-data.nc")
 
 .. tip::

doc/user-guide/io.rst (+20 -13)

@@ -11,6 +11,8 @@ format (recommended).
 .. ipython:: python
     :suppress:
 
+    import os
+
     import numpy as np
     import pandas as pd
     import xarray as xr
@@ -84,6 +86,13 @@ We can load netCDF files to create a new Dataset using
     ds_disk = xr.open_dataset("saved_on_disk.nc")
     ds_disk
 
+.. ipython:: python
+    :suppress:
+
+    # Close "saved_on_disk.nc", but retain the file until after closing or deleting other
+    # datasets that will refer to it.
+    ds_disk.close()
+
 Similarly, a DataArray can be saved to disk using the
 :py:meth:`DataArray.to_netcdf` method, and loaded
 from disk using the :py:func:`open_dataarray` function. As netCDF files
@@ -204,11 +213,6 @@ You can view this encoding information (among others) in the
 Note that all operations that manipulate variables other than indexing
 will remove encoding information.
 
-.. ipython:: python
-    :suppress:
-
-    ds_disk.close()
-
 
 .. _combining multiple files:
 
@@ -484,13 +488,13 @@ and currently raises a warning unless ``invalid_netcdf=True`` is set:
     da.to_netcdf("complex.nc", engine="h5netcdf", invalid_netcdf=True)
 
     # Reading it back
-    xr.open_dataarray("complex.nc", engine="h5netcdf")
+    reopened = xr.open_dataarray("complex.nc", engine="h5netcdf")
+    reopened
 
 .. ipython:: python
     :suppress:
 
-    import os
-
+    reopened.close()
     os.remove("complex.nc")
 
 .. warning::
@@ -989,16 +993,19 @@ To export just the dataset schema without the data itself, use the
 
     ds.to_dict(data=False)
 
-This can be useful for generating indices of dataset contents to expose to
-search indices or other automated data discovery tools.
-
 .. ipython:: python
     :suppress:
 
-    import os
-
+    # We're now done with the dataset named `ds`. Although the `with` statement closed
+    # the dataset, displaying the unpickled pickle of `ds` re-opened "saved_on_disk.nc".
+    # However, `ds` (rather than the unpickled dataset) refers to the open file. Delete
+    # `ds` to close the file.
+    del ds
     os.remove("saved_on_disk.nc")
 
+This can be useful for generating indices of dataset contents to expose to
+search indices or other automated data discovery tools.
+
 .. _io.rasterio:
 
 Rasterio

doc/user-guide/weather-climate.rst (+3 -1)

@@ -218,13 +218,15 @@ For data indexed by a :py:class:`~xarray.CFTimeIndex` xarray currently supports:
 .. ipython:: python
 
     da.to_netcdf("example-no-leap.nc")
-    xr.open_dataset("example-no-leap.nc")
+    reopened = xr.open_dataset("example-no-leap.nc")
+    reopened
 
 .. ipython:: python
     :suppress:
 
     import os
 
+    reopened.close()
     os.remove("example-no-leap.nc")
 
 - And resampling along the time dimension for data indexed by a :py:class:`~xarray.CFTimeIndex`:

doc/whats-new.rst (+3)

@@ -57,6 +57,9 @@ Bug fixes
 Documentation
 ~~~~~~~~~~~~~
 
+- Delete files of datasets saved to disk while building the documentation and enable
+  building on Windows via `sphinx-build` (:pull:`6237`).
+  By `Stan West <https://github.com/stanwest>`_.
 
 
 Internal Changes
