
Commit b690248

[pre-commit.ci] auto fixes from pre-commit.com hooks
for more information, see https://pre-commit.ci
1 parent ce09268 · commit b690248

File tree

1 file changed: +2 -3 lines changed


doc/user-guide/dask.rst

Lines changed: 2 additions & 3 deletions

@@ -56,7 +56,7 @@ When reading data, Dask divides your dataset into smaller chunks. You can specif
.. tab:: Zarr

   The `Zarr <https://zarr.readthedocs.io/en/stable/>`_ format is ideal for working with large datasets. Each chunk is stored in a separate file, allowing parallel reading and writing with Dask. You can also use Zarr to read/write directly from cloud storage buckets (see the `Dask documentation on connecting to remote data <https://docs.dask.org/en/stable/how-to/connect-to-remote-data.html?utm_source=xarray-docs>`__)
-
+
   When you open a Zarr dataset with :py:func:`~xarray.open_zarr`, it is loaded as a Dask array by default (if Dask is installed)::

      ds = xr.open_zarr("path/to/directory.zarr")

@@ -81,7 +81,7 @@ When reading data, Dask divides your dataset into smaller chunks. You can specif
   Save larger-than-memory netCDF files::

      ds.to_netcdf("my-big-file.nc")
-
+
   Or set ``compute=False`` to return a dask.delayed object that can be computed later::

      delayed_write = ds.to_netcdf("my-big-file.nc", compute=False)

@@ -494,4 +494,3 @@ Here's an example of a simplified workflow putting some of these tips together:
   )

   zonal_mean.load()  # Pull smaller results into memory after reducing the dataset
-
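
For context on the ``compute=False`` pattern touched in the second hunk: ``to_netcdf(..., compute=False)`` returns a dask.delayed object, and the write only happens once that object is computed. A minimal sketch, using the placeholder paths from the snippets above::

   import xarray as xr

   # Open the Zarr store lazily; variables are backed by Dask arrays
   ds = xr.open_zarr("path/to/directory.zarr")

   # Defer the write: this returns a dask.delayed object instead of writing immediately
   delayed_write = ds.to_netcdf("my-big-file.nc", compute=False)

   # Trigger the computation and write to disk when ready
   delayed_write.compute()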
