doc/user-guide/dask.rst (2 additions, 3 deletions)
@@ -56,7 +56,7 @@ When reading data, Dask divides your dataset into smaller chunks. You can specif
 .. tab:: Zarr

     The `Zarr <https://zarr.readthedocs.io/en/stable/>`_ format is ideal for working with large datasets. Each chunk is stored in a separate file, allowing parallel reading and writing with Dask. You can also use Zarr to read/write directly from cloud storage buckets (see the `Dask documentation on connecting to remote data <https://docs.dask.org/en/stable/how-to/connect-to-remote-data.html?utm_source=xarray-docs>`__).
-
+
     When you open a Zarr dataset with :py:func:`~xarray.open_zarr`, it is loaded as a Dask array by default (if Dask is installed)::

         ds = xr.open_zarr("path/to/directory.zarr")
@@ -81,7 +81,7 @@ When reading data, Dask divides your dataset into smaller chunks. You can specif
 Save larger-than-memory netCDF files::

     ds.to_netcdf("my-big-file.nc")
-
+
 Or set ``compute=False`` to return a dask.delayed object that can be computed later::
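The deferred-write pattern from the second hunk can be sketched as follows. This assumes ``xarray``, ``dask``, and a netCDF backend are available; the ``engine="scipy"`` choice and the file path are illustrative assumptions, not part of the original doc:

```python
import tempfile

import numpy as np
import xarray as xr

# A chunked (Dask-backed) dataset, standing in for something
# larger than memory.
ds = xr.Dataset(
    {"temperature": (("time", "x"), np.random.rand(40, 5))}
).chunk({"time": 10})

with tempfile.TemporaryDirectory() as tmp:
    path = f"{tmp}/my-big-file.nc"

    # compute=False defers the write: to_netcdf returns a dask.delayed
    # object instead of writing the data immediately.
    delayed_write = ds.to_netcdf(path, compute=False, engine="scipy")

    # The actual (streamed, chunk-by-chunk) write happens only now.
    delayed_write.compute()

    print(xr.open_dataset(path)["temperature"].shape)
```

Deferring the write is useful when you want to batch several outputs together, e.g. by passing a list of delayed objects to ``dask.compute`` so they share one scheduler pass.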