@@ -425,15 +425,19 @@ def open_dataset(
         is chosen based on available dependencies, with a preference for
         "netcdf4". A custom backend class (a subclass of ``BackendEntrypoint``)
         can also be used.
-    chunks : int, dict, 'auto' or None, optional
-        If chunks is provided, it is used to load the new dataset into dask
-        arrays. ``chunks=-1`` loads the dataset with dask using a single
-        chunk for all arrays. ``chunks={}`` loads the dataset with dask using
-        engine preferred chunks if exposed by the backend, otherwise with
-        a single chunk for all arrays. In order to reproduce the default behavior
-        of ``xr.open_zarr(...)`` use ``xr.open_dataset(..., engine='zarr', chunks={})``.
-        ``chunks='auto'`` will use dask ``auto`` chunking taking into account the
-        engine preferred chunks. See dask chunking for more details.
+    chunks : int, dict, 'auto' or None, default: None
+        If provided, used to load the data into dask arrays.
+
+        - ``chunks="auto"`` will use dask ``auto`` chunking taking into account the
+          engine preferred chunks.
+        - ``chunks=None`` skips using dask, which is generally faster for
+          small arrays.
+        - ``chunks=-1`` loads the data with dask using a single chunk for all arrays.
+        - ``chunks={}`` loads the data with dask using the engine's preferred chunk
+          size, generally identical to the format's chunk size. If not available, a
+          single chunk for all arrays.
+
+        See dask chunking for more details.
     cache : bool, optional
         If True, cache data loaded from the underlying datastore in memory as
         NumPy arrays when accessed to avoid reading from the underlying data-
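The chunking modes documented above can be exercised directly. A minimal sketch, assuming xarray, dask, and a netCDF-capable backend (e.g. netcdf4 or scipy) are installed; the temporary file path is illustrative:

```python
import os
import tempfile

import numpy as np
import xarray as xr

path = os.path.join(tempfile.mkdtemp(), "example.nc")

# Build a small dataset and round-trip it through a temporary netCDF file.
ds = xr.Dataset({"t": (("x", "y"), np.arange(12.0).reshape(3, 4))})
ds.to_netcdf(path)

# chunks=None (the default) skips dask entirely: variables load as NumPy arrays.
eager = xr.open_dataset(path, chunks=None)
print(type(eager["t"].data))  # numpy.ndarray

# chunks=-1 wraps every variable in a dask array with a single chunk.
lazy = xr.open_dataset(path, chunks=-1)
print(lazy["t"].chunks)  # one chunk spanning each dimension: ((3,), (4,))

# chunks={} asks for the backend's preferred chunks; a plain netCDF file
# exposes none here, so this also falls back to one chunk per array.
preferred = xr.open_dataset(path, chunks={})
print(preferred["t"].chunks)
```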
@@ -631,14 +635,19 @@ def open_dataarray(
         Engine to use when reading files. If not provided, the default engine
         is chosen based on available dependencies, with a preference for
         "netcdf4".
-    chunks : int, dict, 'auto' or None, optional
-        If chunks is provided, it is used to load the new dataset into dask
-        arrays. ``chunks=-1`` loads the dataset with dask using a single
-        chunk for all arrays. `chunks={}`` loads the dataset with dask using
-        engine preferred chunks if exposed by the backend, otherwise with
-        a single chunk for all arrays.
-        ``chunks='auto'`` will use dask ``auto`` chunking taking into account the
-        engine preferred chunks. See dask chunking for more details.
+    chunks : int, dict, 'auto' or None, default: None
+        If provided, used to load the data into dask arrays.
+
+        - ``chunks='auto'`` will use dask ``auto`` chunking taking into account the
+          engine preferred chunks.
+        - ``chunks=None`` skips using dask, which is generally faster for
+          small arrays.
+        - ``chunks=-1`` loads the data with dask using a single chunk for all arrays.
+        - ``chunks={}`` loads the data with dask using engine preferred chunks if
+          exposed by the backend, otherwise with a single chunk for all arrays.
+
+        See dask chunking for more details.
+
     cache : bool, optional
         If True, cache data loaded from the underlying datastore in memory as
         NumPy arrays when accessed to avoid reading from the underlying data-
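The same ``chunks`` options apply to ``open_dataarray``, which reads a file containing a single data variable. A minimal sketch under the same assumptions (xarray, dask, and a netCDF backend installed; the file path is illustrative):

```python
import os
import tempfile

import numpy as np
import xarray as xr

path = os.path.join(tempfile.mkdtemp(), "arr.nc")

# open_dataarray expects a file holding exactly one data variable.
xr.DataArray(np.zeros((4, 6)), dims=("x", "y"), name="z").to_netcdf(path)

# chunks='auto' lets dask pick chunk sizes, guided by any engine-preferred
# chunks; for an array this small, auto chunking yields a single chunk.
auto = xr.open_dataarray(path, chunks="auto")
print(auto.chunks)  # ((4,), (6,))

# chunks=None returns an eager, NumPy-backed DataArray instead.
eager = xr.open_dataarray(path, chunks=None)
print(type(eager.data))  # numpy.ndarray
```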