
Added PNC backend to xarray #1905


Merged (59 commits) on Jun 1, 2018
The diff below shows changes from 43 of the 59 commits.

Commits
eeeb3a5  Added PNC backend to xarray (barronh, Oct 27, 2017)
5caea63  Fast Forwarding Branch to Make PNC-updates to enable auto merge (barronh, Feb 12, 2018)
5ac4b6f  Added whats-new documentation (barronh, Feb 12, 2018)
f73436d  Updating pnc_ to remove DunderArrayMixin dependency (barronh, Feb 13, 2018)
9507303  Adding basic tests for pnc (barronh, Feb 13, 2018)
ef22872  Updating for flake8 compliance (barronh, Feb 14, 2018)
56f087c  flake does not like unused e (barronh, Feb 17, 2018)
3c023a0  Merge branch 'master' of https://github.com/pydata/xarray into pnc-ba… (barronh, Feb 17, 2018)
5a3c62d  Updating pnc to PseudoNetCDF (barronh, Mar 7, 2018)
8eb427d  Remove outer except (barronh, Mar 7, 2018)
ca75c76  Updating pnc to PseudoNetCDF (barronh, Mar 7, 2018)
196c03f  Added open and updated init (barronh, Mar 17, 2018)
751ba1e  Merging to address indexing (barronh, Mar 17, 2018)
282408f  Updated indexing and test fix (barronh, Mar 17, 2018)
b1890b1  Added PseudoNetCDF to doc/io.rst (barronh, Mar 20, 2018)
eda629f  Changing test subtype (barronh, Mar 20, 2018)
816c7da  Changing test subtype (barronh, Mar 20, 2018)
c8b2ca3  pnc test case requires netcdf3only (barronh, Mar 20, 2018)
85ac334  adding backend_kwargs default as dict (barronh, Mar 24, 2018)
c46caeb  Upgrading tests to CFEncodedDataTest (barronh, Mar 24, 2018)
6838885  Not currently supporting autoclose (barronh, Mar 24, 2018)
c3b7c82  Minor updates for flake8 (barronh, Mar 24, 2018)
7906492  Explicit skipping (barronh, Mar 25, 2018)
4df9fba  removing trailing whitespace from pytest skip (barronh, Mar 27, 2018)
e4900ab  Merge branch 'master' of https://github.com/pydata/xarray into pnc-ba… (barronh, Mar 29, 2018)
ec95a3a  Adding pip support (barronh, Apr 3, 2018)
ad7b709  Addressing comments (barronh, Apr 14, 2018)
26dd0f9  Bypassing pickle, mask/scale, and object (barronh, Apr 15, 2018)
d999de1  Added uamiv test (barronh, Apr 15, 2018)
87e8612  Adding support for autoclose (barronh, Apr 15, 2018)
dd94be5  Adding bakcend_kwargs to all backends (barronh, Apr 15, 2018)
2311701  Small tweaks to PNC backend (shoyer, Apr 16, 2018)
9791b8a  Merge branch 'master' into pnc-backend (shoyer, Apr 16, 2018)
1d7ad4a  remove warning and update whats-new (barronh, Apr 18, 2018)
229715a  Merged so that whats-new could be updated (barronh, Apr 18, 2018)
68997e0  Separating isntall and io pnc doc and updating whats new (barronh, Apr 18, 2018)
d007bc6  merging renames (barronh, Apr 18, 2018)
70968ca  fixing line length in test (barronh, Apr 18, 2018)
c2788b2  updating whats-new and merging (barronh, Apr 21, 2018)
1f3287e  Tests now use non-netcdf files (barronh, Apr 28, 2018)
abacc1d  Removing unknown meta-data netcdf support. (barronh, Apr 28, 2018)
a136ea3  Merge branch 'master' of https://github.com/pydata/xarray into pnc-ba… (barronh, Apr 28, 2018)
7d8a8ee  flake8 cleanup (barronh, Apr 28, 2018)
24c8376  Using python 2 and 3 compat testing (barronh, Apr 28, 2018)
214f51c  Disabling mask_and_scale by default (barronh, Apr 28, 2018)
5786291  consistent with 3.0.0 (barronh, May 2, 2018)
066cdd5  Updating readers and line length (barronh, May 2, 2018)
9231e3f  Updating readers and line length (barronh, May 2, 2018)
80d03a7  Updating readers and line length (barronh, May 2, 2018)
d2c01de  Adding open_mfdataset test (barronh, May 13, 2018)
e12288d  merging and updating time test (barronh, May 13, 2018)
a179c25  Merge branch 'master' of https://github.com/pydata/xarray into pnc-ba… (barronh, May 22, 2018)
eaa37fe  Using conda version of PseudoNetCDF (barronh, May 30, 2018)
590e919  Removing xfail for netcdf (barronh, May 30, 2018)
0df1e60  Merge branch 'master' of https://github.com/pydata/xarray into pnc-ba… (barronh, May 30, 2018)
989fa4b  Moving pseudonetcdf to v0.15 (barronh, May 30, 2018)
d71bb60  Updating what's new (barronh, May 30, 2018)
b9b64ca  Fixing open_dataarray CF options (barronh, May 30, 2018)
10c9bfa  Merge branch 'master' into pnc-backend (shoyer, Jun 1, 2018)
1 change: 1 addition & 0 deletions ci/requirements-py36.yml
@@ -25,3 +25,4 @@ dependencies:
- pytest-cov
- pydap
- lxml
- PseudoNetCDF
7 changes: 5 additions & 2 deletions doc/installing.rst
@@ -28,6 +28,9 @@ For netCDF and IO
- `cftime <https://unidata.github.io/cftime>`__: recommended if you
  want to encode/decode datetimes for non-standard calendars or dates before
  year 1678 or after year 2262.
- `PseudoNetCDF <http://github.com/barronh/pseudonetcdf/>`__: recommended
  for accessing CAMx, GEOS-Chem (bpch), NOAA ARL files, ICARTT files
  (ffi1001) and many other formats.

For accelerating xarray
~~~~~~~~~~~~~~~~~~~~~~~
@@ -65,9 +68,9 @@ with its recommended dependencies using the conda command line tool::

.. _conda: http://conda.io/

- We recommend using the community maintained `conda-forge <https://conda-forge.github.io/>`__ channel if you need difficult\-to\-build dependencies such as cartopy or pynio::
+ We recommend using the community maintained `conda-forge <https://conda-forge.github.io/>`__ channel if you need difficult\-to\-build dependencies such as cartopy, pynio or PseudoNetCDF::

-     $ conda install -c conda-forge xarray cartopy pynio
+     $ conda install -c conda-forge xarray cartopy pynio pseudonetcdf

New releases may also appear in conda-forge before being updated in the default
channel.
23 changes: 22 additions & 1 deletion doc/io.rst
@@ -650,7 +650,26 @@ We recommend installing PyNIO via conda::

.. _PyNIO: https://www.pyngl.ucar.edu/Nio.shtml

- .. _combining multiple files:
+ .. _io.PseudoNetCDF:

Formats supported by PseudoNetCDF
---------------------------------

Review comment (Member):
can you move this after PyNIO and before pandas? I think that would make a little more logical sense.

Reply (Author):
Sure.

xarray can also read CAMx, BPCH, ARL PACKED BIT, and many other file
formats supported by PseudoNetCDF_, if PseudoNetCDF is installed.
PseudoNetCDF can also provide Climate Forecasting Conventions to
CMAQ files. In addition, PseudoNetCDF can automatically register custom
readers that subclass PseudoNetCDF.PseudoNetCDFFile. PseudoNetCDF can
identify readers heuristically, or the format can be specified via a key in
``backend_kwargs``.

To use PseudoNetCDF to read such files, supply
``engine='pseudonetcdf'`` to :py:func:`~xarray.open_dataset`.

Add ``backend_kwargs={'format': '<format name>'}`` where ``<format name>``
options are listed on the PseudoNetCDF page.

.. _PseudoNetCDF: http://github.com/barronh/PseudoNetCDF
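
For example, reading a CAMx average file might look like this (a sketch;
the file name is illustrative, and ``uamiv`` is one of the PseudoNetCDF
format names)::

    import xarray as xr

    ds = xr.open_dataset('camx_average_file', engine='pseudonetcdf',
                         backend_kwargs={'format': 'uamiv'})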


Formats supported by Pandas
@@ -662,6 +681,8 @@ exporting your objects to pandas and using its broad range of `IO tools`_.
.. _IO tools: http://pandas.pydata.org/pandas-docs/stable/io.html


+ .. _combining multiple files:


Combining multiple files
------------------------
25 changes: 10 additions & 15 deletions doc/whats-new.rst
@@ -31,9 +31,15 @@ What's New
v0.10.4 (unreleased)
--------------------

Documentation
~~~~~~~~~~~~~

Enhancements
~~~~~~~~~~~~

- added a PseudoNetCDF backend for many atmospheric data formats, including
  GEOS-Chem, CAMx, NOAA ARL packed bit, and many others.
  By `Barron Henderson <https://github.com/barronh>`_.
- Support writing lists of strings as netCDF attributes (:issue:`2044`).
Review comment (Member):
please fix -- you are inadvertently editing release notes for 0.10.3.

Reply (Author):
Sorry. Fixed it.

Reply (Author):
Fixed.
  By `Dan Nowacki <https://github.com/dnowacki-usgs>`_.

@@ -45,10 +51,11 @@ Bug fixes

.. _whats-new.0.10.3:

- v0.10.3 (April 13, 2018)
- ------------------------
+ v0.10.3 (unreleased)
+ --------------------

- The minor release includes a number of bug-fixes and backwards compatible enhancements.
+ Documentation
+ ~~~~~~~~~~~~~

Enhancements
~~~~~~~~~~~~
@@ -75,21 +82,9 @@ Enhancements
Bug fixes
~~~~~~~~~

- Fixed ``decode_cf`` function to operate lazily on dask arrays
  (:issue:`1372`). By `Ryan Abernathey <https://github.com/rabernat>`_.
- Fixed labeled indexing with slice bounds given by xarray objects with
  datetime64 or timedelta64 dtypes (:issue:`1240`).
  By `Stephan Hoyer <https://github.com/shoyer>`_.
- Attempting to convert an xarray.Dataset into a numpy array now raises an
  informative error message.
  By `Stephan Hoyer <https://github.com/shoyer>`_.
- Fixed a bug in decode_cf_datetime where ``int32`` arrays weren't parsed
  correctly (:issue:`2002`).
  By `Fabien Maussion <https://github.com/fmaussion>`_.
- When calling `xr.auto_combine()` or `xr.open_mfdataset()` with a `concat_dim`,
  the resulting dataset will have that one-element dimension (it was
  silently dropped, previously) (:issue:`1988`).
  By `Ben Root <https://github.com/WeatherGod>`_.

.. _whats-new.0.10.2:

2 changes: 2 additions & 0 deletions xarray/backends/__init__.py
@@ -10,6 +10,7 @@
from .pynio_ import NioDataStore
from .scipy_ import ScipyDataStore
from .h5netcdf_ import H5NetCDFStore
from .pseudonetcdf_ import PseudoNetCDFDataStore
from .zarr import ZarrStore

__all__ = [
@@ -21,4 +22,5 @@
    'ScipyDataStore',
    'H5NetCDFStore',
    'ZarrStore',
    'PseudoNetCDFDataStore',
]
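
A quick note on the re-export (a sketch, not part of the diff): the store
class can be imported even before the optional dependency is installed,
because the PNC import happens lazily inside PseudoNetCDFDataStore.open():

    # PseudoNetCDF itself is only imported inside open(), so this import
    # succeeds without the optional dependency
    from xarray.backends import PseudoNetCDFDataStore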
41 changes: 32 additions & 9 deletions xarray/backends/api.py
@@ -147,7 +147,8 @@ def _get_lock(engine, scheduler, format, path_or_file):
def open_dataset(filename_or_obj, group=None, decode_cf=True,
                 mask_and_scale=True, decode_times=True, autoclose=False,
                 concat_characters=True, decode_coords=True, engine=None,
-                chunks=None, lock=None, cache=None, drop_variables=None):
+                chunks=None, lock=None, cache=None, drop_variables=None,
+                backend_kwargs=None):
"""Load and decode a dataset from a file or file-like object.

Parameters
@@ -187,7 +188,7 @@ def open_dataset(filename_or_obj, group=None, decode_cf=True,
    decode_coords : bool, optional
        If True, decode the 'coordinates' attribute to identify coordinates in
        the resulting dataset.
-   engine : {'netcdf4', 'scipy', 'pydap', 'h5netcdf', 'pynio'}, optional
+   engine : {'netcdf4', 'scipy', 'pydap', 'h5netcdf', 'pynio', 'pseudonetcdf'}, optional
        Engine to use when reading files. If not provided, the default engine
        is chosen based on available dependencies, with a preference for
        'netcdf4'.
@@ -212,6 +213,10 @@ def open_dataset(filename_or_obj, group=None, decode_cf=True,
        A variable or list of variables to exclude from being parsed from the
        dataset. This may be useful to drop variables with problems or
        inconsistent values.
+   backend_kwargs: dictionary, optional
+       A dictionary of keyword arguments to pass on to the backend. This
+       may be useful when backend options would improve performance or
+       allow user control of dataset processing.

    Returns
    -------
@@ -231,6 +236,9 @@ def open_dataset(filename_or_obj, group=None, decode_cf=True,
    if cache is None:
        cache = chunks is None

+   if backend_kwargs is None:
+       backend_kwargs = {}

    def maybe_decode_store(store, lock=False):
        ds = conventions.decode_cf(
            store, mask_and_scale=mask_and_scale, decode_times=decode_times,
@@ -296,18 +304,26 @@ def maybe_decode_store(store, lock=False):
        if engine == 'netcdf4':
            store = backends.NetCDF4DataStore.open(filename_or_obj,
                                                   group=group,
-                                                  autoclose=autoclose)
+                                                  autoclose=autoclose,
+                                                  **backend_kwargs)
        elif engine == 'scipy':
            store = backends.ScipyDataStore(filename_or_obj,
-                                           autoclose=autoclose)
+                                           autoclose=autoclose,
+                                           **backend_kwargs)
        elif engine == 'pydap':
-           store = backends.PydapDataStore.open(filename_or_obj)
+           store = backends.PydapDataStore.open(filename_or_obj,
+                                                **backend_kwargs)
        elif engine == 'h5netcdf':
            store = backends.H5NetCDFStore(filename_or_obj, group=group,
-                                          autoclose=autoclose)
+                                          autoclose=autoclose,
+                                          **backend_kwargs)
        elif engine == 'pynio':
            store = backends.NioDataStore(filename_or_obj,
-                                         autoclose=autoclose)
+                                         autoclose=autoclose,
+                                         **backend_kwargs)
+       elif engine == 'pseudonetcdf':
+           store = backends.PseudoNetCDFDataStore.open(
+               filename_or_obj, autoclose=autoclose, **backend_kwargs)
        else:
            raise ValueError('unrecognized engine for open_dataset: %r'
                             % engine)
@@ -329,7 +345,8 @@ def maybe_decode_store(store, lock=False):
def open_dataarray(filename_or_obj, group=None, decode_cf=True,
                   mask_and_scale=True, decode_times=True, autoclose=False,
                   concat_characters=True, decode_coords=True, engine=None,
-                  chunks=None, lock=None, cache=None, drop_variables=None):
+                  chunks=None, lock=None, cache=None, drop_variables=None,
+                  backend_kwargs=None):
"""Open an DataArray from a netCDF file containing a single data variable.

This is designed to read netCDF files with only one data variable. If
@@ -396,6 +413,10 @@ def open_dataarray(filename_or_obj, group=None, decode_cf=True,
        A variable or list of variables to exclude from being parsed from the
        dataset. This may be useful to drop variables with problems or
        inconsistent values.
+   backend_kwargs: dictionary, optional
+       A dictionary of keyword arguments to pass on to the backend. This
+       may be useful when backend options would improve performance or
+       allow user control of dataset processing.

    Notes
    -----
@@ -410,13 +431,15 @@ def open_dataarray(filename_or_obj, group=None, decode_cf=True,
    --------
    open_dataset
    """

    dataset = open_dataset(filename_or_obj, group=group, decode_cf=decode_cf,
                           mask_and_scale=mask_and_scale,
                           decode_times=decode_times, autoclose=autoclose,
                           concat_characters=concat_characters,
                           decode_coords=decode_coords, engine=engine,
                           chunks=chunks, lock=lock, cache=cache,
-                          drop_variables=drop_variables)
+                          drop_variables=drop_variables,
+                          backend_kwargs=backend_kwargs)

    if len(dataset.data_vars) != 1:
        raise ValueError('Given file dataset contains more than one data '
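
To make the new backend_kwargs plumbing concrete, the two snippets below are
roughly equivalent under the dispatch above (a sketch; the ICARTT file name
and the 'ffi1001' format name are illustrative assumptions):

    import xarray as xr
    from xarray.backends import PseudoNetCDFDataStore
    from xarray.conventions import decode_cf

    # high level: backend_kwargs is unpacked into the store's open()
    ds1 = xr.open_dataset('flight.ict', engine='pseudonetcdf',
                          backend_kwargs={'format': 'ffi1001'})

    # roughly what open_dataset does internally for this engine
    store = PseudoNetCDFDataStore.open('flight.ict', format='ffi1001')
    ds2 = decode_cf(store)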
120 changes: 120 additions & 0 deletions xarray/backends/pseudonetcdf_.py
@@ -0,0 +1,120 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import functools

import numpy as np

from .. import Variable
from ..core.pycompat import OrderedDict
from ..core.utils import (FrozenOrderedDict, Frozen)
from ..core import indexing

from .common import AbstractDataStore, DataStorePickleMixin, BackendArray


class PncArrayWrapper(BackendArray):

    def __init__(self, variable_name, datastore):
        self.datastore = datastore
        self.variable_name = variable_name
        array = self.get_array()
        self.shape = array.shape
        self.dtype = np.dtype(array.dtype)

    def get_array(self):
        self.datastore.assert_open()
        return self.datastore.ds.variables[self.variable_name]

    def __getitem__(self, key):
        key, np_inds = indexing.decompose_indexer(
            key, self.shape, indexing.IndexingSupport.OUTER_1VECTOR)

        with self.datastore.ensure_open(autoclose=True):
            array = self.get_array()[key.tuple]  # index backend array

        if len(np_inds.tuple) > 0:
            # index the loaded np.ndarray
            array = indexing.NumpyIndexingAdapter(array)[np_inds]
        return array


_genericncf = ('Dataset', 'netcdf', 'ncf', 'nc')


class _notnetcdf:
    def __eq__(self, lhs):
        return lhs not in _genericncf


class PseudoNetCDFDataStore(AbstractDataStore, DataStorePickleMixin):
    """Store for accessing datasets via PseudoNetCDF
    """
    @classmethod
    def open(cls, filename, format=None, writer=None,
             autoclose=False, **format_kwds):
        from PseudoNetCDF._getreader import getreader, getreaderdict
Review comment (Member):
Importing from private APIs (with the underscore prefix) makes me nervous. In general we try to avoid doing this because private APIs can change at any time, though of course that isn't true here because you're the PNC maintainer. I'm also nervous about including constants like _genericncf = ('Dataset', 'netcdf', 'ncf', 'nc') in xarray, because that feels like logic that should live in libraries like PNC instead.

So maybe we should stick with the standard open method that you were using earlier. In particular, if we change the default to decode_cf=False for PNC in open_dataset(), then I don't think it's essential to check the format.

Reply (Author):
I think it will be confusing for users to have an indirect use of netCDF4 via PseudoNetCDF that responds differently than the typical backend. This code is designed to address that xarray-specific issue. Typical behavior in PNC is to mask_and_scale via netCDF4's default functionality, so with decode_cf a netCDF file would still be decoded. This is confusing to me, let alone a user. For this reason, I think it is right to exclude formats that already have specifically optimized backends.

I agree completely about private API. The standard pncopen, however, does not currently support the ability to enable a subset of readers. I will add this as a feature request to PseudoNetCDF, and make a subsequent update there. As soon as I update pncopen, I will make the update here.

The _genericncf is not logic that should live in PNC. It is logic designed specifically to address an xarray issue. In this case, xarray has a feature that should pre-empt PNC's typical capabilities.

The alternative is to update PNC, update the conda installation, and update this code again.

Review comment (Member):
Are you thinking of someone explicitly calling xarray.decode_cf() on a dataset loaded from PNC? I agree that that would give unexpected results, but I don't see that being done very often.

Another way to make all of this safe (and which wouldn't require disabling mask_and_scale in open_dataset()) would be to explicitly drop _FillValue, missing_value, add_offset and scale_factor attributes from variables loaded with PNC, moving them to Variable.encoding instead.
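
For concreteness, a minimal sketch of that suggestion (a hypothetical helper,
not code from this PR):

    def attrs_to_encoding(var):
        # move CF packing attributes out of attrs so decode_cf() will not
        # re-apply mask/scale on top of what the reader already did
        for key in ('_FillValue', 'missing_value', 'scale_factor', 'add_offset'):
            if key in var.attrs:
                var.encoding[key] = var.attrs.pop(key)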

Reply (Author):
Imagine that you have a netcdf file. If you open it with PNC, it uses netCDF4 as the library to open it. In xarray, however, you might get different processing. If you write it out, it might have different properties. This seems undesirable at best. That is why I prefer removing the netCDF functionality.

In PNC, the goal of using netCDF4 is to make sure all PNC functionality is available for any type of file. In PNC, some plotting and processing tools are available (similar to xarray). I think plotting and processing in xarray is often better than in PNC. The goal of adding PNC to xarray is to bring to xarray the formats that PNC can read. Since xarray already has mechanisms for supporting netCDF, I think bringing this functionality over is unnecessary.

Review comment (Member):
My general feeling is that we should make it easy to do the right thing, but not put up artificial barriers to stop expert users. I think this is broadly consistent with most Python projects, including the Python language itself. Even if it's strange, somebody might have reasons for trying PNC/netCDF4 together, and as long as it will not give wrong results there is no reason to disable it.

That said, my real objections here are the private imports from PNC, and the four different aliases we use to check for netCDF files. If you want to put a public API version of this in PNC before merging this PR, that would be OK with me.

Reply (Author):
Okay. I removed the type checking and private imports. I updated the testing.

I get one failure that I cannot identify as related to PNC: test_open_mfdataset_manyfiles fails with:

    E           OSError: [Errno 24] Too many open files: '/var/folders/g2/hwlpd21j4vl8hvldb4tyrz400000gn/T/tmp0bteny75'

I'm going to run it through the build to see if it is just my setup.

Reply (Author):
It didn't fail on the automated test, so all passing [x] and no private imports [x].

        readerdict = getreaderdict()
Review comment (Contributor):
F841 local variable 'readerdict' is assigned to but never used

        reader = getreader(filename, format=format, **format_kwds)
        _genreaders = tuple([readerdict[rn] for rn in _genericncf])
        if isinstance(reader, _genreaders):
            raise ValueError(('In xarray, PseudoNetCDF should not be used ' +
Review comment (Member):
I think we can avoid this check/error message if we default to decode_cf=False for PNC in open_dataset().

Reply (Author):
see previous comment.

                              'to read netcdf files with unknown metadata. ' +
                              'Instead, use netcdf4. If this is a known ' +
                              'format, specify it using the format keyword ' +
                              '(or backend_kwargs={\'format\': <name>} from ' +
                              'open_dataset).'))

        opener = functools.partial(reader, filename, **format_kwds)
        ds = opener()
        mode = format_kwds.get('mode', 'r')
        return cls(ds, mode=mode, writer=writer, opener=opener,
                   autoclose=autoclose)

    def __init__(self, pnc_dataset, mode='r', writer=None, opener=None,
                 autoclose=False):

        if autoclose and opener is None:
            raise ValueError('autoclose requires an opener')

        self._ds = pnc_dataset
        self._autoclose = autoclose
        self._isopen = True
        self._opener = opener
        self._mode = mode
        super(PseudoNetCDFDataStore, self).__init__()

    def open_store_variable(self, name, var):
        with self.ensure_open(autoclose=False):
            data = indexing.LazilyOuterIndexedArray(
                PncArrayWrapper(name, self)
            )
            attrs = OrderedDict((k, getattr(var, k)) for k in var.ncattrs())
        return Variable(var.dimensions, data, attrs)

    def get_variables(self):
        with self.ensure_open(autoclose=False):
            return FrozenOrderedDict((k, self.open_store_variable(k, v))
                                     for k, v in self.ds.variables.items())

    def get_attrs(self):
        with self.ensure_open(autoclose=True):
            return Frozen(dict([(k, getattr(self.ds, k))
                                for k in self.ds.ncattrs()]))

    def get_dimensions(self):
        with self.ensure_open(autoclose=True):
            return Frozen(self.ds.dimensions)

    def get_encoding(self):
        encoding = {}
        encoding['unlimited_dims'] = set(
            [k for k in self.ds.dimensions
             if self.ds.dimensions[k].isunlimited()])
        return encoding

    def close(self):
        if self._isopen:
            self.ds.close()
            self._isopen = False
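
As an illustration of the lazy indexing above (a sketch; the file, variable,
and dimension names are assumptions for a CAMx uamiv file):

    import xarray as xr

    ds = xr.open_dataset('camx_output', engine='pseudonetcdf',
                         backend_kwargs={'format': 'uamiv'})
    # Slicing is routed through PncArrayWrapper.__getitem__, so only the
    # requested surface layer is read from the underlying PNC variable.
    surface_o3 = ds['O3'].isel(LAY=0).values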
1 change: 1 addition & 0 deletions xarray/tests/__init__.py
@@ -68,6 +68,7 @@ def _importorskip(modname, minversion=None):
has_netCDF4, requires_netCDF4 = _importorskip('netCDF4')
has_h5netcdf, requires_h5netcdf = _importorskip('h5netcdf')
has_pynio, requires_pynio = _importorskip('Nio')
has_pseudonetcdf, requires_pseudonetcdf = _importorskip('PseudoNetCDF')
has_cftime, requires_cftime = _importorskip('cftime')
has_dask, requires_dask = _importorskip('dask')
has_bottleneck, requires_bottleneck = _importorskip('bottleneck')
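
The new flag is used like the other markers above; a sketch of a test that
would depend on it (hypothetical test name and body):

    @requires_pseudonetcdf
    def test_uamiv_format_read():
        # automatically skipped when PseudoNetCDF is not importable
        ...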