
Commit 8167921

Squashed commit of the following:
commit b0c336f Author: Maximilian Roos <[email protected]> Date: Mon Oct 21 04:52:36 2019 -0400
Whatsnew for #3419 (#3422) * pyupgrade --py36-plus * Update xarray/core/nputils.py Co-Authored-By: crusaderky <[email protected]> * Update xarray/core/parallel.py Co-Authored-By: crusaderky <[email protected]> * Update xarray/tests/test_cftime_offsets.py Co-Authored-By: crusaderky <[email protected]> * Update xarray/tests/test_cftime_offsets.py Co-Authored-By: crusaderky <[email protected]> * whatsnew

commit 2984415 Author: Rhys Doyle <[email protected]> Date: Mon Oct 21 01:17:47 2019 +0100
Revert changes made in #3358 (#3411) * Revert #3358 * Revision * Code review * Merge from master * Obsolescence note

commit 3c462b9 Author: Maximilian Roos <[email protected]> Date: Sun Oct 20 20:16:58 2019 -0400
Python3.6 idioms (#3419) * pyupgrade --py36-plus * Update xarray/core/nputils.py Co-Authored-By: crusaderky <[email protected]> * Update xarray/core/parallel.py Co-Authored-By: crusaderky <[email protected]> * Update xarray/tests/test_cftime_offsets.py Co-Authored-By: crusaderky <[email protected]> * Update xarray/tests/test_cftime_offsets.py Co-Authored-By: crusaderky <[email protected]>

commit 9886e3c Author: crusaderky <[email protected]> Date: Sun Oct 20 23:42:36 2019 +0100
Temporarily mark pseudonetcdf-3.1 as incompatible (#3420)

commit 0f7ab0e Author: Dan Nowacki <[email protected]> Date: Thu Oct 17 14:13:44 2019 -0700
Fix and add test for groupby_bins() isnan TypeError. (#3405) * Fix and add test for groupby_bins() isnan TypeError. * Better testing * black

commit 6cd50cc Author: pmallas <[email protected]> Date: Thu Oct 17 10:15:47 2019 -0400
Update where docstring to make return value type more clear (#3408) * Update where docstring to make return value type more clear. The where docstring was a little confusing to me. I misunderstood "Same type as caller" to mean the values in the xarray, not the xarray itself. I think this small change will clean it up for most users. Thanks. * Update xarray/core/common.py Co-Authored-By: Maximilian Roos <[email protected]>

commit 55b1ac0 Author: keewis <[email protected]> Date: Thu Oct 17 05:13:39 2019 +0200
tests for arrays with units (#3238) * create the empty test file * add tests for data array aggregation functions * include pint in the ci * ignore missing type annotations for pint * really skip the tests if pint is not available * remove the reason from the importorskip call * test that the dataarray constructor does not strip the unit * convert every unit stripped warning to an error * work around pint not implementing np.allclose yet * remove the now unnecessary filterwarnings decorator * xfail all tests that depend on pint having a __array_function__ * treat nans as equal * implement tests for simple arithmetic operations * use param's id argument to assign readable names * add tests for sel() and isel() * add more readable names for the unary arithmetics * xfail every test that is not yet xfailing (these don't pass because the constructor raises a unit stripped warning - fixed in pint#764) * only xfail if pint is not the current development version (this test is not really reliable, but sufficient for now) * always use lists instead of tuples for indexing * add tests for loc and squeeze * black * add names and xfail marks to the parameters * add tests for interp and interp_like * implement tests for reindex * remove the xfail marks where it is not clear yet that pint is to blame * add tests for reindex_like * don't pass the new DataArray to a kwarg * xfail if not on pint dev * refactor the tests * add tests for univariate and bivariate ufuncs * black * xfail aggregation only if pint does not implement __array_function__ yet * remove the global filterwarnings mark (apparently, this caused the tests to change behavior, resulting in different errors, or causing tests to pass that should actually fail) * add a test case for the repr * create a pytest mark that explicitly requires pint's __array_function__ * also check the string representation in addition to the repr * add helpers for creating method tests * check that simple aggregation methods work * use format() instead of format strings * make sure the repr of method calls is different from functions * merge the two aggregation tests * explicitly check whether pint supports __array_function__ (relying on versions is somewhat fragile) * provide a fallback for the new base quantity * check that no warning is raised for both with and without coords * also check that the repr works both with and without coords * wrap all aggregation function calls * xfail every call that fails because of something outside xarray * xfail tests related to dimension coordinates and indexes * use the dimensions from the original array * allow passing arguments to the method on call * add tests for comparisons * add tests for detecting, filling and dropping missing values * mark the missing value tests as requiring pint to support duck arrays * add tests for isin, where and interpolate_na * reformat unit ids and add a test parameter for compatible units * remove an unnecessary xfail * add tests for the top-level replication functions (*_like) * check for whatever pint does with *_like functions * add tests for combine_first * xfail the bivariate ufunc tests * xfail the call to np.median * move the top-level function tests out of the DataArray namespace class * add cumsum and cumprod to the list of aggregation functions * add tests for the numpy methods * check for equal units directly after checking the magnitude * add tests for content manipulation methods * add tests for comparing DataArrays (equals, identical) * add a test for broadcast_equals * refactor the comparison operation tests * rewrite the strip, attach and assert_equal functions and add extract * preserve multiindex in strip and attach * attach the unit from element "data" as fallback * fix some small typos * compare QuantityScalar and QuantitySequence based on their values * make the isel test more robust * add tests for reshaping and reordering * unify the structure of the tests * mark the remaining tests as requiring a recent pint version, too * explicitly handle quantities as parameters * change the repr of the function / method wrappers * check whether __init__ and repr / str handle units in data and coords * generalize array_attach_units * move the redefinition of DimensionalityError * identify quantities using isinstance * typo * skip tests with a pint version without __array_function__ * compare DataArrays where possible * mark only the compatible unit as xfailing * preserve the name of data arrays * also attach units to x_mm * Test in more CI environments; documentation * What's New * remove a stale function * use Quantity directly for instance tests * explicitly set roll_coords to silence a deprecation warning * skip the whole module if pint does not implement __array_function__ (the advantage is that now forgetting to decorate a test case is not possible) * allow attaching units using the mapping from extract_units * add tests for computation methods (resampling fails until I figure out how to use it with non-datetime coords) * add tests for grouped operations * add a test for rolling_exp * add a todo note for the module level skip on __array_function__ * add a test for dot * use attach_units instead of manually attaching * modify the resample test to actually work * add a test for to_unstacked_dataset * update whats-new.rst and installing.rst * reformat the whats-new.rst entry * What's New

commit 1f81338 Author: keewis <[email protected]> Date: Wed Oct 16 20:54:27 2019 +0200
Fixes to the resample docs (#3400) * add a missing newline to make sphinx detect the code block * update the link to the pandas documentation * explicitly state that this only works with datetime dimensions * also put the datetime dim requirement into the function description * add Series.resample and DataFrame.resample as reference * add the changes to whats-new.rst * move references to the bottom of the docstring

commit 3f9069b Author: Joseph Hamman <[email protected]> Date: Mon Oct 14 14:22:08 2019 -0700
Revert to dev version

commit 62943e2 Author: Joseph Hamman <[email protected]> Date: Mon Oct 14 12:51:46 2019 -0700
Release v0.14.0

commit 30472ec Author: Joseph Hamman <[email protected]> Date: Mon Oct 14 12:24:05 2019 -0700
updates for 0.14 release [black only]

commit 4519843 Author: Joseph Hamman <[email protected]> Date: Mon Oct 14 12:18:54 2019 -0700
updates for 0.14 release

commit 4f5ca73 Author: Deepak Cherian <[email protected]> Date: Mon Oct 14 18:06:53 2019 +0000
Make concat more forgiving with variables that are being merged. (#3364) * Make concat more forgiving with variables that are being merged. * rename test. * simplify test. * make diff smaller.

commit ae1d8c7 Author: Crypto Jerônimo <[email protected]> Date: Sun Oct 13 15:38:36 2019 +0100
Fix documentation typos (#3396)

commit 863e490 Author: Joe Hamman <[email protected]> Date: Sat Oct 12 17:33:33 2019 -0400
OrderedDict --> dict, some python3.5 cleanup too (#3389) * OrderedDict --> dict, some python3.5 cleanup too * respond to part of @shoyer's review * fix set attr syntax on netcdf4 vars * fix typing errors * update whats new and todo comments * Typing annotations * Typing annotations * Fix regression * More substantial changes * More polish * Typing annotations * Rerun notebooks

commit 6851e3e Author: crusaderky <[email protected]> Date: Sat Oct 12 21:05:32 2019 +0100
Annotate LRUCache (#3395)

commit 4c05d38 Author: Stephan Hoyer <[email protected]> Date: Fri Oct 11 08:47:57 2019 -0700
BUG: overrides to a dimension coordinate do not get aligned (#3393) Fixes GH3377

commit 3f29551 Author: Deepak Cherian <[email protected]> Date: Thu Oct 10 23:44:19 2019 +0000
map_blocks (#3276) * map_block attempt 2 * Address reviews: errors, args + kwargs support. * Works with datasets! * remove wrong comment. * Support chunks. * infer template. * cleanup * cleanup2 * api.rst * simple shape change error check. * Make test more complicated. * Fix for when user function doesn't set DataArray.name * Now _to_temp_dataset works. * Add whats-new * chunks kwarg makes no sense right now. * review feedback: 1. skip index graph nodes. 2. var → name 3. quicker dataarray creation. 4. Add restrictions to docstring. 5. rename chunk construction task. 6. error when non-xarray object is returned. 7. restore non-coord dims. review * Support nondim coords in make_meta. * Add Dataset.unify_chunks * doc updates. * minor. * update comment. * More complicated test dataset. Tests fail :X * Don't know why compute is needed. * work with DataArray nondim coords. * fastpath unify_chunks * comment. * much improved tests. * Change args, kwargs syntax. * Add dataset, dataarray methods. * api.rst * docstrings. * Fix unify_chunks. * Move assert_chunks_equal to xarray.testing. * minor changes. * Better error handling when inferring returned object * wip * wip * better to_array * Docstrings + nicer error message. * remove unify_chunks in map_blocks + better tests. * typing for unify_chunks * address more review comments. * more unify_chunks tests. * Just use dask.core.utils.meta_from_array * get tests working. assert_equal needs a lot of fixing. * more unify_chunks test. * assert_chunks_equal fixes. * copy over meta_from_array. * minor fixes. * raise chunks error earlier and test for map_blocks raising chunk error * fix. * Type annotations * py35 compat * make sure unify_chunks does not compute. * Make tests functional by calling compute before assert_equal * Update whats-new * Work with attributes. * Support attrs and name changes. * more assert_equal * test changing coord attribute * fix whats new * rework tests to use fixtures (kind of) * more review changes. * cleanup * more review feedback. * fix unify_chunks. * read dask_array_compat :) * Dask 1.2.0 compat. * documentation polish * make_meta reflow * cosmetic * polish * Fix tests * isort * isort * Add func call to docstrings.

commit 291cb80 Author: Deepak Cherian <[email protected]> Date: Thu Oct 10 18:23:20 2019 +0000
Add groupby.dims & Fix groupby reduce for DataArray (#3338) * Fix groupby reduce for DataArray * bugfix. * another bugfix. * bugfix unique_and_monotonic for object indexes (uniqueness is enough) * Add groupby.dims property. * update reduce docstring to point to xarray.ALL_DIMS * test for object index dims. * test reduce dimensions error. * Add whats-new * fix docs build * sq whats-new * one more test. * fix test. * undo monotonic change. * Add dimensions to repr. * Raise error if no bins. * Raise nice error if no groups were formed. * Some more error raising and testing. * Add dataset tests. * update whats-new. * fix tests. * make dims a cached lazy property. * fix whats-new. * whitespace * fix whats-new

commit 3f0049f Author: crusaderky <[email protected]> Date: Wed Oct 9 19:01:29 2019 +0100
Speed up isel and __getitem__ (#3375) * Variable.isel cleanup/speedup * Dataset.isel code cleanup * Speed up isel * What's New * Better error checks * Speedup * type annotations * Update doc/whats-new.rst Co-Authored-By: Maximilian Roos <[email protected]> * What's New * What's New * Always shallow-copy variables

commit 132733a Author: Deepak Cherian <[email protected]> Date: Tue Oct 8 22:13:47 2019 +0000
Fix concat bug when concatenating unlabeled dimensions. (#3362) * Fix concat bug when concatenating unlabeled dimensions. * Add whats-new * Add back older test. * fix test * Revert "fix test" (this reverts commit c33ca34) * better fix

commit 6fb272c Author: crusaderky <[email protected]> Date: Tue Oct 8 22:23:46 2019 +0100
Rolling minimum dependency versions policy (#3358) * Downgrade numpy to 1.14, pandas to 0.20, scipy to 0.19 (24 months old); downgrade dask to 1.1 (6 months old); don't pin patch versions * Apply rolling policy (see #3222) * Automated tool to verify the minimum versions * Drop Python 3.5 * lint * Trivial cosmetic * Cosmetic * (temp) debug CI failure * Parallelize versions check script * Remove hacks for legacy dask * Documentation * Assorted cleanup * Assorted cleanup * Fix regression * Cleanup * type annotations upgraded to Python 3.6 * count_not_none backport * pd.Index.equals on legacy pandas returned False when comparing vs. a ndarray * Documentation * pathlib cleanup * Slide deprecations from 0.14 to 0.15 * More cleanups * More cleanups * Fix min_deps_check * Fix min_deps_check * Set policy of 12 months for pandas and scipy * Cleanup * Cleanup * Sphinx fix * Overhaul readthedocs environment * Fix test crash * Fix test crash * Prune readthedocs environment * Cleanup * Hack around versioneer bug on readthedocs CI * Code review * Prevent random timeouts in the readthedocs CI * What's New polish * Merge from Master * Trivial cosmetic * Reimplement pandas.core.common.count_not_none

commit 3e2a754 Author: Alan D. Snow <[email protected]> Date: Tue Oct 8 09:36:52 2019 -0500
added geocube and rioxarray to related projects (#3383)

commit 4254b4a Author: crusaderky <[email protected]> Date: Fri Oct 4 23:17:56 2019 +0100
Lint (#3373) * raise exception instance, not class * isort * isort * Bump mypy version

commit 283b4fe Author: Deepak Cherian <[email protected]> Date: Fri Oct 4 17:04:36 2019 +0000
Docs/more fixes (#2934) * Move netcdf to beginning of io.rst * Better indexing example. * Start de-emphasizing pandas * misc. * compute, load, persist docstrings + text. * split-apply-combine. * np.newaxis. * misc. * some dask stuff. * Little more dask. * undo index.rst changes. * link to dask docs on chunks * Fix io.rst. * small changes. * rolling update. * joe's review

commit dfdeef7 Author: Stephan Hoyer <[email protected]> Date: Thu Oct 3 21:42:50 2019 -0700
Explicitly keep track of indexes with merging (#3234) * Explicitly keep track of indexes in merge.py * Typing fixes * More typing fixes * more typing fixes * fixup

commit 86fb71d Author: Deepak Cherian <[email protected]> Date: Thu Oct 3 15:41:50 2019 +0000
groupby repr (#3344) * groupby repr * add test. * test datetime and nondim coord * rename test_da → repr_da * Add whats-new * Update doc/whats-new.rst

commit dd2b803 Author: Ryan May <[email protected]> Date: Wed Oct 2 15:43:44 2019 -0600
Remove setting of universal wheels (#3367) Universal wheels indicate that one wheel supports Python 2 and 3. This is no longer the case for xarray. This causes builds to generate files with names like xarray-0.13.0-py2.py3-none-any.whl, which can cause pip to incorrectly install the wheel when installing from a list of wheel files.

commit 21705e6 Author: crusaderky <[email protected]> Date: Tue Oct 1 19:13:55 2019 +0100
Revisit # noqa annotations (#3359)

commit fb575eb Author: crusaderky <[email protected]> Date: Tue Oct 1 15:11:21 2019 +0100
Fix codecov.io upload on Windows (#3360)

commit 1ab2279 Author: Deepak Cherian <[email protected]> Date: Mon Sep 30 21:12:22 2019 +0000
Add how do I ... section (#3357) * Add how do I ... section * Bugfix. * Update doc/howdoi.rst Co-Authored-By: Maximilian Roos <[email protected]> * Update doc/howdoi.rst Co-Authored-By: Maximilian Roos <[email protected]> * small updates. * Add more.

commit bd1069b Author: Gregory Gundersen <[email protected]> Date: Sun Sep 29 19:39:53 2019 -0400
Add glossary to documentation (#3352) * First draft at terminology glossary. * Made name matching rules more explicit and hopefully clearer. * Amended what's new. * Changes based on feedback. * More changes based on feedback.

commit b51683f Author: Anderson Banihirwe <[email protected]> Date: Sun Sep 29 07:50:21 2019 -0600
Documentation improvements (#3328) * Add examples for full_like, zeros_like, ones_like * Add examples for xr.align * Add examples for xr.merge * Update xr.where docstring * Update xr.dot docstring * Update xarray/core/common.py Co-Authored-By: Deepak Cherian <[email protected]> * Update xarray/core/common.py Co-Authored-By: Deepak Cherian <[email protected]> * Update xr.combine_by_coords docstring * Apply black formatting only * More black formatting * Remove unnecessary pandas bits * Fix indentation issues * Update assign and pipe * Update `Dataset.reindex` with examples * Update `Dataset.fillna` with examples * Address styling issues * Update docstring Co-Authored-By: Deepak Cherian <[email protected]>

commit f3c7da6 Author: Gregory Gundersen <[email protected]> Date: Sat Sep 28 15:57:36 2019 -0400
Remove `complex.nc` from built docs (#3353) * Rolling back to prevent a different issue from leaking into this one. * Amended what's new.

commit 6ece6a1 Author: Tony Tung <[email protected]> Date: Thu Sep 26 22:45:26 2019 -0700
Fix DataArray.to_netcdf type annotation (#3325) It calls Dataset.to_netcdf, which returns Union[bytes, "Delayed", None], so this should as well.

commit 16fdac9 Author: crusaderky <[email protected]> Date: Thu Sep 26 10:38:46 2019 +0100
CI test suites with pinned minimum dependencies (#3346) * CI test suites with pinned minimum dependencies * code review * Clarity re lxml

commit ea101f5 Author: Tom Nicholas <[email protected]> Date: Thu Sep 26 10:51:58 2019 +0200
Bugfix/plot accept coord dim (#3345) Bug in plot.line fixed by ensuring 1D coords are cast down to their associated dims. There were previously two particular cases where this would not happen.

commit 85c9a40 Author: crusaderky <[email protected]> Date: Wed Sep 25 02:40:54 2019 +0100
CI environments overhaul (#3340) * Rationalize and align CI environments. Add many optional dependencies to individual CI suites. * pynio and cdms2 are not available on Windows * cfgrib causes Python interpreter crash on Windows * dtype of np.arange defaults to int64 on Linux and int32 on Windows * Suppress failure to delete file on Windows * Mark hypothesis tests as @slow
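
Two of the squashed commits above (b0c336f and 3c462b9) run `pyupgrade --py36-plus` across the code base. As a minimal, illustrative sketch (not code taken from this commit), the signature rewrite that flag performs once Python 3.6 is the minimum supported version is converting str.format() calls into f-strings:

# Illustrative sketch only; not part of this commit.
name = "xarray"

# Before: Python 2/3-compatible formatting that pyupgrade targets.
greeting = "hello, {}".format(name)

# After `pyupgrade --py36-plus`: the equivalent f-string (available since Python 3.6).
greeting = f"hello, {name}"
print(greeting)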
1 parent a2d05fe commit 8167921

File tree

135 files changed (+6112, -2873 lines)


.pre-commit-config.yaml (+1, -1)

@@ -11,7 +11,7 @@ repos:
     hooks:
     - id: flake8
   - repo: https://github.com/pre-commit/mirrors-mypy
-    rev: v0.720  # Must match ci/requirements/*.yml
+    rev: v0.730  # Must match ci/requirements/*.yml
     hooks:
     - id: mypy
   # run these occasionally, ref discussion https://github.com/pydata/xarray/pull/3194

asv_bench/benchmarks/__init__.py (+2, -2)

@@ -16,9 +16,9 @@ def decorator(func):

 def requires_dask():
     try:
-        import dask  # noqa
+        import dask  # noqa: F401
     except ImportError:
-        raise NotImplementedError
+        raise NotImplementedError()


 def randn(shape, frac_nan=None, chunks=None, seed=0):

asv_bench/benchmarks/dataarray_missing.py (+1, -1)

@@ -5,7 +5,7 @@
 from . import randn, requires_dask

 try:
-    import dask  # noqa
+    import dask  # noqa: F401
 except ImportError:
     pass

asv_bench/benchmarks/dataset_io.py (+1, -1)

@@ -458,7 +458,7 @@ def setup(self):
         try:
             import distributed
         except ImportError:
-            raise NotImplementedError
+            raise NotImplementedError()
         self.client = distributed.Client()
         self.write = create_delayed_write()

azure-pipelines.yml (+24, -16)

@@ -8,16 +8,20 @@ jobs:
 - job: Linux
   strategy:
     matrix:
-      py35-min:
-        conda_env: py35-min
+      py36-bare-minimum:
+        conda_env: py36-bare-minimum
+      py36-min-all-deps:
+        conda_env: py36-min-all-deps
+      py36-min-nep18:
+        conda_env: py36-min-nep18
       py36:
         conda_env: py36
       py37:
         conda_env: py37
       py37-upstream-dev:
         conda_env: py37
         upstream_dev: true
-      py36-flakey:
+      py36-flaky:
         conda_env: py36
         pytest_extra_flags: --run-flaky --run-network-tests
         allow_failure: true

@@ -78,27 +82,31 @@
       mypy .
     displayName: mypy type checks

-- job: Docs
+- job: MinimumVersionsPolicy
   pool:
     vmImage: 'ubuntu-16.04'
   steps:
-  - template: ci/azure/install.yml
-    parameters:
-      env_file: doc/environment.yml
+  - template: ci/azure/add-conda-to-path.yml
   - bash: |
-      source activate xarray-tests
-      cd doc
-      sphinx-build -n -j auto -b html -d _build/doctrees . _build/html
-    displayName: Build HTML docs
+      conda install -y pyyaml
+      python ci/min_deps_check.py ci/requirements/py36-bare-minimum.yml
+      python ci/min_deps_check.py ci/requirements/py36-min-all-deps.yml
+    displayName: minimum versions policy

-- job: LinuxHypothesis
-  variables:
-    conda_env: py36-hypothesis
+- job: Docs
   pool:
     vmImage: 'ubuntu-16.04'
   steps:
   - template: ci/azure/install.yml
+    parameters:
+      env_file: ci/requirements/doc.yml
+  - bash: |
+      source activate xarray-tests
+      # Replicate the exact environment created by the readthedocs CI
+      conda install --yes --quiet -c pkgs/main mock pillow sphinx sphinx_rtd_theme
+    displayName: Replicate readthedocs CI environment
   - bash: |
       source activate xarray-tests
-      pytest properties
-    displayName: Property based tests
+      cd doc
+      sphinx-build -n -j auto -b html -d _build/doctrees . _build/html
+    displayName: Build HTML docs

ci/azure/unit-tests.yml (+3, -2)

@@ -11,15 +11,16 @@ steps:
 # https://github.com/microsoft/azure-pipelines-tasks/issues/9302
 - bash: |
     source activate xarray-tests
-    pytest xarray \
+    pytest \
       --junitxml=junit/test-results.xml \
       --cov=xarray \
       --cov-report=xml \
       $(pytest_extra_flags) || [ "$ALLOW_FAILURE" = "true" ]
   displayName: Run tests

 - bash: |
-    bash <(curl https://codecov.io/bash) -t 688f4d53-31bb-49b5-8370-4ce6f792cf3d
+    curl https://codecov.io/bash > codecov.sh
+    bash codecov.sh -t 688f4d53-31bb-49b5-8370-4ce6f792cf3d
   displayName: Upload coverage to codecov.io

 # TODO: publish coverage results to Azure, once we can merge them across

ci/min_deps_check.py (new file, +187 lines)

"""Fetch from conda database all available versions of the xarray dependencies and their
publication date. Compare it against requirements/py36-min-all-deps.yml to verify the
policy on obsolete dependencies is being followed. Print a pretty report :)
"""
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor
from datetime import datetime, timedelta
from typing import Dict, Iterator, Tuple

import yaml

IGNORE_DEPS = {
    "black",
    "coveralls",
    "flake8",
    "hypothesis",
    "mypy",
    "pip",
    "pytest",
    "pytest-cov",
    "pytest-env",
}

POLICY_MONTHS = {"python": 42, "numpy": 24, "pandas": 12, "scipy": 12}
POLICY_MONTHS_DEFAULT = 6

has_errors = False


def error(msg: str) -> None:
    global has_errors
    has_errors = True
    print("ERROR:", msg)


def parse_requirements(fname) -> Iterator[Tuple[str, int, int]]:
    """Load requirements/py36-min-all-deps.yml

    Yield (package name, major version, minor version)
    """
    global has_errors

    with open(fname) as fh:
        contents = yaml.safe_load(fh)
    for row in contents["dependencies"]:
        if isinstance(row, dict) and list(row) == ["pip"]:
            continue
        pkg, eq, version = row.partition("=")
        if pkg.rstrip("<>") in IGNORE_DEPS:
            continue
        if pkg.endswith("<") or pkg.endswith(">") or eq != "=":
            error("package should be pinned with exact version: " + row)
            continue
        try:
            major, minor = version.split(".")
        except ValueError:
            error("expected major.minor (without patch): " + row)
            continue
        try:
            yield pkg, int(major), int(minor)
        except ValueError:
            error("failed to parse version: " + row)


def query_conda(pkg: str) -> Dict[Tuple[int, int], datetime]:
    """Query the conda repository for a specific package

    Return map of {(major version, minor version): publication date}
    """
    stdout = subprocess.check_output(
        ["conda", "search", pkg, "--info", "-c", "defaults", "-c", "conda-forge"]
    )
    out = {}  # type: Dict[Tuple[int, int], datetime]
    major = None
    minor = None

    for row in stdout.decode("utf-8").splitlines():
        label, _, value = row.partition(":")
        label = label.strip()
        if label == "file name":
            value = value.strip()[len(pkg) :]
            major, minor = value.split("-")[1].split(".")[:2]
            major = int(major)
            minor = int(minor)
        if label == "timestamp":
            assert major is not None
            assert minor is not None
            ts = datetime.strptime(value.split()[0].strip(), "%Y-%m-%d")

            if (major, minor) in out:
                out[major, minor] = min(out[major, minor], ts)
            else:
                out[major, minor] = ts

    # Hardcoded fix to work around incorrect dates in conda
    if pkg == "python":
        out.update(
            {
                (2, 7): datetime(2010, 6, 3),
                (3, 5): datetime(2015, 9, 13),
                (3, 6): datetime(2016, 12, 23),
                (3, 7): datetime(2018, 6, 27),
                (3, 8): datetime(2019, 10, 14),
            }
        )

    return out


def process_pkg(
    pkg: str, req_major: int, req_minor: int
) -> Tuple[str, int, int, str, int, int, str, str]:
    """Compare package version from requirements file to available versions in conda.
    Return row to build pandas dataframe:

    - package name
    - major version in requirements file
    - minor version in requirements file
    - publication date of version in requirements file (YYYY-MM-DD)
    - major version suggested by policy
    - minor version suggested by policy
    - publication date of version suggested by policy (YYYY-MM-DD)
    - status ("<", "=", "> (!)")
    """
    print("Analyzing %s..." % pkg)
    versions = query_conda(pkg)

    try:
        req_published = versions[req_major, req_minor]
    except KeyError:
        error("not found in conda: " + pkg)
        return pkg, req_major, req_minor, "-", 0, 0, "-", "(!)"

    policy_months = POLICY_MONTHS.get(pkg, POLICY_MONTHS_DEFAULT)
    policy_published = datetime.now() - timedelta(days=policy_months * 30)

    policy_major = req_major
    policy_minor = req_minor
    policy_published_actual = req_published
    for (major, minor), published in reversed(sorted(versions.items())):
        if published < policy_published:
            break
        policy_major = major
        policy_minor = minor
        policy_published_actual = published

    if (req_major, req_minor) < (policy_major, policy_minor):
        status = "<"
    elif (req_major, req_minor) > (policy_major, policy_minor):
        status = "> (!)"
        error("Package is too new: " + pkg)
    else:
        status = "="

    return (
        pkg,
        req_major,
        req_minor,
        req_published.strftime("%Y-%m-%d"),
        policy_major,
        policy_minor,
        policy_published_actual.strftime("%Y-%m-%d"),
        status,
    )


def main() -> None:
    fname = sys.argv[1]
    with ThreadPoolExecutor(8) as ex:
        futures = [
            ex.submit(process_pkg, pkg, major, minor)
            for pkg, major, minor in parse_requirements(fname)
        ]
        rows = [f.result() for f in futures]

    print("Package       Required          Policy            Status")
    print("------------- ----------------- ----------------- ------")
    fmt = "{:13} {:>1d}.{:<2d} ({:10}) {:>1d}.{:<2d} ({:10}) {}"
    for row in rows:
        print(fmt.format(*row))

    assert not has_errors


if __name__ == "__main__":
    main()
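
The MinimumVersionsPolicy job added in azure-pipelines.yml above runs this script as `python ci/min_deps_check.py ci/requirements/py36-bare-minimum.yml` (and again for py36-min-all-deps.yml). As a minimal sketch of the policy logic, not itself part of the commit, the cutoff date that a pinned version is compared against is derived from POLICY_MONTHS; "pandas" is used here purely as an illustration:

# Minimal sketch; mirrors the constants and cutoff computation in ci/min_deps_check.py
# but is not itself part of this commit.
from datetime import datetime, timedelta

POLICY_MONTHS = {"python": 42, "numpy": 24, "pandas": 12, "scipy": 12}
POLICY_MONTHS_DEFAULT = 6

pkg = "pandas"
months = POLICY_MONTHS.get(pkg, POLICY_MONTHS_DEFAULT)
cutoff = datetime.now() - timedelta(days=months * 30)

# Any release published on or after this date is recent enough; the oldest such
# major.minor version becomes the version suggested by the policy.
print(f"{pkg}: policy cutoff is {cutoff:%Y-%m-%d}")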

ci/requirements/doc.yml (new file, +21 lines)

name: xarray-docs
channels:
  # Don't change to pkgs/main, as it causes random timeouts in readthedocs
  - conda-forge
dependencies:
  - python=3.7
  - bottleneck
  - cartopy
  - h5netcdf
  - ipython
  - iris
  - netcdf4
  - numpy
  - numpydoc
  - pandas<0.25  # Hack around https://github.com/pydata/xarray/issues/3369
  - rasterio
  - seaborn
  - sphinx
  - sphinx-gallery
  - sphinx_rtd_theme
  - zarr

ci/requirements/py35-min.yml (-15 lines): this file was deleted.

ci/requirements/py36-bare-minimum.yml (new file, +11 lines)

name: xarray-tests
channels:
  - conda-forge
dependencies:
  - python=3.6
  - coveralls
  - pytest
  - pytest-cov
  - pytest-env
  - numpy=1.14
  - pandas=0.24

ci/requirements/py36-hypothesis.yml (-29 lines): this file was deleted.
