Commit dbb79f0 (1 parent: 25debff)

Fix typos across the code, doc and comments

24 files changed, +36 −36 lines

design_notes/flexible_indexes_notes.md (+1 −1)

@@ -71,7 +71,7 @@ An `XarrayIndex` subclass must/should/may implement the following properties/met
 - a `data` property to access index's data and map it to coordinate data (see [Section 4](#4-indexvariable))
 - a `__getitem__()` implementation to propagate the index through DataArray/Dataset indexing operations
 - `equals()`, `union()` and `intersection()` methods for data alignment (see [Section 2.6](#26-using-indexes-for-data-alignment))
-- Xarray coordinate getters (see [Section 2.2.4](#224-implicit-coodinates))
+- Xarray coordinate getters (see [Section 2.2.4](#224-implicit-coordinates))
 - a method that may return a new index and that will be called when one of the corresponding coordinates is dropped from the Dataset/DataArray (multi-coordinate indexes)
 - `encode()`/`decode()` methods that would allow storage-agnostic serialization and fast-path reconstruction of the underlying index object(s) (see [Section 2.8](#28-index-encoding))
 - one or more "non-standard" methods or properties that could be leveraged in Xarray 3rd-party extensions like Dataset/DataArray accessors (see [Section 2.7](#27-using-indexes-for-other-purposes))

design_notes/grouper_objects.md (+1 −1)

@@ -166,7 +166,7 @@ where `|` represents chunk boundaries. A simple rechunking to
 ```
 000|111122|3333
 ```
-would make this resampling reduction an embarassingly parallel blockwise problem.
+would make this resampling reduction an embarrassingly parallel blockwise problem.

 Similarly consider monthly-mean climatologies for which the month numbers might be
 ```
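The blockwise condition behind the hunk above — no chunk boundary may split a group of labels — can be checked with a small stand-alone sketch (function names here are illustrative, not part of xarray):

```python
def chunk_boundaries(chunks):
    """Cumulative split points of a chunking, e.g. (3, 6, 4) -> {3, 9}."""
    points, pos = set(), 0
    for size in chunks[:-1]:
        pos += size
        points.add(pos)
    return points


def group_boundaries(labels):
    """Positions where the group label changes, e.g. '0001111223333' -> {3, 7, 9}."""
    return {i for i in range(1, len(labels)) if labels[i] != labels[i - 1]}


def is_blockwise(labels, chunks):
    """A grouped reduction is embarrassingly parallel (blockwise) iff every
    chunk boundary coincides with a group boundary, so no group is split
    across chunks."""
    return chunk_boundaries(chunks) <= group_boundaries(labels)


# The example from the design note: 000|111122|3333
labels = "0001111223333"
print(is_blockwise(labels, (3, 6, 4)))  # True: no group is split
print(is_blockwise(labels, (4, 5, 4)))  # False: the first boundary splits group 1
```

Note that a chunk may still hold several whole groups (as `111122` does above); only splitting a group across chunks forces cross-chunk communication.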

design_notes/named_array_design_doc.md (+1 −1)

@@ -258,7 +258,7 @@ Questions:
 Variable.coarsen_reshape
 Variable.rolling_window

-Variable.set_dims # split this into broadcas_to and expand_dims
+Variable.set_dims # split this into broadcast_to and expand_dims


 # Reordering/Reshaping

doc/user-guide/dask.rst (+2 −2)

@@ -298,7 +298,7 @@ Automatic parallelization with ``apply_ufunc`` and ``map_blocks``

 .. tip::

-    Some problems can become embarassingly parallel and thus easy to parallelize
+    Some problems can become embarrassingly parallel and thus easy to parallelize
     automatically by rechunking to a frequency, e.g. ``ds.chunk(time=TimeResampler("YE"))``.
     See :py:meth:`Dataset.chunk` for more.

@@ -559,7 +559,7 @@ larger chunksizes.

 .. tip::

-    Many time domain problems become amenable to an embarassingly parallel or blockwise solution
+    Many time domain problems become amenable to an embarrassingly parallel or blockwise solution
     (e.g. using :py:func:`xarray.map_blocks`, :py:func:`dask.array.map_blocks`, or
     :py:func:`dask.array.blockwise`) by rechunking to a frequency along the time dimension.
     Provide :py:class:`xarray.groupers.TimeResampler` objects to :py:meth:`Dataset.chunk` to do so.
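The rechunking-to-a-frequency idea in these tips can be illustrated without xarray or dask: given one group label per time step (e.g. the year of each timestamp), emit one chunk per contiguous run of equal labels, so every chunk holds complete groups. A minimal sketch of that idea only (the function name is made up, not xarray's API):

```python
from itertools import groupby


def chunks_for_frequency(labels):
    """One chunk per contiguous run of equal labels, so each chunk
    contains whole groups and grouped reductions become blockwise."""
    return tuple(len(list(run)) for _, run in groupby(labels))


# E.g. monthly data labelled by year: 12 + 12 + 6 months
years = [2001] * 12 + [2002] * 12 + [2003] * 6
print(chunks_for_frequency(years))  # (12, 12, 6)
```

In real use, ``Dataset.chunk`` with a ``TimeResampler`` derives those labels from the time coordinate and the requested frequency string.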

doc/user-guide/data-structures.rst (+1 −1)

@@ -289,7 +289,7 @@ pressure that were made under various conditions:
 * the measurements were made on four different days;
 * they were made at two separate locations, which we will represent using
   their latitude and longitude; and
-* they were made using instruments by three different manufacutrers, which we
+* they were made using instruments by three different manufacturers, which we
   will refer to as `'manufac1'`, `'manufac2'`, and `'manufac3'`.

 .. ipython:: python

doc/user-guide/pandas.rst (+1 −1)

@@ -120,7 +120,7 @@ Particularly after a roundtrip, the following deviations are noted:

 - a non-dimension Dataset ``coordinate`` is converted into ``variable``
 - a non-dimension DataArray ``coordinate`` is not converted
-- ``dtype`` is not allways the same (e.g. "str" is converted to "object")
+- ``dtype`` is not always the same (e.g. "str" is converted to "object")
 - ``attrs`` metadata is not conserved

 To avoid these problems, the third-party `ntv-pandas <https://github.com/loco-philippe/ntv-pandas>`__ library offers lossless and reversible conversions between

doc/whats-new.rst (+3 −3)

@@ -118,7 +118,7 @@ New Features
   (:issue:`6610`, :pull:`8840`).
   By `Deepak Cherian <https://github.com/dcherian>`_.
 - Allow rechunking to a frequency using ``Dataset.chunk(time=TimeResampler("YE"))`` syntax. (:issue:`7559`, :pull:`9109`)
-  Such rechunking allows many time domain analyses to be executed in an embarassingly parallel fashion.
+  Such rechunking allows many time domain analyses to be executed in an embarrassingly parallel fashion.
   By `Deepak Cherian <https://github.com/dcherian>`_.
 - Allow per-variable specification of ```mask_and_scale``, ``decode_times``, ``decode_timedelta``
   ``use_cftime`` and ``concat_characters`` params in :py:func:`~xarray.open_dataset` (:pull:`9218`).
@@ -151,7 +151,7 @@ Breaking changes

 Bug fixes
 ~~~~~~~~~
-- Fix scatter plot broadcasting unneccesarily. (:issue:`9129`, :pull:`9206`)
+- Fix scatter plot broadcasting unnecessarily. (:issue:`9129`, :pull:`9206`)
   By `Jimmy Westling <https://github.com/illviljan>`_.
 - Don't convert custom indexes to ``pandas`` indexes when computing a diff (:pull:`9157`)
   By `Justus Magin <https://github.com/keewis>`_.
@@ -614,7 +614,7 @@ Internal Changes
 ~~~~~~~~~~~~~~~~

 - The implementation of :py:func:`map_blocks` has changed to minimize graph size and duplication of data.
-  This should be a strict improvement even though the graphs are not always embarassingly parallel any more.
+  This should be a strict improvement even though the graphs are not always embarrassingly parallel any more.
   Please open an issue if you spot a regression. (:pull:`8412`, :issue:`8409`).
   By `Deepak Cherian <https://github.com/dcherian>`_.
 - Remove null values before plotting. (:pull:`8535`).

xarray/coding/cftime_offsets.py (+3 −3)

@@ -739,7 +739,7 @@ def _generate_anchored_deprecated_frequencies(
     return pairs


-_DEPRECATED_FREQUENICES: dict[str, str] = {
+_DEPRECATED_FREQUENCIES: dict[str, str] = {
     "A": "YE",
     "Y": "YE",
     "AS": "YS",
@@ -765,7 +765,7 @@ def _generate_anchored_deprecated_frequencies(


 def _emit_freq_deprecation_warning(deprecated_freq):
-    recommended_freq = _DEPRECATED_FREQUENICES[deprecated_freq]
+    recommended_freq = _DEPRECATED_FREQUENCIES[deprecated_freq]
     message = _DEPRECATION_MESSAGE.format(
         deprecated_freq=deprecated_freq, recommended_freq=recommended_freq
     )
@@ -784,7 +784,7 @@ def to_offset(freq: BaseCFTimeOffset | str, warn: bool = True) -> BaseCFTimeOffs
     freq_data = match.groupdict()

     freq = freq_data["freq"]
-    if warn and freq in _DEPRECATED_FREQUENICES:
+    if warn and freq in _DEPRECATED_FREQUENCIES:
         _emit_freq_deprecation_warning(freq)
     multiples = freq_data["multiple"]
     multiples = 1 if multiples is None else int(multiples)
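The renamed mapping drives a simple deprecate-and-redirect pattern: look the old alias up in a table, warn, and substitute the replacement. A self-contained sketch of that pattern — the table below is a small illustrative subset, and ``to_alias`` is a made-up name standing in for the real ``to_offset``:

```python
import warnings

# Illustrative subset of a deprecated-alias table, modeled on the
# _DEPRECATED_FREQUENCIES mapping above (not the full real table).
_DEPRECATED_FREQUENCIES: dict[str, str] = {
    "A": "YE",
    "Y": "YE",
    "AS": "YS",
}


def _emit_freq_deprecation_warning(deprecated_freq: str) -> None:
    recommended = _DEPRECATED_FREQUENCIES[deprecated_freq]
    warnings.warn(
        f"{deprecated_freq!r} is deprecated; use {recommended!r} instead.",
        FutureWarning,
        stacklevel=3,
    )


def to_alias(freq: str, warn: bool = True) -> str:
    """Normalize a frequency alias, warning if it is deprecated."""
    if warn and freq in _DEPRECATED_FREQUENCIES:
        _emit_freq_deprecation_warning(freq)
    # Deprecated aliases map to their replacement; others pass through.
    return _DEPRECATED_FREQUENCIES.get(freq, freq)
```

For example, ``to_alias("A")`` returns ``"YE"`` and emits a ``FutureWarning``, while ``warn=False`` suppresses the warning, mirroring the ``warn`` flag on ``to_offset``.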

xarray/core/dataset.py (+1 −1)

@@ -9749,7 +9749,7 @@ def eval(
         Calculate an expression supplied as a string in the context of the dataset.

         This is currently experimental; the API may change particularly around
-        assignments, which currently returnn a ``Dataset`` with the additional variable.
+        assignments, which currently return a ``Dataset`` with the additional variable.
         Currently only the ``python`` engine is supported, which has the same
         performance as executing in python.


xarray/core/datatree.py (+1 −1)

@@ -1520,7 +1520,7 @@ def to_netcdf(
         mode : {"w", "a"}, default: "w"
             Write ('w') or append ('a') mode. If mode='w', any existing file at
             this location will be overwritten. If mode='a', existing variables
-            will be overwritten. Only appies to the root group.
+            will be overwritten. Only applies to the root group.
         encoding : dict, optional
             Nested dictionary with variable names as keys and dictionaries of
             variable specific encodings as values, e.g.,

xarray/core/datatree_ops.py (+1 −1)

@@ -224,7 +224,7 @@ def insert_doc_addendum(docstring: str | None, addendum: str) -> str | None:
     Dataset directly as well as the mixins: DataWithCoords, DatasetAggregations, and DatasetOpsMixin.

     The majority of the docstrings fall into a parseable pattern. Those that
-    don't, just have the addendum appeneded after. None values are returned.
+    don't, just have the addendum appended after. None values are returned.

     """
     if docstring is None:

xarray/core/indexes.py (+1 −1)

@@ -1802,7 +1802,7 @@ def check_variables():


 def _apply_indexes_fast(indexes: Indexes[Index], args: Mapping[Any, Any], func: str):
     # This function avoids the call to indexes.group_by_index
-    # which is really slow when repeatidly iterating through
+    # which is really slow when repeatedly iterating through
     # an array. However, it fails to return the correct ID for
     # multi-index arrays
     indexes_fast, coords = indexes._indexes, indexes._variables

xarray/core/merge.py (+1 −1)

@@ -267,7 +267,7 @@ def merge_collected(
                 index, other_index, variable, other_var, index_cmp_cache
             ):
                 raise MergeError(
-                    f"conflicting values/indexes on objects to be combined fo coordinate {name!r}\n"
+                    f"conflicting values/indexes on objects to be combined for coordinate {name!r}\n"
                     f"first index: {index!r}\nsecond index: {other_index!r}\n"
                     f"first variable: {variable!r}\nsecond variable: {other_var!r}\n"
                 )

xarray/core/variable.py (+1 −1)

@@ -1658,7 +1658,7 @@ def reduce( # type: ignore[override]
             _get_keep_attrs(default=False) if keep_attrs is None else keep_attrs
         )

-        # Noe that the call order for Variable.mean is
+        # Note that the call order for Variable.mean is
         # Variable.mean -> NamedArray.mean -> Variable.reduce
         # -> NamedArray.reduce
         result = super().reduce(

xarray/datatree_/docs/source/data-structures.rst (+1 −1)

@@ -40,7 +40,7 @@ stored under hashable keys), and so has the same key properties:
 - ``dims``: a dictionary mapping of dimension names to lengths, for the variables in this node,
 - ``data_vars``: a dict-like container of DataArrays corresponding to variables in this node,
 - ``coords``: another dict-like container of DataArrays, corresponding to coordinate variables in this node,
-- ``attrs``: dict to hold arbitary metadata relevant to data in this node.
+- ``attrs``: dict to hold arbitrary metadata relevant to data in this node.

 A single ``DataTree`` object acts much like a single ``Dataset`` object, and has a similar set of dict-like methods
 defined upon it. However, ``DataTree``'s can also contain other ``DataTree`` objects, so they can be thought of as nested dict-like

xarray/datatree_/docs/source/hierarchical-data.rst (+1 −1)

@@ -133,7 +133,7 @@ We can add Herbert to the family tree without displacing Homer by :py:meth:`~Dat

 .. note::
    This example shows a minor subtlety - the returned tree has Homer's brother listed as ``"Herbert"``,
-   but the original node was named "Herbert". Not only are names overriden when stored as keys like this,
+   but the original node was named "Herbert". Not only are names overridden when stored as keys like this,
    but the new node is a copy, so that the original node that was reference is unchanged (i.e. ``herbert.name == "Herb"`` still).
    In other words, nodes are copied into trees, not inserted into them.
    This is intentional, and mirrors the behaviour when storing named ``xarray.DataArray`` objects inside datasets.

xarray/plot/dataset_plot.py (+1 −1)

@@ -737,7 +737,7 @@ def _temp_dataarray(ds: Dataset, y: Hashable, locals_: dict[str, Any]) -> DataAr
             coords[key] = darray
             dims.update(darray.dims)

-    # Trim dataset from unneccessary dims:
+    # Trim dataset from unnecessary dims:
     ds_trimmed = ds.drop_dims(ds.sizes.keys() - dims) # TODO: Use ds.dims in the future

     # The dataarray has to include all the dims. Broadcast to that shape

xarray/plot/utils.py (+1 −1)

@@ -1170,7 +1170,7 @@ def _legend_add_subtitle(handles, labels, text):

    if text and len(handles) > 1:
        # Create a blank handle that's not visible, the
-        # invisibillity will be used to discern which are subtitles
+        # invisibility will be used to discern which are subtitles
        # or not:
        blank_handle = plt.Line2D([], [], label=text)
        blank_handle.set_visible(False)

xarray/tests/test_backends.py (+1 −1)

@@ -5043,7 +5043,7 @@ def test_extract_nc4_variable_encoding_netcdf4(self):

     def test_extract_h5nc_encoding(self) -> None:
         # not supported with h5netcdf (yet)
-        var = xr.Variable(("x",), [1, 2, 3], {}, {"least_sigificant_digit": 2})
+        var = xr.Variable(("x",), [1, 2, 3], {}, {"least_significant_digit": 2})
         with pytest.raises(ValueError, match=r"unexpected encoding"):
             _extract_nc4_variable_encoding(var, raise_on_invalid=True)


xarray/tests/test_dask.py (+1 −1)

@@ -1797,6 +1797,6 @@ def test_minimize_graph_size():
         actual = len([key for key in graph if var in key[0]])
         # assert that we only include each chunk of an index variable
         # is only included once, not the product of number of chunks of
-        # all the other dimenions.
+        # all the other dimensions.
         # e.g. previously for 'x', actual == numchunks['y'] * numchunks['z']
         assert actual == numchunks[var], (actual, numchunks[var])

xarray/tests/test_dataarray.py (+4 −4)

@@ -6650,8 +6650,8 @@ def test_to_and_from_iris(self) -> None:
             ),
         )

-        for coord, orginal_key in zip((actual.coords()), original.coords):
-            original_coord = original.coords[orginal_key]
+        for coord, original_key in zip((actual.coords()), original.coords):
+            original_coord = original.coords[original_key]
             assert coord.var_name == original_coord.name
             assert_array_equal(
                 coord.points, CFDatetimeCoder().encode(original_coord.variable).values
@@ -6726,8 +6726,8 @@ def test_to_and_from_iris_dask(self) -> None:
             ),
         )

-        for coord, orginal_key in zip((actual.coords()), original.coords):
-            original_coord = original.coords[orginal_key]
+        for coord, original_key in zip((actual.coords()), original.coords):
+            original_coord = original.coords[original_key]
             assert coord.var_name == original_coord.name
             assert_array_equal(
                 coord.points, CFDatetimeCoder().encode(original_coord.variable).values

xarray/tests/test_dataset.py (+1 −1)

@@ -6742,7 +6742,7 @@ def test_pad(self, padded_dim_name, constant_values) -> None:
         else:
             np.testing.assert_equal(padded.sizes[ds_dim_name], ds_dim)

-        # check if coord "numbers" with dimention dim3 is paded correctly
+        # check if coord "numbers" with dimension dim3 is padded correctly
         if padded_dim_name == "dim3":
             assert padded["numbers"][[0, -1]].isnull().all()
             # twarning: passes but dtype changes from int to float

xarray/tests/test_plot.py (+1 −1)

@@ -2004,7 +2004,7 @@ def test_plot_rgba_image_transposed(self) -> None:
             easy_array((4, 10, 15), start=0), dims=["band", "y", "x"]
         ).plot.imshow()

-    def test_warns_ambigious_dim(self) -> None:
+    def test_warns_ambiguous_dim(self) -> None:
         arr = DataArray(easy_array((3, 3, 3)), dims=["y", "x", "band"])
         with pytest.warns(UserWarning):
             arr.plot.imshow()

xarray/tests/test_variable.py (+5 −5)

@@ -576,7 +576,7 @@ def test_copy_deep_recursive(self) -> None:
         # lets just ensure that deep copy works without RecursionError
         v.copy(deep=True)

-        # indirect recusrion
+        # indirect recursion
         v2 = self.cls("y", [2, 3])
         v.attrs["other"] = v2
         v2.attrs["other"] = v
@@ -654,7 +654,7 @@ def test_aggregate_complex(self):
         expected = Variable((), 0.5 + 1j)
         assert_allclose(v.mean(), expected)

-    def test_pandas_cateogrical_dtype(self):
+    def test_pandas_categorical_dtype(self):
         data = pd.Categorical(np.arange(10, dtype="int64"))
         v = self.cls("x", data)
         print(v) # should not error
@@ -1575,13 +1575,13 @@ def test_transpose_0d(self):
         actual = variable.transpose()
         assert_identical(actual, variable)

-    def test_pandas_cateogrical_dtype(self):
+    def test_pandas_categorical_dtype(self):
         data = pd.Categorical(np.arange(10, dtype="int64"))
         v = self.cls("x", data)
         print(v) # should not error
         assert pd.api.types.is_extension_array_dtype(v.dtype)

-    def test_pandas_cateogrical_no_chunk(self):
+    def test_pandas_categorical_no_chunk(self):
         data = pd.Categorical(np.arange(10, dtype="int64"))
         v = self.cls("x", data)
         with pytest.raises(
@@ -2386,7 +2386,7 @@ def test_multiindex(self):
     def test_pad(self, mode, xr_arg, np_arg):
         super().test_pad(mode, xr_arg, np_arg)

-    def test_pandas_cateogrical_dtype(self):
+    def test_pandas_categorical_dtype(self):
         data = pd.Categorical(np.arange(10, dtype="int64"))
         with pytest.raises(ValueError, match="was found to be a Pandas ExtensionArray"):
             self.cls("x", data)
