CLN: Fix some spelling #36644

Merged (5 commits) on Sep 26, 2020
2 changes: 1 addition & 1 deletion doc/source/development/policies.rst
@@ -16,7 +16,7 @@ deprecations, API compatibility, and version numbering.

A pandas release number is made up of ``MAJOR.MINOR.PATCH``.

- API breaking changes should only occur in **major** releases. Theses changes
+ API breaking changes should only occur in **major** releases. These changes
will be documented, with clear guidance on what is changing, why it's changing,
and how to migrate existing code to the new behavior.

6 changes: 3 additions & 3 deletions doc/source/user_guide/timeseries.rst
@@ -282,20 +282,20 @@ You can pass only the columns that you need to assemble.
Invalid data
~~~~~~~~~~~~

- The default behavior, ``errors='raise'``, is to raise when unparseable:
+ The default behavior, ``errors='raise'``, is to raise when unparsable:

.. code-block:: ipython

In [2]: pd.to_datetime(['2009/07/31', 'asd'], errors='raise')
ValueError: Unknown string format

- Pass ``errors='ignore'`` to return the original input when unparseable:
+ Pass ``errors='ignore'`` to return the original input when unparsable:

.. ipython:: python

pd.to_datetime(['2009/07/31', 'asd'], errors='ignore')

- Pass ``errors='coerce'`` to convert unparseable data to ``NaT`` (not a time):
+ Pass ``errors='coerce'`` to convert unparsable data to ``NaT`` (not a time):

.. ipython:: python

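The ``errors`` modes touched in this hunk can be exercised directly. A minimal sketch, assuming a reasonably recent pandas (only ``raise`` and ``coerce`` are shown, since ``ignore`` was later deprecated, which is not part of this PR):

```python
import pandas as pd

# errors='coerce': unparsable entries become NaT rather than raising
parsed = pd.to_datetime(["2009/07/31", "asd"], errors="coerce")
print(parsed)

# errors='raise' (the default): unparsable input raises a ValueError
try:
    pd.to_datetime(["2009/07/31", "asd"], errors="raise")
except ValueError as exc:
    print("raised:", exc)
```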
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.17.0.rst
@@ -40,7 +40,7 @@ Highlights include:
- Plotting methods are now available as attributes of the ``.plot`` accessor, see :ref:`here <whatsnew_0170.plot>`
- The sorting API has been revamped to remove some long-time inconsistencies, see :ref:`here <whatsnew_0170.api_breaking.sorting>`
- Support for a ``datetime64[ns]`` with timezones as a first-class dtype, see :ref:`here <whatsnew_0170.tz>`
- - The default for ``to_datetime`` will now be to ``raise`` when presented with unparseable formats,
+ - The default for ``to_datetime`` will now be to ``raise`` when presented with unparsable formats,
previously this would return the original input. Also, date parse
functions now return consistent results. See :ref:`here <whatsnew_0170.api_breaking.to_datetime>`
- The default for ``dropna`` in ``HDFStore`` has changed to ``False``, to store by default all rows even
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v0.20.0.rst
@@ -1201,7 +1201,7 @@ Modules privacy has changed

Some formerly public python/c/c++/cython extension modules have been moved and/or renamed. These are all removed from the public API.
Furthermore, the ``pandas.core``, ``pandas.compat``, and ``pandas.util`` top-level modules are now considered to be PRIVATE.
- If indicated, a deprecation warning will be issued if you reference theses modules. (:issue:`12588`)
+ If indicated, a deprecation warning will be issued if you reference these modules. (:issue:`12588`)

.. csv-table::
:header: "Previous Location", "New Location", "Deprecated"
4 changes: 2 additions & 2 deletions doc/source/whatsnew/v0.23.0.rst
@@ -998,7 +998,7 @@ Datetimelike API changes
- Addition and subtraction of ``NaN`` from a :class:`Series` with ``dtype='timedelta64[ns]'`` will raise a ``TypeError`` instead of treating the ``NaN`` as ``NaT`` (:issue:`19274`)
- ``NaT`` division with :class:`datetime.timedelta` will now return ``NaN`` instead of raising (:issue:`17876`)
- Operations between a :class:`Series` with dtype ``dtype='datetime64[ns]'`` and a :class:`PeriodIndex` will correctly raises ``TypeError`` (:issue:`18850`)
- - Subtraction of :class:`Series` with timezone-aware ``dtype='datetime64[ns]'`` with mis-matched timezones will raise ``TypeError`` instead of ``ValueError`` (:issue:`18817`)
+ - Subtraction of :class:`Series` with timezone-aware ``dtype='datetime64[ns]'`` with mismatched timezones will raise ``TypeError`` instead of ``ValueError`` (:issue:`18817`)
- :class:`Timestamp` will no longer silently ignore unused or invalid ``tz`` or ``tzinfo`` keyword arguments (:issue:`17690`)
- :class:`Timestamp` will no longer silently ignore invalid ``freq`` arguments (:issue:`5168`)
- :class:`CacheableOffset` and :class:`WeekDay` are no longer available in the ``pandas.tseries.offsets`` module (:issue:`17830`)
@@ -1273,7 +1273,7 @@ Timedelta
- Bug in :func:`Period.asfreq` where periods near ``datetime(1, 1, 1)`` could be converted incorrectly (:issue:`19643`, :issue:`19834`)
- Bug in :func:`Timedelta.total_seconds()` causing precision errors, for example ``Timedelta('30S').total_seconds()==30.000000000000004`` (:issue:`19458`)
- Bug in :func:`Timedelta.__rmod__` where operating with a ``numpy.timedelta64`` returned a ``timedelta64`` object instead of a ``Timedelta`` (:issue:`19820`)
- - Multiplication of :class:`TimedeltaIndex` by ``TimedeltaIndex`` will now raise ``TypeError`` instead of raising ``ValueError`` in cases of length mis-match (:issue:`19333`)
+ - Multiplication of :class:`TimedeltaIndex` by ``TimedeltaIndex`` will now raise ``TypeError`` instead of raising ``ValueError`` in cases of length mismatch (:issue:`19333`)
- Bug in indexing a :class:`TimedeltaIndex` with a ``np.timedelta64`` object which was raising a ``TypeError`` (:issue:`20393`)
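The ``total_seconds`` precision fix listed above is easy to check. A small sketch (note that newer pandas spells the unit ``"30s"`` rather than ``"30S"``):

```python
import pandas as pd

# After GH19458, total_seconds() is exact for whole-second Timedeltas
td = pd.Timedelta("30s")
print(td.total_seconds())
```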


6 changes: 3 additions & 3 deletions doc/source/whatsnew/v0.24.0.rst
@@ -419,7 +419,7 @@ Other enhancements
- :meth:`Index.difference`, :meth:`Index.intersection`, :meth:`Index.union`, and :meth:`Index.symmetric_difference` now have an optional ``sort`` parameter to control whether the results should be sorted if possible (:issue:`17839`, :issue:`24471`)
- :meth:`read_excel()` now accepts ``usecols`` as a list of column names or callable (:issue:`18273`)
- :meth:`MultiIndex.to_flat_index` has been added to flatten multiple levels into a single-level :class:`Index` object.
- - :meth:`DataFrame.to_stata` and :class:`pandas.io.stata.StataWriter117` can write mixed sting columns to Stata strl format (:issue:`23633`)
+ - :meth:`DataFrame.to_stata` and :class:`pandas.io.stata.StataWriter117` can write mixed string columns to Stata strl format (:issue:`23633`)
- :meth:`DataFrame.between_time` and :meth:`DataFrame.at_time` have gained the ``axis`` parameter (:issue:`8839`)
- :meth:`DataFrame.to_records` now accepts ``index_dtypes`` and ``column_dtypes`` parameters to allow different data types in stored column and index records (:issue:`18146`)
- :class:`IntervalIndex` has gained the :attr:`~IntervalIndex.is_overlapping` attribute to indicate if the ``IntervalIndex`` contains any overlapping intervals (:issue:`23309`)
@@ -510,7 +510,7 @@ even when ``'\n'`` was passed in ``line_terminator``.

*New behavior* on Windows:

- Passing ``line_terminator`` explicitly, set thes ``line terminator`` to that character.
+ Passing ``line_terminator`` explicitly, set the ``line terminator`` to that character.

.. code-block:: ipython

@@ -1885,7 +1885,7 @@ Reshaping
- :meth:`DataFrame.nlargest` and :meth:`DataFrame.nsmallest` now returns the correct n values when keep != 'all' also when tied on the first columns (:issue:`22752`)
- Constructing a DataFrame with an index argument that wasn't already an instance of :class:`~pandas.core.Index` was broken (:issue:`22227`).
- Bug in :class:`DataFrame` prevented list subclasses to be used to construction (:issue:`21226`)
- - Bug in :func:`DataFrame.unstack` and :func:`DataFrame.pivot_table` returning a missleading error message when the resulting DataFrame has more elements than int32 can handle. Now, the error message is improved, pointing towards the actual problem (:issue:`20601`)
+ - Bug in :func:`DataFrame.unstack` and :func:`DataFrame.pivot_table` returning a misleading error message when the resulting DataFrame has more elements than int32 can handle. Now, the error message is improved, pointing towards the actual problem (:issue:`20601`)
- Bug in :func:`DataFrame.unstack` where a ``ValueError`` was raised when unstacking timezone aware values (:issue:`18338`)
- Bug in :func:`DataFrame.stack` where timezone aware values were converted to timezone naive values (:issue:`19420`)
- Bug in :func:`merge_asof` where a ``TypeError`` was raised when ``by_col`` were timezone aware values (:issue:`21184`)
2 changes: 1 addition & 1 deletion doc/source/whatsnew/v1.2.0.rst
@@ -325,7 +325,7 @@ I/O
- :meth:`to_csv` did not support zip compression for binary file object not having a filename (:issue:`35058`)
- :meth:`to_csv` and :meth:`read_csv` did not honor ``compression`` and ``encoding`` for path-like objects that are internally converted to file-like objects (:issue:`35677`, :issue:`26124`, and :issue:`32392`)
- :meth:`to_picke` and :meth:`read_pickle` did not support compression for file-objects (:issue:`26237`, :issue:`29054`, and :issue:`29570`)
- - Bug in :func:`LongTableBuilder.middle_separator` was duplicating LaTeX longtable entires in the List of Tables of a LaTeX document (:issue:`34360`)
+ - Bug in :func:`LongTableBuilder.middle_separator` was duplicating LaTeX longtable entries in the List of Tables of a LaTeX document (:issue:`34360`)
- Bug in :meth:`read_csv` with ``engine='python'`` truncating data if multiple items present in first row and first element started with BOM (:issue:`36343`)
- Removed ``private_key`` and ``verbose`` from :func:`read_gbq` as they are no longer supported in ``pandas-gbq`` (:issue:`34654`, :issue:`30200`)

2 changes: 1 addition & 1 deletion pandas/core/aggregation.py
@@ -377,7 +377,7 @@ def validate_func_kwargs(
(['one', 'two'], ['min', 'max'])
"""
no_arg_message = "Must provide 'func' or named aggregation **kwargs."
- tuple_given_message = "func is expected but recieved {} in **kwargs."
+ tuple_given_message = "func is expected but received {} in **kwargs."
columns = list(kwargs)
func = []
for col_func in kwargs.values():
4 changes: 2 additions & 2 deletions pandas/core/arrays/datetimelike.py
@@ -168,7 +168,7 @@ def _unbox_scalar(self, value: DTScalarOrNaT, setitem: bool = False) -> int:
value : Period, Timestamp, Timedelta, or NaT
Depending on subclass.
setitem : bool, default False
- Whether to check compatiblity with setitem strictness.
+ Whether to check compatibility with setitem strictness.

Returns
-------
@@ -1123,7 +1123,7 @@ def _sub_period(self, other):
raise TypeError(f"cannot subtract Period from a {type(self).__name__}")

def _add_period(self, other: Period):
- # Overriden by TimedeltaArray
+ # Overridden by TimedeltaArray
raise TypeError(f"cannot add Period to a {type(self).__name__}")

def _add_offset(self, offset):
2 changes: 1 addition & 1 deletion pandas/core/arrays/sparse/array.py
@@ -986,7 +986,7 @@ def _concat_same_type(cls, to_concat):
# get an identical index as concating the values and then
# creating a new index. We don't want to spend the time trying
# to merge blocks across arrays in `to_concat`, so the resulting
- # BlockIndex may have more blocs.
+ # BlockIndex may have more blocks.
blengths = []
blocs = []

2 changes: 1 addition & 1 deletion pandas/core/frame.py
@@ -5133,7 +5133,7 @@ def drop_duplicates(
0 Yum Yum cup 4.0
2 Indomie cup 3.5

- To remove duplicates and keep last occurences, use ``keep``.
+ To remove duplicates and keep last occurrences, use ``keep``.

>>> df.drop_duplicates(subset=['brand', 'style'], keep='last')
brand style rating
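The ``keep='last'`` behavior in the docstring above can be reproduced end to end. A hedged sketch with the same toy data (the expected output rows are truncated in this diff view):

```python
import pandas as pd

df = pd.DataFrame({
    "brand": ["Yum Yum", "Yum Yum", "Indomie"],
    "style": ["cup", "cup", "cup"],
    "rating": [4.0, 4.0, 3.5],
})

# keep='last' retains the final occurrence of each (brand, style) pair
result = df.drop_duplicates(subset=["brand", "style"], keep="last")
print(result)
```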
10 changes: 5 additions & 5 deletions pandas/core/generic.py
@@ -3406,7 +3406,7 @@ def _maybe_update_cacher(
if cacher is not None:
ref = cacher[1]()

- # we are trying to reference a dead referant, hence
+ # we are trying to reference a dead referent, hence
# a copy
if ref is None:
del self._cacher
@@ -3420,7 +3420,7 @@
ref._item_cache.pop(cacher[0], None)

if verify_is_copy:
- self._check_setitem_copy(stacklevel=5, t="referant")
+ self._check_setitem_copy(stacklevel=5, t="referent")

if clear:
self._clear_item_cache()
@@ -3781,10 +3781,10 @@ def _check_is_chained_assignment_possible(self) -> bool_t:
if self._is_view and self._is_cached:
ref = self._get_cacher()
if ref is not None and ref._is_mixed_type:
- self._check_setitem_copy(stacklevel=4, t="referant", force=True)
+ self._check_setitem_copy(stacklevel=4, t="referent", force=True)
return True
elif self._is_copy:
- self._check_setitem_copy(stacklevel=4, t="referant")
+ self._check_setitem_copy(stacklevel=4, t="referent")
return False

def _check_setitem_copy(self, stacklevel=4, t="setting", force=False):
@@ -3837,7 +3837,7 @@ def _check_setitem_copy(self, stacklevel=4, t="setting", force=False):
if isinstance(self._is_copy, str):
t = self._is_copy

- elif t == "referant":
+ elif t == "referent":
t = (
"\n"
"A value is trying to be set on a copy of a slice from a "
2 changes: 1 addition & 1 deletion pandas/core/groupby/generic.py
@@ -1430,7 +1430,7 @@ def _choose_path(self, fast_path: Callable, slow_path: Callable, group: DataFrame
except AssertionError:
raise
except Exception:
- # GH#29631 For user-defined function, we cant predict what may be
+ # GH#29631 For user-defined function, we can't predict what may be
# raised; see test_transform.test_transform_fastpath_raises
return path, res

2 changes: 1 addition & 1 deletion pandas/core/indexers.py
@@ -144,7 +144,7 @@ def check_setitem_lengths(indexer, value, values) -> bool:
no_op = False

if isinstance(indexer, (np.ndarray, list)):
- # We can ignore other listlikes becasue they are either
+ # We can ignore other listlikes because they are either
# a) not necessarily 1-D indexers, e.g. tuple
# b) boolean indexers e.g. BoolArray
if is_list_like(value):
2 changes: 1 addition & 1 deletion pandas/core/indexes/base.py
@@ -4503,7 +4503,7 @@ def sort_values(
idx = ensure_key_mapped(self, key)

# GH 35584. Sort missing values according to na_position kwarg
- # ignore na_position for MutiIndex
+ # ignore na_position for MultiIndex
if not isinstance(
self, (ABCMultiIndex, ABCDatetimeIndex, ABCTimedeltaIndex, ABCPeriodIndex)
):
2 changes: 1 addition & 1 deletion pandas/core/indexes/interval.py
@@ -1023,7 +1023,7 @@ def intersection(
def _intersection_unique(self, other: "IntervalIndex") -> "IntervalIndex":
"""
Used when the IntervalIndex does not have any common endpoint,
- no mater left or right.
+ no matter left or right.
Return the intersection with another IntervalIndex.

Parameters
2 changes: 1 addition & 1 deletion pandas/core/internals/managers.py
@@ -225,7 +225,7 @@ def set_axis(self, axis: int, new_labels: Index) -> None:

@property
def _is_single_block(self) -> bool:
- # Assumes we are 2D; overriden by SingleBlockManager
+ # Assumes we are 2D; overridden by SingleBlockManager
return len(self.blocks) == 1

def _rebuild_blknos_and_blklocs(self) -> None:
4 changes: 2 additions & 2 deletions pandas/core/strings.py
@@ -1470,7 +1470,7 @@ def str_pad(arr, width, side="left", fillchar=" "):
character. Equivalent to ``Series.str.pad(side='left')``.
Series.str.ljust : Fills the right side of strings with an arbitrary
character. Equivalent to ``Series.str.pad(side='right')``.
- Series.str.center : Fills boths sides of strings with an arbitrary
+ Series.str.center : Fills both sides of strings with an arbitrary
character. Equivalent to ``Series.str.pad(side='both')``.
Series.str.zfill : Pad strings in the Series/Index by prepending '0'
character. Equivalent to ``Series.str.pad(side='left', fillchar='0')``.
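Since these ``See Also`` entries describe the whole padding family, a quick sketch of the relationships may help (``center`` pads both sides; ``zfill`` left-pads with ``'0'``):

```python
import pandas as pd

s = pd.Series(["cat", "fox"])

# center fills both sides; equivalent to pad(side='both')
print(s.str.center(7, fillchar="-").tolist())

# zfill prepends '0'; equivalent to pad(side='left', fillchar='0')
print(s.str.zfill(5).tolist())
```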
@@ -2918,7 +2918,7 @@ def zfill(self, width):
character.
Series.str.pad : Fills the specified sides of strings with an arbitrary
character.
- Series.str.center : Fills boths sides of strings with an arbitrary
+ Series.str.center : Fills both sides of strings with an arbitrary
character.

Notes
2 changes: 1 addition & 1 deletion pandas/errors/__init__.py
@@ -225,7 +225,7 @@ class DuplicateLabelError(ValueError):

class InvalidIndexError(Exception):
"""
- Exception raised when attemping to use an invalid index key.
+ Exception raised when attempting to use an invalid index key.

.. versionadded:: 1.1.0
"""
2 changes: 1 addition & 1 deletion pandas/io/parsers.py
@@ -227,7 +227,7 @@
result 'foo'

If a column or index cannot be represented as an array of datetimes,
- say because of an unparseable value or a mixture of timezones, the column
+ say because of an unparsable value or a mixture of timezones, the column
or index will be returned unaltered as an object data type. For
non-standard datetime parsing, use ``pd.to_datetime`` after
``pd.read_csv``. To parse an index or column with a mixture of timezones,
2 changes: 1 addition & 1 deletion pandas/io/stata.py
@@ -499,7 +499,7 @@ class CategoricalConversionWarning(Warning):
dataset with an iterator results in categorical variable with different
categories. This occurs since it is not possible to know all possible values
until the entire dataset has been read. To avoid this warning, you can either
- read dataset without an interator, or manually convert categorical data by
+ read dataset without an iterator, or manually convert categorical data by
``convert_categoricals`` to False and then accessing the variable labels
through the value_labels method of the reader.
"""
2 changes: 1 addition & 1 deletion pandas/plotting/_misc.py
@@ -318,7 +318,7 @@ def bootstrap_plot(series, fig=None, size=50, samples=500, **kwds):

Examples
--------
- This example draws a basic bootstap plot for a Series.
+ This example draws a basic bootstrap plot for a Series.

.. plot::
:context: close-figs
2 changes: 1 addition & 1 deletion pandas/tests/groupby/aggregate/test_aggregate.py
@@ -564,7 +564,7 @@ def test_mangled(self):
def test_named_agg_nametuple(self, inp):
# GH34422
s = pd.Series([1, 1, 2, 2, 3, 3, 4, 5])
- msg = f"func is expected but recieved {type(inp).__name__}"
+ msg = f"func is expected but received {type(inp).__name__}"
with pytest.raises(TypeError, match=msg):
s.groupby(s.values).agg(a=inp)
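The message under test comes from named aggregation, where each keyword must map an output name to a single func. A sketch using the same toy Series as the test (assumes pandas >= 0.25, where Series named aggregation landed):

```python
import pandas as pd

s = pd.Series([1, 1, 2, 2, 3, 3, 4, 5])

# Valid: each keyword maps an output column name to one function
out = s.groupby(s.values).agg(a="min", b="max")
print(out)

# Invalid: a tuple is not a single func, so validate_func_kwargs raises
try:
    s.groupby(s.values).agg(a=("min", "max"))
except TypeError as exc:
    print(exc)
```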
