DOC: Cleaned references to pandas <v0.12 in docs #17375
@@ -73,7 +73,7 @@ index is passed, one will be created having values ``[0, ..., len(data) - 1]``.

 .. note::

-   Starting in v0.8.0, pandas supports non-unique index values. If an operation
+   pandas supports non-unique index values. If an operation
    that does not support duplicate index values is attempted, an exception
    will be raised at that time. The reason for being lazy is nearly all performance-based
    (there are many instances in computations, like parts of GroupBy, where the index
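The lazy check described in this note can be sketched as follows — a minimal example with made-up data, showing that a duplicate index is accepted at construction time but an operation that needs unique labels raises only when attempted:

```python
import pandas as pd

# A non-unique index is allowed at construction time...
s = pd.Series([1, 2, 3], index=["a", "a", "b"])
print(s["a"])  # both rows labelled "a" are returned

# ...but an operation that requires unique labels (here, reindex)
# raises only when it is attempted -- the "lazy" check in the note.
try:
    s.reindex(["a", "b"])
except ValueError as exc:
    print("raised:", exc)
```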
@@ -698,7 +698,7 @@ DataFrame in tabular form, though it won't always fit the console width:

     print(baseball.iloc[-20:, :12].to_string())

-New since 0.10.0, wide DataFrames will now be printed across multiple rows by
+Note that wide DataFrames will be printed across multiple rows by
 default:

Reviewer: remove

 .. ipython:: python
@@ -856,8 +856,7 @@ DataFrame objects with mixed-type columns, all of the data will get upcasted to

 From DataFrame using ``to_panel`` method
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

-This method was introduced in v0.7 to replace ``LongPanel.to_long``, and converts
-a DataFrame with a two-level index to a Panel.
+``to_panel`` converts a DataFrame with a two-level index to a Panel.

Reviewer: can you add a reference to the section where panel is deprecated.

Author: There is a deprecation warning a bit above, so it's too much adding it here also IMO. I changed a note that calls on people to contribute to panels, though, as that isn't relevant anymore.

 .. ipython:: python
     :okwarning:
@@ -140,7 +140,7 @@ columns:

     In [5]: grouped = df.groupby(get_letter_type, axis=1)

-Starting with 0.8, pandas Index objects now support duplicate values. If a
+Note that pandas Index objects support duplicate values. If a

Reviewer: remove the Note that

 non-unique index is used as the group key in a groupby operation, all values
 for the same index value will be considered to be in one group and thus the
 output of aggregation functions will only contain unique index values:
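The behaviour this passage describes can be illustrated with a small sketch (the data here is made up): grouping on a duplicated index collapses all rows sharing a label into one group, so the aggregated result carries unique labels only.

```python
import pandas as pd

# Rows sharing an index label fall into one group, so the
# aggregated output has unique index values only.
s = pd.Series([1, 2, 3, 4], index=["a", "a", "b", "b"])
print(s.groupby(level=0).sum())
# a -> 3, b -> 7
```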
@@ -288,8 +288,6 @@ chosen level:

     s.sum(level='second')

-.. versionadded:: 0.6
-
 Grouping with multiple levels is supported.

 .. ipython:: python
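For context, the ``s.sum(level='second')`` form in this hunk aggregates over one level of a MultiIndex. In current pandas that spelling has been superseded by an explicit ``groupby``; a sketch with made-up data:

```python
import pandas as pd

# Hypothetical two-level Series for illustration.
idx = pd.MultiIndex.from_tuples(
    [("x", 1), ("x", 2), ("y", 1), ("y", 2)], names=["first", "second"]
)
s = pd.Series([10, 20, 30, 40], index=idx)

# Modern equivalent of s.sum(level='second'):
print(s.groupby(level="second").sum())
# second: 1 -> 40, 2 -> 60
```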
@@ -364,7 +364,7 @@ warn_bad_lines : boolean, default ``True``

 Specifying column data types
 ''''''''''''''''''''''''''''

-Starting with v0.10, you can indicate the data type for the whole DataFrame or
+You can indicate the data type for the whole DataFrame or
 individual columns:

 .. ipython:: python
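A minimal sketch of per-column dtype specification (the CSV contents and column names here are invented for illustration):

```python
import io
import pandas as pd

# Hypothetical inline CSV standing in for a file on disk.
data = io.StringIO("a,b,c\n1,2,x\n3,4,y\n")

# A single dtype (e.g. dtype=str) would apply to the whole frame;
# a dict sets the type per column instead.
df = pd.read_csv(data, dtype={"a": "int64", "b": "float64", "c": "object"})
print(df.dtypes)
```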
@@ -3346,7 +3346,7 @@ Read/Write API
 ''''''''''''''

 ``HDFStore`` supports a top-level API using ``read_hdf`` for reading and ``to_hdf`` for writing,
-similar to how ``read_csv`` and ``to_csv`` work. (new in 0.11.0)
+similar to how ``read_csv`` and ``to_csv`` work.

 .. ipython:: python
@@ -3791,7 +3791,7 @@ indexed dimension as the ``where``.

 .. note::

-   Indexes are automagically created (starting ``0.10.1``) on the indexables
+   Indexes are automagically created on the indexables
    and any data columns you specify. This behavior can be turned off by passing
    ``index=False`` to ``append``.
@@ -3878,7 +3878,7 @@ create a new table!)

 Iterator
 ++++++++

-Starting in ``0.11.0``, you can pass, ``iterator=True`` or ``chunksize=number_in_a_chunk``
+Note that you can pass ``iterator=True`` or ``chunksize=number_in_a_chunk``

Reviewer: remove Note that

 to ``select`` and ``select_as_multiple`` to return an iterator on the results.
 The default is 50,000 rows returned in a chunk.
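The chunked-iteration pattern described here can be sketched with the analogous ``chunksize`` parameter of ``read_csv`` (the ``HDFStore.select`` call itself needs the optional PyTables dependency, so CSV stands in; the data is made up):

```python
import io
import pandas as pd

# Inline data standing in for a large on-disk dataset.
data = io.StringIO("x\n" + "\n".join(str(i) for i in range(10)))

# chunksize=4 yields DataFrames of up to 4 rows each, so the whole
# dataset never has to be held in memory at once.
total = 0
for chunk in pd.read_csv(data, chunksize=4):
    total += chunk["x"].sum()
print(total)  # 0 + 1 + ... + 9 = 45
```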
@@ -3986,8 +3986,8 @@ of rows in an object.

 Multiple Table Queries
 ++++++++++++++++++++++

-New in 0.10.1 are the methods ``append_to_multiple`` and
-``select_as_multiple``, that can perform appending/selecting from
+The methods ``append_to_multiple`` and
+``select_as_multiple`` can perform appending/selecting from
 multiple tables at once. The idea is to have one table (call it the
 selector table) that you index most/all of the columns, and perform your
 queries. The other table(s) are data tables with an index matching the
@@ -4291,7 +4291,7 @@ Pass ``min_itemsize`` on the first table creation to a-priori specify the minimum

 ``min_itemsize`` can be an integer, or a dict mapping a column name to an integer. You can pass ``values`` as a key to
 allow all *indexables* or *data_columns* to have this min_itemsize.

-Starting in 0.11.0, passing a ``min_itemsize`` dict will cause all passed columns to be created as *data_columns* automatically.
+Passing a ``min_itemsize`` dict will cause all passed columns to be created as *data_columns* automatically.

 .. note::
@@ -67,9 +67,8 @@ arise and we wish to also consider that "missing" or "not available" or "NA".

 .. note::

-   Prior to version v0.10.0 ``inf`` and ``-inf`` were also
-   considered to be "NA" in computations. This is no longer the case by
-   default; use the ``mode.use_inf_as_na`` option to recover it.
+   If you want to consider ``inf`` and ``-inf``
+   to be "NA" in computations, you can use the ``mode.use_inf_as_na`` option to archieve it.

Reviewer: achieve

 .. _missing.isna:
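The default behaviour this note refers to can be sketched briefly: out of the box, infinities are not counted as missing values (only ``NaN``/``None`` are).

```python
import numpy as np
import pandas as pd

# By default, inf is NOT treated as missing -- only NaN is.
s = pd.Series([1.0, np.inf, np.nan])
print(s.isna())        # True only for the NaN entry
print(s.isna().sum())  # 1
```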
@@ -485,8 +484,8 @@ respectively:

 Replacing Generic Values
 ~~~~~~~~~~~~~~~~~~~~~~~~
-Often times we want to replace arbitrary values with other values. New in v0.8
-is the ``replace`` method in Series/DataFrame that provides an efficient yet
+Often times we want to replace arbitrary values with other values. The
+``replace`` method in Series/DataFrame provides an efficient yet
 flexible way to perform such replacements.

 For a Series, you can replace a single value or a list of values by another
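A minimal sketch of the two ``replace`` forms mentioned above, with invented data:

```python
import pandas as pd

s = pd.Series([0, 1, 2, 3, 4])

# Replace a single value with another value...
print(s.replace(0, 5))

# ...or a list of values with one replacement value.
print(s.replace([1, 2], 10))
```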
Reviewer: remove the 'Note that'

Reviewer: use double-backticks on all of the eq etc. (better readability)