Description
Code Sample 1

```python
import numpy as np
from pandas._libs import lib

print(lib.maybe_convert_objects(np.array([np.nan, True], dtype='object'), safe=1))
```
Code Sample 2

```python
import numpy as np
from pandas._libs import lib

print(lib.maybe_convert_objects(np.array([np.nan, np.datetime64()], dtype='object'), safe=1, convert_datetime=1))
```
Code Sample 3

```python
import math
import pandas as pd

z1 = pd.Series([True, False, pd.NA]).value_counts(dropna=False)
print(z1)
z2 = pd.Series({True: 1, False: 1, math.nan: 1})
# Printing z1 a second time can show different (garbage) values:
print(z1)
```
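As an aside (not part of the original report), one way to sidestep the object/NaN conversion path entirely is the nullable `"boolean"` extension dtype, which represents `pd.NA` natively instead of routing it through `maybe_convert_objects`; a minimal sketch:

```python
import pandas as pd

# Workaround sketch: with the nullable "boolean" dtype, pd.NA is a
# first-class missing value, so value_counts(dropna=False) is
# deterministic and repeated prints show the same result.
s = pd.Series([True, False, pd.NA], dtype="boolean")
counts = s.value_counts(dropna=False)
print(counts)
```

This does not fix the underlying `maybe_convert_objects` bug, but it avoids triggering it for boolean data with missing values.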
Problem description

This problem was originally detected by @benfuja, who noticed inconsistent output from `value_counts` with `dropna=False` on otherwise boolean arrays containing a `pd.NA` indicator. Code Sample 3 is Ben's original example showing the odd behavior (note that `z1` is printed twice and, depending on your malloc library, will likely change values between the two calls). When we tried to isolate what was happening, we went down a couple of rabbit holes that led to the `maybe_convert_objects` function.
What I suspect is happening: when `maybe_convert_objects` is called by the Series index formatter, the caller expects that if a mixture of booleans and `np.nan` is present, the `np.nan` values will be preserved. Instead, `maybe_convert_objects` appears to return garbage whenever booleans or datetimes/timedeltas are present alongside `np.nan`: it returns a corresponding array with uninitialized memory in the positions where the `np.nan` values were. Code Samples 1 and 2 demonstrate this.
As noted in #27417, there are probably a few opportunities to improve this function and its dependencies. I'm not particularly familiar with this code base, but it seems to me that the most correct minimal fix may be in the `Seen` class:
Proposed Minimal Fix

```diff
     @property
     def numeric_(self):
-        return self.complex_ or self.float_ or self.int_
+        return self.complex_ or self.float_ or self.int_ or self.null_ or self.nan_
```
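To illustrate the intent of the change, here is a simplified pure-Python model of the Cython `Seen` class (attribute names mirror `pandas._libs.lib`, but this is a sketch, not the real implementation):

```python
class Seen:
    """Toy model of pandas._libs.lib.Seen (simplified sketch)."""

    def __init__(self):
        self.int_ = False
        self.float_ = False
        self.complex_ = False
        self.nan_ = False    # set when np.nan is encountered
        self.null_ = False   # set when None is encountered

    @property
    def numeric_(self):
        # Proposed fix: nan/null values were stored in the floats/complexes
        # buffers, so a nan_/null_ sighting must also count as numeric.
        return self.complex_ or self.float_ or self.int_ or self.null_ or self.nan_


seen = Seen()
seen.nan_ = True   # e.g. the array was [np.nan, True]
print(seen.numeric_)
```

With the fix, `numeric_` is truthy here, so downstream checks no longer conclude that the array is purely boolean.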
Rationale

The `maybe_convert_objects` function relies on the `Seen` class to tell it whether it is safe to treat an array as containing only values of a specific type. When `maybe_convert_objects` encounters a `NaN` or a `None`, it stores the encountered value in the `floats` and `complexes` buffers and sets the `seen.nan_` / `seen.null_` flags. Later checks for whether an array is of a specific type will miss that these values were encountered unless those flags are consulted. The `numeric_` property seemed like the most logical place in `Seen` to add the checks, since both `None` and `NaN` values are already stored in the `floats`/`complexes` buffers. Another option would be to fix the logic in `maybe_convert_objects` itself, checking the `seen.nan_` / `seen.null_` flags more carefully in the case statements.
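The alternative fix can be sketched in pure Python. This is a heavily simplified, hypothetical model of the final dispatch in `maybe_convert_objects` (the real function is Cython and handles many more cases); it shows where the extra flag checks would go:

```python
import numpy as np


class SeenFlags:
    """Hypothetical stand-in for the relevant Seen flags."""

    def __init__(self, bool_=False, nan_=False, null_=False):
        self.bool_ = bool_
        self.nan_ = nan_
        self.null_ = null_


def dispatch(objects, seen):
    # Sketch of the alternative fix: only take the bool fast path when no
    # NaN/None values were swallowed into the floats buffer along the way.
    if seen.bool_ and not (seen.nan_ or seen.null_):
        return np.array(objects, dtype=bool)
    # Otherwise fall back to object dtype so np.nan stays visible instead of
    # an array with uninitialized memory where the np.nan values were.
    return np.array(objects, dtype=object)


mixed = [np.nan, True]
out = dispatch(mixed, SeenFlags(bool_=True, nan_=True))
print(out.dtype)  # object dtype, np.nan preserved
```

Either placement (the `numeric_` property or the case statements) encodes the same invariant: a bool-only conversion is only valid when no missing values were seen.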
Output of pd.show_versions()
pandas : 1.0.1
numpy : 1.17.3
pytz : 2019.3
dateutil : 2.8.1
pip : 19.3.1
setuptools : 45.0.0.post20200113
Cython : None
pytest : None
hypothesis : None
sphinx : None
blosc : None
feather : None
xlsxwriter : None
lxml.etree : None
html5lib : None
pymysql : None
psycopg2 : None
jinja2 : 2.10.3
IPython : 7.11.1
pandas_datareader: None
bs4 : None
bottleneck : None
fastparquet : None
gcsfs : None
lxml.etree : None
matplotlib : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
pyarrow : None
pytables : None
pytest : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlwt : None
xlsxwriter : None
numba : None