On master:
>>> from pandas.core.base import FrozenNDArray
>>> a = FrozenNDArray(1)
>>> a
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.4/site-packages/pandas-0.15.1_192_g7bd1b24-py3.4-linux-x86_64.egg/pandas/core/base.py", line 65, in __repr__
return str(self)
File "/usr/lib/python3.4/site-packages/pandas-0.15.1_192_g7bd1b24-py3.4-linux-x86_64.egg/pandas/core/base.py", line 44, in __str__
return self.__unicode__()
File "/usr/lib/python3.4/site-packages/pandas-0.15.1_192_g7bd1b24-py3.4-linux-x86_64.egg/pandas/core/base.py", line 268, in __unicode__
...
...
RuntimeError: maximum recursion depth exceeded while calling a Python object
The repr keeps going back and forth between __str__ and __unicode__. A workaround would be to change self on this line to self if self.ndim else self.item().
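To make that concrete, here is a rough sketch of the patched __unicode__ (approximate, reconstructed from the traceback rather than copied from base.py; com.pprint_thing below stands in for the pretty-printing helper used there, assuming the usual common-module alias, and it is what otherwise falls back to str() on the zero-dimensional array and restarts the cycle):

def __unicode__(self):
    # For a zero-dimensional array, hand the pretty-printer a plain Python
    # scalar; otherwise it ends up calling str(self), which goes through
    # __str__ / __unicode__ again and recurses.
    prepr = com.pprint_thing(self if self.ndim else self.item(),
                             escape_chars=('\t', '\r', '\n'),
                             quote_strings=True)
    return "%s(%s, dtype='%s')" % (type(self).__name__, prepr, self.dtype)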
I ran into this bug while taking .max on the array:
>>> xs
FrozenNDArray([1, 2, 3], dtype='int64')
>>> xs.max()
RuntimeError: maximum recursion depth exceeded while calling a Python object
It is a bit unusual to me that .max here returns a zero-dimensional array, whereas NumPy returns a scalar:
>>> xs
FrozenNDArray([1, 2, 3], dtype='int64')
>>> type(xs.max())
<class 'pandas.core.base.FrozenNDArray'>
>>> type(np.array([1, 2, 3]).max())
<class 'numpy.int64'>
In particular, np.isscalar returns different results for the two:
>>> np.isscalar(np.array([1, 2, 3]).max())
True
>>> np.isscalar(FrozenNDArray([1, 2, 3]).max())
False
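As far as I can tell this is standard ndarray-subclass behaviour rather than anything FrozenNDArray-specific: reductions get wrapped back into the subclass via __array_wrap__, so the result stays a zero-dimensional subclass instance instead of becoming a numpy scalar. A minimal illustration with a trivial subclass (not pandas code, just the general pattern, at least with NumPy versions from around this time):

>>> import numpy as np
>>> class Sub(np.ndarray):
...     pass
...
>>> xs = np.array([1, 2, 3]).view(Sub)
>>> type(xs.max()), xs.max().ndim
(<class '__main__.Sub'>, 0)
>>> type(np.asarray(xs).max())
<class 'numpy.int64'>

Calling .item() on the zero-dimensional result gives back a true scalar, which is essentially what the suggested workaround does inside __unicode__.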