
BUG: Poor GroupBy Performance with ArrowDtype(...) wrapped types #60861


Description

@kzvezdarov

Pandas version checks

  • I have checked that this issue has not already been reported.

  • I have confirmed this bug exists on the latest version of pandas.

  • I have confirmed this bug exists on the main branch of pandas.

Reproducible Example

import pandas as pd

# Baseline: object-dtype string column
df = pd.DataFrame({"key": range(100000), "val": "test"})
%timeit df.groupby(["key"]).first()

# ArrowDtype(pyarrow.string()) column -- orders of magnitude slower
pa_df = df.convert_dtypes(dtype_backend="pyarrow")
%timeit pa_df.groupby(["key"]).first()

# StringDtype("pyarrow") column -- fast again
pa_df = pa_df.astype({"val": pd.StringDtype("pyarrow")})
%timeit pa_df.groupby(["key"]).first()

Issue Description

Grouping and then aggregating a dataframe that contains ArrowDtype(pyarrow.string()) columns is orders of magnitude slower than performing the same operations on an equivalent dataframe whose string column uses any other accepted string type (e.g. string, StringDtype("python"), StringDtype("pyarrow")). This is particularly surprising because StringDtype("pyarrow"), which is also backed by pyarrow, does not exhibit the same slowdown.

Note that in the bug reproduction example, DataFrame.convert_dtypes with dtype_backend="pyarrow" converts string columns to ArrowDtype(pyarrow.string()) rather than StringDtype("pyarrow").
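
For reference, a minimal sketch (assuming pandas 2.x with pyarrow installed) showing that the two dtypes are distinct classes even though both display as string[pyarrow]:

import pandas as pd

s = pd.Series(["test", "test"]).convert_dtypes(dtype_backend="pyarrow")
print(s.dtype)         # string[pyarrow]
print(type(s.dtype))   # <class 'pandas.core.dtypes.dtypes.ArrowDtype'>

s2 = s.astype(pd.StringDtype("pyarrow"))
print(s2.dtype)        # string[pyarrow] -- same repr, different class
print(type(s2.dtype))  # <class 'pandas.core.arrays.string_.StringDtype'>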

Finally, here's a sample run with dtypes printed for clarity; I've reproduced this on both macOS and openSUSE Tumbleweed with the listed pandas and pyarrow versions (as well as on current main):

In [7]: import pandas as pd

In [8]: df = pd.DataFrame({"key": range(100000), "val": "test"})

In [9]: df["val"].dtype
Out[9]: dtype('O')

In [10]: %timeit df.groupby(["key"]).first();
8.37 ms ± 599 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)

In [11]: pa_df = df.convert_dtypes(dtype_backend="pyarrow")

In [13]: type(pa_df["val"].dtype)
Out[13]: pandas.core.dtypes.dtypes.ArrowDtype

In [14]: %timeit pa_df.groupby(["key"]).first();
2.39 s ± 142 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

In [15]: pa_df = pa_df.astype({"val": pd.StringDtype("pyarrow")})
    ...:

In [16]: type(pa_df["val"].dtype)
Out[16]: pandas.core.arrays.string_.StringDtype

In [17]: %timeit pa_df.groupby(["key"]).first();
12.9 ms ± 306 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)

Expected Behavior

Aggregation performance on ArrowDtype(pyarrow.string()) columns should be comparable to aggregation performance on StringDtype("pyarrow") and string-typed columns.
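
For completeness, here is a standalone variant of the benchmark (a sketch using the stdlib timeit module instead of IPython's %timeit; timings will of course vary by machine):

import timeit

import pandas as pd

df = pd.DataFrame({"key": range(100000), "val": "test"})
arrow_df = df.convert_dtypes(dtype_backend="pyarrow")              # ArrowDtype(pyarrow.string())
string_df = arrow_df.astype({"val": pd.StringDtype("pyarrow")})    # StringDtype("pyarrow")

for name, frame in [("object", df), ("ArrowDtype", arrow_df), ("StringDtype", string_df)]:
    elapsed = timeit.timeit(lambda: frame.groupby(["key"]).first(), number=5)
    print(f"{name:>11}: {elapsed / 5:.3f} s per groupby().first()")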

Installed Versions

INSTALLED VERSIONS

commit : 0691c5c
python : 3.13.1
python-bits : 64
OS : Darwin
OS-release : 24.3.0
Version : Darwin Kernel Version 24.3.0: Thu Jan 2 20:24:16 PST 2025; root:xnu-11215.81.4~3/RELEASE_ARM64_T6000
machine : arm64
processor : arm
byteorder : little
LC_ALL : en_CA.UTF-8
LANG : None
LOCALE : en_CA.UTF-8

pandas : 2.2.3
numpy : 2.2.2
pytz : 2025.1
dateutil : 2.9.0.post0
pip : 24.3.1
Cython : None
sphinx : None
IPython : 8.32.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : None
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : 2025.2.0
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.5
lxml.etree : None
matplotlib : None
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : 19.0.0
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : None
sqlalchemy : None
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2025.1
qtpy : None
pyqt5 : None


    Labels

    Arrow (pyarrow functionality), Bug, Dtype Conversions (Unexpected or buggy dtype conversions), Needs Discussion (Requires discussion from core team before further action)
