
Add documentation section on Scaling #28315

Closed
@TomAugspurger

Description


From the user survey, the most critical feature request was "Improve scaling to larger datasets".

While we continue to work within pandas itself to improve scaling (fewer copies, native string dtype, etc.), we can also document a few strategies available today that may help users scale.

  1. Use efficient dtypes and make sure you don't have object dtypes. Consider Categorical for low-cardinality string columns, and lower-precision numeric dtypes where the value range allows (first sketch after this list).
  2. Avoid unnecessary work. When loading data, select only the columns you need (usecols= for pd.read_csv, columns= for pd.read_parquet). There are probably other examples worth including (second sketch below).
  3. Use out-of-core methods, like pd.read_csv(..., chunksize=), to process datasets that don't fit in memory without rewriting the whole workflow (third sketch below).
  4. Use other libraries. I would of course recommend Dask, but I'm not opposed to a section highlighting Vaex, and possibly Spark (though installing it in our doc environment may be difficult). A small Dask sketch follows the list.
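
For the dtype point, a minimal sketch of what the docs could show (the frame, column names, and sizes are made up for illustration):

```python
import numpy as np
import pandas as pd

# A low-cardinality string column stored as object, plus a small-range int64 column.
df = pd.DataFrame({
    "state": np.random.choice(["CA", "NY", "TX"], size=1_000_000),
    "count": np.random.randint(0, 100, size=1_000_000),
})
print(df.memory_usage(deep=True))

# Categorical stores each distinct string once plus compact integer codes.
df["state"] = df["state"].astype("category")
# Values fit in uint8, so downcast from the default int64.
df["count"] = pd.to_numeric(df["count"], downcast="unsigned")
print(df.memory_usage(deep=True))
```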
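For column selection at load time, something like the following (file and column names are placeholders):

```python
import pandas as pd

# Only parse the columns you actually need; the rest are never materialized.
df = pd.read_csv("data.csv", usecols=["id", "timestamp", "value"])

# The parquet reader takes columns= and can skip whole column chunks on disk.
df = pd.read_parquet("data.parquet", columns=["id", "value"])
```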
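For the out-of-core point, a sketch of chunked reading with an incremental aggregation (file and column names are again placeholders):

```python
import pandas as pd

# Stream the file in 100,000-row chunks and aggregate as we go,
# so peak memory stays bounded by the chunk size.
totals = pd.Series(dtype="float64")
for chunk in pd.read_csv("data.csv", chunksize=100_000):
    # Combine per-chunk group sums into the running total, aligning on the key.
    totals = totals.add(chunk.groupby("key")["value"].sum(), fill_value=0)
print(totals.sort_values(ascending=False))
```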
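And for the other-libraries point, a minimal Dask example (the file pattern and columns are hypothetical):

```python
import dask.dataframe as dd

# Dask mirrors much of the pandas API but partitions the data and
# evaluates lazily; .compute() executes the task graph in parallel.
ddf = dd.read_csv("data-*.csv")
result = ddf.groupby("key")["value"].mean().compute()
print(result)
```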

Do people have thoughts on this? Any objections to highlighting outside projects like Dask?
Are there other strategies we should mention?
