
[Performance]: Why/how does vLLM use CPU memory? #16947

Open
@khayamgondal

Description

Proposal to improve performance

I am running Llama 70B FP8. The entire model and the inference workload fit on the GPU, yet I still see around 100 GB of CPU RAM usage. Why does vLLM use CPU memory even when inference runs entirely on the GPU?
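For context, vLLM allocates some host memory by design even when inference runs entirely on the GPU: it reserves pinned CPU RAM for KV-cache swapping (the `swap_space` engine argument, 4 GiB per GPU by default), and model weights pass through host buffers while being loaded onto each tensor-parallel worker. Below is a minimal sketch of the relevant knob, assuming the offline `LLM` API; the model id, parallelism degree, and values are illustrative, not a diagnosis of this specific report:

```python
from vllm import LLM

# Minimal sketch: the main user-facing knob for vLLM's pinned host memory.
# The model id and tensor_parallel_size are illustrative assumptions.
llm = LLM(
    model="meta-llama/Meta-Llama-3-70B-Instruct",  # hypothetical 70B checkpoint
    quantization="fp8",
    tensor_parallel_size=4,
    swap_space=1,  # GiB of pinned CPU RAM reserved per GPU for KV-cache swapping (default: 4)
)

# Quick smoke test; generate() returns a list of RequestOutput objects.
outputs = llm.generate("Why does vLLM pin CPU memory?")
print(outputs[0].outputs[0].text)
```

With four GPUs this would shrink the pinned swap reservation from 16 GiB to 4 GiB. Host usage beyond the swap space typically comes from weight-loading buffers, per-worker CUDA contexts, and general runtime overhead, so measuring the process RSS before and after engine startup helps attribute the remainder.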

Report of performance regression

No response

Misc discussion on performance

No response

Your current environment (if you think it is necessary)

The output of `python collect_env.py`

Before submitting a new issue...

  • Make sure you have already searched for relevant issues and asked the chatbot at the bottom right corner of the documentation page, which can answer many frequently asked questions.

Labels: performance (Performance-related issues)
