
[Performance]: vLLM v0.6.3 to v0.7.1, improved tps by around 1.6x #18339

Open
@wonjerry

Description

Proposal to improve performance

Hi, I recently upgraded from vLLM v0.6.3 to v0.7.1, and I noticed that my RPS (requests per second) improved by around 1.6x — even though I’m still using the v0 engine, not v1.

I saw that the v0.7.1 release notes mention “~3x the generation throughput, ~10x the memory capacity for tokens, and horizontal context scalability with pipeline parallelism.” Could you help me understand which specific optimizations or changes in v0.7.1 might have contributed to this performance gain, even without switching to the v1 engine?
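
For context, here is roughly how I am measuring throughput. This is a minimal sketch rather than my exact benchmark: the model name, prompt set, and request count are placeholders, and I am assuming that setting the `VLLM_USE_V1=0` environment variable is the right way to keep the legacy v0 engine selected on v0.7.x.

```python
import os
import time

# Keep the legacy v0 engine selected (v1 is opt-in in the v0.7.x releases).
os.environ["VLLM_USE_V1"] = "0"

from vllm import LLM, SamplingParams

# Placeholder model and prompts; the real numbers come from my own workload.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
prompts = ["Summarize the following text: ..."] * 256
sampling_params = SamplingParams(temperature=0.0, max_tokens=128)

start = time.perf_counter()
outputs = llm.generate(prompts, sampling_params)
elapsed = time.perf_counter() - start

# Report both requests/s and generated tokens/s over the whole batch.
generated_tokens = sum(len(o.outputs[0].token_ids) for o in outputs)
print(f"requests/s: {len(prompts) / elapsed:.2f}")
print(f"tokens/s:   {generated_tokens / elapsed:.2f}")
```

The idea is to run the identical script under both versions with the same model, prompts, and hardware, so the ~1.6x comparison is apples to apples.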

Thanks in advance!

Report of performance regression

No response

Misc discussion on performance

No response

Your current environment (if you think it is necessary)

The output of `python collect_env.py`

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
