
[Performance]: MultiModalKwargs serialization has significant impact on E2E latency (w/ proof-of-concept patch) #16461

Open
@xtknight

Description


Proposal to improve performance

Through testing, it appears that the serialization of MultiModalKwargs has a significant impact on multimodal inference performance. The cause is the torch.Tensor objects nested inside the dictionary hierarchy of MultiModalKwargs: although custom_enc_hook has special treatment for bare torch.Tensor objects, that treatment does not extend to objects that merely contain nested tensors, hence the slowdown.
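
To illustrate the failure mode, here is a minimal, self-contained sketch (not vLLM's actual serial_utils code; `FakeMMKwargs` is a hypothetical stand-in) of how an enc_hook that special-cases bare tensors can still send a tensor-bearing container down the slow pickle path:

```python
import pickle

import msgspec
import torch

class FakeMMKwargs:
    """Hypothetical stand-in for a MultiModalKwargs-like container."""
    def __init__(self, **data):
        self.data = data

def enc_hook(obj):
    if isinstance(obj, torch.Tensor):
        # Fast path: reached only when the unsupported object handed to
        # the hook is the tensor itself.
        return obj.numpy().tobytes()
    # Fallback: any other unsupported type -- e.g. a container whose
    # fields hold tensors -- gets pickled as one unit, tensors and all.
    return pickle.dumps(obj)

encoder = msgspec.msgpack.Encoder(enc_hook=enc_hook)
# The whole container takes the slow pickle path, never the tensor fast path.
encoder.encode(FakeMMKwargs(pixel_values=torch.randn(3, 224, 224)))
```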

I have not yet been able to compare performance against the other patch (#16432), which addresses MultiModalKwargs memory usage, due to compilation issues, but I will provide figures later if possible. Since this report focuses primarily on E2E latency and/or throughput, I have decided to post it as a separate issue.

I have attached a diff (as well as the entire patched serial_utils.py) that converts the tensors in MultiModalKwargs (including those inside 'field' entries) to numpy arrays or plain Python integers; doing so yields a roughly 10x+ improvement in pickle speed. The patch is probably quite incomplete, mostly due to my limited knowledge of vLLM internals, so please treat it merely as a proof of concept for debugging the issue. A simplified sketch of the conversion it performs appears below.
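
As a rough illustration of the idea (the helper name here is hypothetical; see the attached files for the actual patch), the conversion amounts to a recursive walk like this:

```python
import torch

def tensors_to_plain(obj):
    """Recursively replace torch tensors with numpy arrays (or Python
    scalars for 0-dim tensors) so that downstream pickling takes
    numpy's faster path. Illustrative only; the attached patch applies
    this idea to the structures inside MultiModalKwargs, 'field'
    entries included."""
    if isinstance(obj, torch.Tensor):
        t = obj.detach().cpu()
        return t.item() if t.ndim == 0 else t.numpy()
    if isinstance(obj, dict):
        return {k: tensors_to_plain(v) for k, v in obj.items()}
    if isinstance(obj, (list, tuple)):
        return type(obj)(tensors_to_plain(v) for v in obj)
    return obj
```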

As a frame of reference, using gemma-3 12b on 2×A100, I'm seeing the following improvements in E2E inference latency for a single request with a basic prompt that classifies flower images from the 102 Category Flower Dataset:

| Image Count | vLLM v0.8.3 | vLLM v0.8.3 + patch |
| ----------: | ----------: | ------------------: |
| 1           | 1.211s      | 1.339s              |
| 10          | 9.676s      | 4.058s              |
| 20          | 18.652s     | 5.655s              |
| 30          | 27.539s     | 6.931s              |
| 40          | 35.828s     | 8.038s              |
| 50          | 45.106s     | 9.115s              |
| 100         | 4m29.967s   | 24.337s             |

I have only been able to do basic verification that serialization works correctly on a single image, but this speed-up should be possible in principle, since it is well known that numpy arrays pickle faster than torch tensors (a quick microbenchmark sketch follows below). It would be great to have another set of eyes verify that things work properly and help turn this into a more "official" patch. I will also post a memory-usage comparison later if I have time.
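
For anyone who wants to sanity-check that underlying claim, a microbenchmark along these lines (exact numbers will vary by machine and tensor size) shows the pickle-speed gap:

```python
import pickle
import timeit

import torch

t = torch.randn(3, 896, 896)  # roughly one preprocessed image
a = t.numpy()

for name, obj in [("torch.Tensor", t), ("np.ndarray", a)]:
    secs = timeit.timeit(
        lambda: pickle.dumps(obj, protocol=pickle.HIGHEST_PROTOCOL),
        number=100,
    )
    print(f"{name}: {secs:.3f}s for 100 dumps")
```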

The base for the patch is release v0.8.3.

(vllm/v1/serial_utils.py)

serial_utils_v0.8.3_mmpatch.txt

serial_utils_patch_v0.8.3.py.txt

Report of performance regression

No response

Misc discussion on performance

No response

Your current environment (if you think it is necessary)

The output of `python collect_env.py`

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
