Issues: vllm-project/vllm

[RFC]: Deprecating vLLM V0
#18571 opened May 22, 2025 by WoosukKwon (Open, 30 comments)

[Roadmap] vLLM Roadmap Q2 2025
#15735 opened Mar 29, 2025 by simon-mo (Open, 15 comments)

Issues list

[CPU] Fix torch version in x86 CPU backend and refine default configurations
Labels: ci/build, multi-modality, v1
#19258 opened Jun 6, 2025 by bigPYJ1151

[Usage]: Getting empty output for whisper-v3
Labels: multi-modality, usage
#19183 opened Jun 5, 2025 by Mrjaggu

[Core] Batch multi-modal input using pinned memory
Labels: multi-modality, ready, v1
#19169 opened Jun 4, 2025 by lgeiger

[CI] change spell checker from codespell to typos
Labels: documentation, frontend, multi-modality, ready, speculative-decoding, tool-calling, tpu, v1
#18711 opened May 26, 2025 by andyxning

[V1] Support LLM.apply_model
Labels: frontend, multi-modality, v1
#18465 opened May 21, 2025 by DarkLight1337

[Core] Parallel multi-modal processor
Labels: multi-modality, needs-rebase, v1
#17831 opened May 8, 2025 by DarkLight1337

[Experiment] Parallel multi-modal processor
Labels: documentation, multi-modality, needs-rebase, v1
#17361 opened Apr 29, 2025 by DarkLight1337 (Draft)

[VLM] Support HF format Phi-4-MM model
Labels: documentation, frontend, multi-modality, needs-rebase
#17121 opened Apr 24, 2025 by Isotr0py

[Model][Frontend] Adding timeseries modality support and Qwen2.5-ChatTS model support
Labels: ci/build, frontend, multi-modality, ready
#16852 opened Apr 18, 2025 by chemeris

[Misc] Raise ValueError for V1 during profiling when max_num_batched_tokens is too short
Labels: multi-modality, ready, v1
#16834 opened Apr 18, 2025 by Isotr0py

[Metrics] Log multi-modal cache stats
Labels: ci/build, documentation, frontend, multi-modality, needs-rebase, ready, v1
#16478 opened Apr 11, 2025 by DarkLight1337

[V1][Perf] Avoid mem duplication when aggregating MM tensors
Labels: multi-modality, performance, v1
#16440 opened Apr 11, 2025 by njhill

[Feature]: Run performance benchmarks for multi-modal models in CI
Labels: feature request, help wanted, multi-modality
#16353 opened Apr 9, 2025 by DarkLight1337

[Model][VLM] Add Qwen2.5-Omni model support (end-to-end full support)
Labels: ci/build, documentation, frontend, multi-modality, needs-rebase
#16347 opened Apr 9, 2025 by fyabc (Draft)

[Usage]: How can I quickly obtain the number of prompt tokens containing multimodal data?
Labels: help wanted, multi-modality, usage
#16191 opened Apr 7, 2025 by yansh97

[Draft] Translation API
Labels: frontend, multi-modality
#15910 opened Apr 1, 2025 by ywang96 (Draft)

[RFC]: Schema for checking input shapes for multi-modal models
Labels: feature request, good first issue, multi-modality, RFC
#14764 opened Mar 13, 2025 by DarkLight1337

[Perf] Optimize Qwen2/2.5-VL ViT tensor generation performance
Labels: ci/build, multi-modality, v1
#14684 opened Mar 12, 2025 by imkero

[RFC]: Configurable multi-modal data for profiling
Labels: multi-modality, RFC
#14438 opened Mar 7, 2025 by DarkLight1337

[Core] Add DoRA Support
Labels: ci/build, documentation, frontend, multi-modality, needs-rebase, speculative-decoding, v1
#14389 opened Mar 7, 2025 by ChloeL19

[Hardware][CPU] vLLM int8 quantization enablement for ARM CPU
Labels: ci/build, documentation, frontend, multi-modality, speculative-decoding, v1
#14129 opened Mar 3, 2025 by nishith-fujitsu