Issues: vllm-project/vllm
[CPU] Fix torch version in x86 CPU backend and refine default configurations
Labels: ci/build, multi-modality (related to #4194), v1
#19258 opened Jun 6, 2025 by bigPYJ1151 (2 of 3 tasks)
[Usage]: Getting empty output for whisperv3
Labels: multi-modality, usage
#19183 opened Jun 5, 2025 by Mrjaggu (1 task done)
[Core] Batch multi-modal input using pinned memory
Labels: multi-modality, ready, v1
#19169 opened Jun 4, 2025 by lgeiger
[CI] change spell checker from codespell to typos
Labels: documentation, frontend, multi-modality, ready, speculative-decoding, tool-calling, tpu, v1
#18711 opened May 26, 2025 by andyxning
[V1] Support LLM.apply_model
Labels: frontend, multi-modality, v1
#18465 opened May 21, 2025 by DarkLight1337
[Core] Parallel multi-modal processor
Labels: multi-modality, needs-rebase, v1
#17831 opened May 8, 2025 by DarkLight1337
[Misc] Refactor VLM common generation tests to support audio inputs and mix-modality tests
Labels: multi-modality
#17633 opened May 4, 2025 by Isotr0py
[Experiment] Parallel multi-modal processor
Labels: documentation, multi-modality, needs-rebase, v1
#17361 opened Apr 29, 2025 by DarkLight1337 (Draft)
[VLM] Support HF format Phi-4-MM model
Labels: documentation, frontend, multi-modality, needs-rebase
#17121 opened Apr 24, 2025 by Isotr0py
[Model][Frontend] Adding timeseries modality support and Qwen2.5-ChatTS model support
Labels: ci/build, frontend, multi-modality, ready
#16852 opened Apr 18, 2025 by chemeris
[Misc] Raise ValueError for V1 during profiling when max_num_batched_tokens is too short
Labels: multi-modality, ready, v1
#16834 opened Apr 18, 2025 by Isotr0py
[V1] LogitsProcessor programming model
Labels: ci/build, documentation, frontend, multi-modality, needs-rebase, speculative-decoding, structured-output, tool-calling, v1
#16728 opened Apr 16, 2025 by afeldman-nm
[Metrics] Log multi-modal cache stats
Labels: ci/build, documentation, frontend, multi-modality, needs-rebase, ready, v1
#16478 opened Apr 11, 2025 by DarkLight1337
[V1][Perf] Avoid mem duplication when aggregating MM tensors
Labels: multi-modality, performance, v1
#16440 opened Apr 11, 2025 by njhill
[Feature]: Run performance benchmarks for multi-modal models in CI
Labels: feature request, help wanted, multi-modality
#16353 opened Apr 9, 2025 by DarkLight1337 (1 task done)
[Model][VLM] Add Qwen2.5-Omni model support (end-to-end full support)
Labels: ci/build, documentation, frontend, multi-modality, needs-rebase
[Usage]: How can I quickly obtain the number of prompt tokens containing multimodal data?
Labels: help wanted, multi-modality, usage
#16191 opened Apr 7, 2025 by yansh97 (1 task done)
[Frontend] Reduce vLLM's import time
Labels: ci/build, frontend, multi-modality, needs-rebase, speculative-decoding, structured-output, v1
#15128 opened Mar 19, 2025 by Chen-0210
[RFC]: Schema for checking input shapes for multi-modal models
Labels: feature request, good first issue, multi-modality, RFC
#14764 opened Mar 13, 2025 by DarkLight1337 (1 task done)
[Perf] Optimize Qwen2/2.5-VL ViT tensor generation performance
Labels: ci/build, multi-modality, v1
#14684 opened Mar 12, 2025 by imkero
[#14109][Bug] Fix Ray placement group allocation not respecting env VLLM_RAY_PER_WORKER_GPUS (fractional GPU)
Labels: ci/build, documentation, frontend, multi-modality, v1
#14521 opened Mar 9, 2025 by pcpLiu
[RFC]: Configurable multi-modal data for profiling
Labels: multi-modality, RFC
#14438 opened Mar 7, 2025 by DarkLight1337 (1 task done)
[Core] Add DoRA Support
Labels: ci/build, documentation, frontend, multi-modality, needs-rebase, speculative-decoding, v1
#14389 opened Mar 7, 2025 by ChloeL19
[Hardware][CPU] vLLM int8 quantization enablement for ARM CPU
Labels: ci/build, documentation, frontend, multi-modality, speculative-decoding, v1
#14129 opened Mar 3, 2025 by nishith-fujitsu