
[Quantization] Add compressed-tensors NVFP4 support #18312


Draft · wants to merge 7 commits into base: main from nvfp4_emulation

Conversation

@dsikka (Contributor) commented May 17, 2025

Summary

  • Add CUTLASS kernel support
  • If the current platform does not support the CUTLASS kernel, fall back to NVFP4 emulation (see the selection sketch below)
  • Question: What should our minimum required platform be, if we're still supporting emulation?

Note: Will need to update config validation + the testing models after the next compressed-tensors release, as ct 0.9.4 does not include the compressor or the new strategy required for NVFP4 activations.
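
For illustration, the intended fallback amounts to something like the sketch below. The helper names and the compute-capability threshold are assumptions, not vLLM's actual API:

import torch

# Hypothetical helpers, illustrative only; not vLLM's actual API.
def has_nvfp4_cutlass_support() -> bool:
    # Assumes the NVFP4 CUTLASS kernels need Blackwell-class hardware
    # (compute capability >= 10.0); the exact threshold is an assumption.
    if not torch.cuda.is_available():
        return False
    return torch.cuda.get_device_capability() >= (10, 0)

def select_nvfp4_backend() -> str:
    if has_nvfp4_cutlass_support():
        return "cutlass"    # fused low-precision kernel path
    return "emulation"      # dequantize-and-compute fallback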

Generations:

import torch

from vllm import LLM, SamplingParams

prompts = [
    "The Swiss Alps are",
    "Brad Marchand is",
    "The Toronto Maple Leafs are"
]

# Create a sampling params object
sampling_params = SamplingParams(temperature=0.80, top_p=0.95, max_tokens=40, min_tokens=10)
llm = LLM("nm-testing/TinyLlama-1.1B-Chat-v1.0-NVFP4A4")

# Generate a completion for each prompt and print it
outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.outputs[0].text)

Output:

  • "The Swiss Alps are" → known for their breathtaking beauty and vast expanse. As you drive through the alpine landscape, you might notice towering peaks, deep valleys, and colorful hues of
  • "Brad Marchand is" → 49 years old. Sky Sports News has also reported that Brad Marchand has signed a two-year deal with the Toronto Maple Leafs worth $6.5 million.
  • "The Toronto Maple Leafs are" → back at home after a two-week road trip, and they'll have to make a major change if they're going to compete for the Stanley Cup. Len Kasik/The

Testing

  • Add a temporary testing model; the model's config will need to be updated with the correct strategy after the next compressed-tensors release


👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, covering a small, essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@dsikka dsikka marked this pull request as ready for review May 18, 2025 00:06
@dsikka (Contributor, Author) commented May 18, 2025

@robertgshaw2-redhat @mgoin can I get a ready label?

is_float_type = (weight_quant.type == QuantizationType.FLOAT.value
                 and input_quant.type == QuantizationType.FLOAT.value)
is_4_bits = weight_quant.num_bits == 4 and input_quant.num_bits == 4

Member comment on the diff above:

We should check for dynamic input as well
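
For illustration, folding that in could look something like the following. The function name is hypothetical, `input_quant.dynamic` mirrors compressed-tensors' QuantizationArgs field, and whether NVFP4 requires it to be True or False is an assumption here:

from compressed_tensors.quantization import QuantizationType

# Sketch only: extends the scheme match above with a dynamic-activation check.
def _is_nvfp4_scheme(weight_quant, input_quant) -> bool:
    is_float_type = (weight_quant.type == QuantizationType.FLOAT.value
                     and input_quant.type == QuantizationType.FLOAT.value)
    is_4_bits = weight_quant.num_bits == 4 and input_quant.num_bits == 4
    # Assumed polarity: reject dynamic activation quantization.
    is_dynamic_input = input_quant.dynamic
    return is_float_type and is_4_bits and not is_dynamic_input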

@mgoin added the quantization and ready (ONLY add when PR is ready to merge/full CI is needed) labels May 18, 2025
@robertgshaw2-redhat (Collaborator): 👀

@dsikka dsikka force-pushed the nvfp4_emulation branch from 6980284 to f6c7914 Compare May 19, 2025 17:46
@dsikka dsikka changed the title [Quantization] Add compressed-tensors NVFP4 emulation support [Quantization] Add compressed-tensors NVFP4 support May 19, 2025
dsikka added 3 commits May 19, 2025 14:49
@mgoin (Member) commented May 19, 2025

I still want the default behavior to run w4a4 as w4a16 on hardware where we can support the Marlin kernel, so users have the same experience as FP8. If you want to keep the emulation pathway for evals, could we hide it behind an env var?

@dsikka (Contributor, Author) commented May 20, 2025

> I still want the default behavior to run w4a4 as w4a16 on hardware where we can support the Marlin kernel, so users have the same experience as FP8. If you want to keep the emulation pathway for evals, could we hide it behind an env var?

Yeah, I was thinking of combining the two schemes in a follow-up, but I can do it in this PR.
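
For illustration, the env-var gate being discussed could look something like this; the variable name VLLM_USE_NVFP4_EMULATION and the helper are hypothetical, not an actual vLLM configuration surface:

import os

# Hypothetical opt-in flag; not an actual vLLM environment variable.
USE_NVFP4_EMULATION = os.environ.get("VLLM_USE_NVFP4_EMULATION", "0") == "1"

def select_w4a4_scheme(marlin_supported: bool) -> str:
    if USE_NVFP4_EMULATION:
        return "nvfp4_emulation"   # opt-in, e.g. for eval comparisons
    if marlin_supported:
        return "w4a16_marlin"      # default: run w4a4 checkpoints as w4a16
    return "nvfp4_emulation"       # fallback when Marlin is unavailable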

@mgoin (Member) left a comment

Okay, LGTM to land emulation and coalesce with Marlin later.

@DarkLight1337 (Member) commented:

Please merge from main to fix the CI failures

@dsikka dsikka marked this pull request as draft May 23, 2025 16:53
@dsikka (Contributor, Author) commented May 23, 2025

Progress has been fast enough that we can wait for the next compressed-tensors release next week. Converting to draft until then.

Labels: quantization, ready (ONLY add when PR is ready to merge/full CI is needed)

4 participants