Sm100 blockwise fp8 swap ab #18564
base: main
Conversation
👋 Hi! Thank you for contributing to the vLLM project. 💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels. Just a reminder: PRs do not trigger a full CI run by default; only a reduced set of checks runs automatically. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can add 🚀 […]
Signed-off-by: Lain <[email protected]>
Force-pushed from 559e27d to 83cea35.
Thanks @IwakuraRein, #18778 should be separate as it only touches […]
The heuristics are fairly difficult to read now due to separating the tile scalars from the MmaTileShape, but overall these changes seem reasonable. Could you share an accuracy eval as a smoke test since we don't have sm100 in CI?
@mgoin Hi. I have run the tests on the following problem shapes and compared against the Triton kernel.
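A minimal sketch of the kind of accuracy smoke test referred to above: run two block-wise FP8 GEMM implementations on the same quantized inputs and compare their outputs. The function names, scale layout, problem shapes, and tolerance here are assumptions for illustration, not vLLM's actual API or the numbers reported in this PR.

```python
import torch

def compare_blockwise_fp8(kernel_under_test, reference_kernel,
                          m: int, n: int, k: int, block: int = 128) -> None:
    # Random FP8 operands; A is (m, k), B is (k, n).
    a = torch.randn(m, k, device="cuda").to(torch.float8_e4m3fn)
    b = torch.randn(k, n, device="cuda").to(torch.float8_e4m3fn)
    # Assumed per-group scales: 1x128 groups along K for A, 128x128 tiles for B.
    a_scale = torch.rand(m, k // block, device="cuda", dtype=torch.float32)
    b_scale = torch.rand(k // block, n // block, device="cuda", dtype=torch.float32)

    out = kernel_under_test(a, b, a_scale, b_scale).float()
    ref = reference_kernel(a, b, a_scale, b_scale).float()
    rel_err = (out - ref).abs().max() / ref.abs().max().clamp_min(1e-6)
    assert rel_err < 2e-2, f"({m}, {n}, {k}): rel err {rel_err:.3e}"

# Example sweep (illustrative shapes; pass the CUTLASS and Triton entry points):
# for m, n, k in [(1, 4096, 4096), (64, 4096, 4096), (4096, 4096, 4096)]:
#     compare_blockwise_fp8(cutlass_kernel, triton_kernel, m, n, k)
```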
Add a swap-and-transpose tensor A/B path to the SM100 block-wise FP8 GEMM. This reduces the overhead of the extra padding.
Also add a basic heuristic; a rough sketch of the idea is below.
Note: it looks like #18778 duplicates part of this PR.
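The sketch below illustrates the swap-AB idea only: when M is small, compute C^T = B^T @ A^T instead of C = A @ B, so the small dimension lands on the N side of the kernel and the extra M padding is avoided. The threshold and helper names are illustrative assumptions, not this PR's actual heuristic or kernel path.

```python
import torch

# Assumed cutoff for the sketch; the real heuristic is tuned per problem shape.
SWAP_AB_M_THRESHOLD = 64

def gemm_with_swap_ab(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    m = a.shape[0]
    if m <= SWAP_AB_M_THRESHOLD:
        # (B^T @ A^T)^T == A @ B, so transposing both operands and the output
        # gives the same result while the kernel sees the small dim as N.
        return (b.t().contiguous() @ a.t().contiguous()).t()
    return a @ b

# Quick numerical check of the identity on a small-M case.
a = torch.randn(8, 512)
b = torch.randn(512, 256)
torch.testing.assert_close(gemm_with_swap_ab(a, b), a @ b, rtol=1e-4, atol=1e-4)
```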