
[BugFix] Fix vllm_flash_attn install issues #17267


Merged
7 commits merged into vllm-project:main on Apr 28, 2025

Conversation

@LucasWilkinson (Collaborator) commented on Apr 27, 2025

This PR is a collection of fixes for vllm_flash_attn install issues; unfortunately, the vllm_flash_attn install is fairly hacky, sensitive, and complex right now. This will hopefully be fixed in the future if we move to a separate kernel library. Apologies for missing #17159.

  1. Fix Python packaging edge cases #17159 introduced an __init__.py into vllm/vllm_flash_attn, but during the install process we copy over the __init__.py file from the vllm_flash_attn repo (https://github.com/vllm-project/flash-attention). This created a bad diff, since vllm/vllm_flash_attn/__init__.py was now being tracked by git. PRs [Chore] added stubs for vllm_flash_attn during development mode #17228 and [Chore] ignore override default __init__.py when building from source #17260 attempted to fix this by not copying the __init__.py from the vllm_flash_attn repo. That approach is error prone because it requires us to keep the stub files in sync between the repos. I am assuming #17159 introduced the __init__.py to make sure vllm/vllm_flash_attn/fa_utils.py got packaged correctly, since that is the only Python file that would exist in the folder if the __init__.py from the vllm_flash_attn repo was not copied over. As an alternative, this PR moves that file to vllm/attention/utils/fa_utils.py, so it always gets packaged regardless of whether vllm_flash_attn is available. This means we can remove the __init__.py from the folder and rely on the one copied from the vllm_flash_attn repo (if it's not present, we no longer care about packaging this folder); see the first sketch after this list. (NOTE: this PR reverts [Chore] added stubs for vllm_flash_attn during development mode #17228.)

  2. [Perf] Optimize rotary_emb implementation to use Triton operator for improved inference performance #16457 and Add rotary triton operator to vllm_flash_attn flash-attention#64 introduced new Python files to vllm_flash_attn, but these were not getting properly picked up by setup.py. This PR now recursively globs for .py files in vllm_flash_attn, making sure no files are inadvertently missed (FIX [Bug]: nightly version: ModuleNotFoundError: No module named 'vllm.vllm_flash_attn.layers' #17263); see the second sketch after this list. Note this PR supersedes [Bugfix] Fix vllm_flash_attn rotary import #17247, which achieves the same result, albeit in more lines of code (shout-out to @jeejeelee and @aarnphm for highlighting the issue).
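
For context on fix 1, here is a minimal sketch of the guarded-import pattern that moving fa_utils.py out of vllm/vllm_flash_attn enables. The module layout matches the PR description, but the symbol and helper names are assumptions for illustration, not the actual contents of vllm/attention/utils/fa_utils.py:

```python
# vllm/attention/utils/fa_utils.py -- illustrative sketch only.
# This module ships with vLLM unconditionally, so it must import cleanly
# even when the optional compiled vllm_flash_attn package is absent.
try:
    # Succeeds only when vllm_flash_attn was built and its files
    # (including its own __init__.py) were copied into the wheel.
    from vllm import vllm_flash_attn  # noqa: F401
    _HAS_FLASH_ATTN = True
except ImportError:
    _HAS_FLASH_ATTN = False


def flash_attn_available() -> bool:
    """Hypothetical helper: report whether vllm_flash_attn imported."""
    return _HAS_FLASH_ATTN
```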
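
And for fix 2, a minimal sketch of the recursive .py glob described above; the helper name and directory arguments are assumptions, not the exact code in vLLM's setup.py:

```python
import glob
import os
import shutil


def copy_vllm_flash_attn_python_files(build_dir: str, install_dir: str) -> None:
    """Recursively collect every .py file under the built vllm_flash_attn
    tree and copy it into the install tree, preserving sub-package layout
    (so subdirectories such as layers/ are no longer missed)."""
    pattern = os.path.join(build_dir, "**", "*.py")
    for src in glob.glob(pattern, recursive=True):
        rel = os.path.relpath(src, build_dir)
        dst = os.path.join(install_dir, rel)
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copy2(src, dst)
```

A non-recursive glob ("*.py") only matches files at the top level, which is why the new layers/ subpackage was silently dropped from wheels.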

Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Aaron Pham <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>

👋 Hi! Thank you for contributing to the vLLM project.

💬 Join our developer Slack at https://slack.vllm.ai to discuss your PR in #pr-reviews, coordinate on features in #feat- channels, or join special interest groups in #sig- channels.

Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which starts a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can either add the ready label to the PR or enable auto-merge.

🚀

@aarnphm (Collaborator) left a comment


This is clean. LGTM. I will close the other PR

@houseroad (Collaborator) commented:

@zhewenl, we will need to update the build script again. It will probably be me, since I will be the on-call.

@LucasWilkinson added the ready label (ONLY add when PR is ready to merge / full CI is needed) on Apr 27, 2025
@tlrmchlsmth (Collaborator) left a comment


Thanks for fixing this

@tlrmchlsmth enabled auto-merge (squash) on April 27, 2025 at 18:21
@WoosukKwon disabled auto-merge on April 28, 2025 at 00:27
@WoosukKwon merged commit d8bccde into vllm-project:main on Apr 28, 2025
88 of 90 checks passed
wuisawesome pushed a commit to character-tech/vllm that referenced this pull request on Apr 28, 2025:
Signed-off-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Aaron Pham <[email protected]>
Labels: ci/build, ready, v1
Development

Successfully merging this pull request may close these issues.

[Bug]: nightly version: ModuleNotFoundError: No module named 'vllm.vllm_flash_attn.layers'
7 participants