[INTEL_HPU] Enable FusedBlockMultiTransformerHPU #10514


Merged
1 commit merged into PaddlePaddle:develop from the block_attn branch on May 19, 2025

Conversation

@zongwave (Contributor) commented on Apr 28, 2025

  • Add FusedBlockMultiTransformerHPU and prepare_input_hpu
  • Post-process the attention output
  • Optimize the for/if-else branching in FusedBlockMultiTransformerHPU
  • Update kv_head handling for the HPU tensor shape
  • Fix an issue where the benchmark could not run with larger batch sizes

Before submitting

  • Lint code. If there are lint issues, please format the code first.
# Install and register `pre-commit` in the project folder
pip install pre-commit && pre-commit install

# Process previous code files separately
pre-commit run --files XXXX.py
  • Add test cases into the tests folder. If there are codecov issues, please add test cases first.

PR types

New features

PR changes

Models

Description

Enable Intel HPU FusedBlockMultiTransformer
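
Below is a minimal, hypothetical sketch of the device-based dispatch this feature implies. The class names mirror the PR description, but the stub classes, the select_fused_block_transformer helper, and the device-string check are assumptions for illustration only, not the actual PaddleNLP code (the real changes live in fused_transformer_layers.py and llama/modeling.py).

# Hypothetical sketch -- not the actual PaddleNLP implementation.
import paddle


class FusedBlockMultiTransformer:
    """Stand-in for the existing fused block multi-transformer (assumption)."""

    def __init__(self, config):
        self.config = config


class FusedBlockMultiTransformerHPU(FusedBlockMultiTransformer):
    """Stand-in for the Intel HPU variant added by this PR (assumption)."""

    def __init__(self, config):
        super().__init__(config)
        # The HPU path would also prepare inputs (prepare_input_hpu) and
        # post-process the attention output for the HPU kernel layout.


def select_fused_block_transformer(config):
    """Pick a transformer implementation for the current device (sketch)."""
    # Assumption: Intel HPU devices appear in paddle.get_device() with an
    # "intel_hpu"/"hpu" prefix; the real dispatch condition may differ.
    device = paddle.get_device()
    if device.startswith(("intel_hpu", "hpu")):
        return FusedBlockMultiTransformerHPU(config)
    return FusedBlockMultiTransformer(config)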

@paddle-bot (bot) commented on Apr 28, 2025

Thanks for your contribution!

@codecov (bot) commented on Apr 28, 2025

Codecov Report

Attention: Patch coverage is 0% with 99 lines in your changes missing coverage. Please review.

Project coverage is 48.62%. Comparing base (ddcb722) to head (1cd6714).
Report is 3 commits behind head on develop.

Current head 1cd6714 differs from the pull request's most recent head 3603c0a.

Please upload reports for the commit 3603c0a to get more accurate results.

Files with missing lines                                  Patch %   Missing lines
...erimental/transformers/fused_transformer_layers.py     0.00%     68 ⚠️
...dlenlp/experimental/transformers/llama/modeling.py     0.00%     31 ⚠️

❌ Your patch status has failed because the patch coverage (0.00%) is below the target coverage (80.00%). You can increase the patch coverage or adjust the target coverage.
❌ Your project status has failed because the head coverage (48.62%) is below the target coverage (58.00%). You can increase the head coverage or adjust the target coverage.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop   #10514      +/-   ##
===========================================
+ Coverage    46.94%   48.62%   +1.67%     
===========================================
  Files          799      768      -31     
  Lines       132348   127043    -5305     
===========================================
- Hits         62137    61780     -357     
+ Misses       70211    65263    -4948     


@zongwave force-pushed the block_attn branch 3 times, most recently from 1cd6714 to e683b1b, on April 29, 2025 at 05:55.
@DrownFish19 (Collaborator) left a comment
LGTM

@ZHUI merged commit 5c482b6 into PaddlePaddle:develop on May 19, 2025.
8 of 10 checks passed