[SD3 LoRA] Fix list index out of range #8584
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks! Left a single comment.
@sayakpaul is this test case good enough? If it is, I can upload the LoRAs to the internal testing repo. I ran the test locally and it passes with the PR and fails without it.
tests/lora/test_lora_layers_sd3.py (Outdated)

```python
def test_sd3_lora(self):
    components = self.get_dummy_components()

    pipe = self.pipeline_class(**components)
    pipe = pipe.to(torch_device)
    pipe.set_progress_bar_config(disable=None)

    lora_model_id = "OzzyGT/lora_test"
    lora_filename = "lora_diffusers_format.safetensors"
    pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)
    pipe.unload_lora_weights()

    lora_model_id = "OzzyGT/lora_test"
    lora_filename = "lora_peft_format.safetensors"
    pipe.load_lora_weights(lora_model_id, weight_name=lora_filename)
```
Thanks for adding the tests.
We should perhaps make this a SLOW one or require this test to have a GPU (`@require_torch_gpu`). And also add a comment to explain why we had to add this test. I think adding a mention of this PR is just fine.
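The gating suggested above can be sketched with a decorator in the spirit of diffusers' `@require_torch_gpu`. The `require_gpu` decorator below is a hypothetical stand-in written for illustration, not the library's actual implementation:

```python
import unittest


def require_gpu(test_case):
    """Skip the decorated test when no CUDA device is available.

    Hypothetical stand-in for diffusers' @require_torch_gpu decorator.
    """
    try:
        import torch

        has_gpu = torch.cuda.is_available()
    except ImportError:
        has_gpu = False
    return unittest.skipUnless(has_gpu, "test requires a GPU")(test_case)


class SD3LoRASlowTests(unittest.TestCase):
    @require_gpu
    def test_sd3_lora(self):
        # Would load both LoRA formats as in the test above;
        # runs only on machines with a CUDA device.
        pass
```

Tests gated this way are collected everywhere but reported as skipped on CPU-only runners, which keeps the default CI fast while still exercising the real checkpoints on GPU machines.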
Thanks a ton for this. I have left one minor comment but it's not a blocker.
Any reason why this was not merged and included in diffusers==0.29.1?
It lacked some crucial checks and tests.
@sayakpaul I don't have write permissions in ... Just in case, these are tiny LoRAs (2 KB) that I made using the ...
```python
    "vae": vae,
}

def test_sd3_lora(self):
```
@asomoza I think just this test could be marked with `require_torch_gpu` and be made a part of this class:

```python
class SD3LoRATests(unittest.TestCase):
```

I think that will be good enough.
Once this is done, feel free to merge the PR :)
Just a single comment: https://github.com/huggingface/diffusers/pull/8584/files#r1649020147
* fix
* add check
* key present is checked before
* test case draft
* aply suggestions
* changed testing repo, back to old class
* forgot docstring

Co-authored-by: YiYi Xu <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
What does this PR do?
For some reason, the LoRA state_dict gets saved with both formats, PEFT and Diffusers. This PR adds `convert_unet_state_dict_to_peft` to the loading function so it works with both.

Fixes #8579, #8565
Who can review?
Anyone in the community is free to review