Allow users to save SDXL LoRA weights for only one text encoder #7607
Conversation
The method checks whether at least one of the `unet`, `text_encoder`, and `text_encoder_2` LoRA weights is passed, which was not reflected in the implementation.
text_encoder_2_lora_layers (`Dict[str, torch.nn.Module]` or `Dict[str, torch.Tensor]`):
    State dict of the LoRA layers corresponding to the `text_encoder_2`. Must explicitly pass the text
    encoder LoRA state dict because it comes from 🤗 Transformers.
It's there:
diffusers/src/diffusers/loaders/lora.py
Line 1375 in 7e39516
text_encoder_2_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
You need to look at the right class:
https://github.com/huggingface/diffusers/blob/7e39516627c69b71f8b21a2b53689028d4733b72/src/diffusers/loaders/lora.py#L1288C7-L1288C39
Yes, this is what I aimed to fix.
I just added the missing parameter to the documentation (to match the signature) and split the following `if` condition:
diffusers/src/diffusers/loaders/lora.py
Line 1418 in 7e39516
if text_encoder_lora_layers and text_encoder_2_lora_layers:
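A minimal sketch of the decoupled check, assuming the `pack_weights` helper and the argument/prefix names from the surrounding `save_lora_weights` method (names are assumptions, not verified against this exact revision):

```python
# Hypothetical sketch: handle each text encoder's LoRA layers independently
# instead of requiring both to be passed together.
state_dict = {}

if not (unet_lora_layers or text_encoder_lora_layers or text_encoder_2_lora_layers):
    raise ValueError(
        "You must pass at least one of `unet_lora_layers`, "
        "`text_encoder_lora_layers` or `text_encoder_2_lora_layers`."
    )

if unet_lora_layers:
    state_dict.update(pack_weights(unet_lora_layers, "unet"))

# Each text encoder is packed on its own, so passing only one of them works.
if text_encoder_lora_layers:
    state_dict.update(pack_weights(text_encoder_lora_layers, "text_encoder"))

if text_encoder_2_lora_layers:
    state_dict.update(pack_weights(text_encoder_2_lora_layers, "text_encoder_2"))
```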
Thanks for explaining! LGTM.
Thanks for looking into it in record time :)
Cheers
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Can you fix the tests?
ohh the failing test actually might not be relevant here - will take a look on our end!
Will merge after the CI is green (barring the failing test).
thank you!
SDXL LoRA weights for text encoders should be decoupled on save. The method checks whether at least one of the `unet`, `text_encoder`, and `text_encoder_2` LoRA weights is passed, which was not reflected in the implementation.
Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: YiYi Xu <[email protected]>
What does this PR do?
Allow users to save the LoRA weights for only one text encoder if desired, which is often useful in DreamBooth training to avoid overcooking the model too quickly. A rough usage sketch follows below.
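The call below mirrors `StableDiffusionXLPipeline.save_lora_weights`; the state dicts are dummy placeholders and the key names and shapes are assumptions for illustration only.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# Placeholder LoRA state dicts; in practice these come from a LoRA training run.
unet_lora_layers = {"mid_block.attentions.0.proj_in.lora_A.weight": torch.zeros(4, 1280)}
text_encoder_lora_layers = {"text_model.encoder.layers.0.self_attn.q_proj.lora_A.weight": torch.zeros(4, 768)}

# Save the UNet and first text encoder LoRA weights only;
# `text_encoder_2_lora_layers` is deliberately omitted.
StableDiffusionXLPipeline.save_lora_weights(
    save_directory="my-sdxl-lora",
    unet_lora_layers=unet_lora_layers,
    text_encoder_lora_layers=text_encoder_lora_layers,
)
```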
Also, the method checks if at least one of `unet`, `text_encoder`, or `text_encoder_2` LoRA weights is passed, which was not reflected in the implementation.
Before submitting
Here are the documentation guidelines, and here are tips on formatting docstrings.
Who can review?
I'm thinking of @sayakpaul since it's a topic close to the training examples.