[SD3 dreambooth lora] smol fix to checkpoint saving #9993
Conversation
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
```python
elif isinstance(model, type(unwrap_model(text_encoder_one))):  # or text_encoder_two
    # both text encoders share the same class, so check the hidden size
    # to distinguish between text_encoder_one and text_encoder_two
    hidden_size = unwrap_model(model).config.hidden_size
    if hidden_size == 768:
        text_encoder_one_lora_layers_to_save = get_peft_model_state_dict(model)
    elif hidden_size == 1280:
        text_encoder_two_lora_layers_to_save = get_peft_model_state_dict(model)
```
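To illustrate why the hidden-size check is needed: both SD3 text encoders are instances of the same class, so an `isinstance` check alone cannot tell them apart, while their config hidden sizes (768 for the CLIP-L encoder, 1280 for the CLIP-G encoder) can. A minimal sketch of the idea, using hypothetical stand-in classes rather than the actual diffusers/transformers models:

```python
# Hypothetical stand-ins for the real text encoder models; in the
# training script both encoders are the same transformers class.
class Config:
    def __init__(self, hidden_size):
        self.hidden_size = hidden_size

class TextEncoder:  # stand-in; not the real transformers class
    def __init__(self, hidden_size):
        self.config = Config(hidden_size)

def bucket_for(model):
    """Decide which save bucket a text-encoder model belongs to,
    based on its config hidden size (sketch of the PR's approach)."""
    hidden_size = model.config.hidden_size
    if hidden_size == 768:       # CLIP-L in SD3
        return "text_encoder_one"
    elif hidden_size == 1280:    # CLIP-G in SD3
        return "text_encoder_two"
    raise ValueError(f"unexpected hidden size: {hidden_size}")

enc_one = TextEncoder(768)
enc_two = TextEncoder(1280)

# isinstance cannot distinguish the two encoders (same class) ...
assert isinstance(enc_one, type(enc_two))

# ... but the hidden size can.
print(bucket_for(enc_one))  # text_encoder_one
print(bucket_for(enc_two))  # text_encoder_two
```

Without this check, both encoders would fall into the same branch and one encoder's LoRA layers would overwrite the other's in the saved checkpoint.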
Works for me. We cannot distinguish with the classes here because both have the same class. Maybe this reasoning as a comment?
Done :)
Thanks for fixing!
* smol change to fix checkpoint saving & resuming (as done in train_dreambooth_sd3.py)
* style
* modify comment to explain reasoning behind hidden size check
Minor change to fix saving of text-encoder layers in the LoRA training script, matching what is already in place in the dreambooth training script.
related to #8955