
dreambooth training script does not make "unet" folder for each checkpointed directory #2480

Closed
@yoon28

Description

Describe the bug

According to the documentation linked below, the DreamBooth training script should create a "unet" folder inside each checkpoint directory, but it does not.

https://huggingface.co/docs/diffusers/main/en/training/dreambooth#performing-inference-using-a-saved-checkpoint

Reproduction

Launch the DreamBooth training script, e.g.:

CUDA_VISIBLE_DEVICES=0,1 accelerate launch train_dreambooth.py \
  --pretrained_model_name_or_path="CompVis/stable-diffusion-v1-4" \
  --instance_data_dir=/mnt/volatile/data \
  --class_data_dir=/mnt/volatile/dreambooth/gen/person \
  --output_dir=/mnt/volatile/dreambooth/log/ \
  --with_prior_preservation \
  --prior_loss_weight=1.0 \
  --instance_prompt="ykj person" \
  --class_prompt="person" \
  --resolution=512 \
  --train_batch_size=4 \
  --gradient_accumulation_steps=1 \
  --learning_rate=2e-6 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=0 \
  --num_class_images=250 \
  --max_train_steps=1200 \
  --train_text_encoder \
  --checkpointing_steps=100

After training finishes, inspect the checkpoint directories: none of them contains a "unet" folder. As a result, I cannot sample from a saved checkpoint the way the linked documentation describes.
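
For reference, this is a minimal sketch of the checkpoint-inference flow that the linked documentation describes, with the paths built from the --output_dir and --checkpointing_steps values in the command above (the specific checkpoint-100 directory is illustrative). It fails because the "unet" and "text_encoder" subfolders are never written:

import torch
from diffusers import DiffusionPipeline, UNet2DConditionModel
from transformers import CLIPTextModel

# checkpoint-100 = first checkpoint produced with --checkpointing_steps=100
ckpt = "/mnt/volatile/dreambooth/log/checkpoint-100"

# Per the docs, each checkpoint directory should contain a "unet" subfolder.
unet = UNet2DConditionModel.from_pretrained(ckpt + "/unet")

# Since training ran with --train_text_encoder, the docs also expect a
# trained text encoder to be saved alongside the unet.
text_encoder = CLIPTextModel.from_pretrained(ckpt + "/text_encoder")

# Rebuild the pipeline from the base model, swapping in the trained weights.
pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    unet=unet,
    text_encoder=text_encoder,
    torch_dtype=torch.float16,
).to("cuda")

image = pipeline("ykj person").images[0]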

Logs

No response

System Info

  • diffusers version: 0.14.0.dev0
  • Platform: Linux-3.10.0-1160.59.1.el7.x86_64-x86_64-with-glibc2.31
  • Python version: 3.10.9
  • PyTorch version (GPU?): 1.13.1+cu117 (True)
  • Huggingface_hub version: 0.12.1
  • Transformers version: 4.26.1
  • Accelerate version: 0.16.0
  • xFormers version: not installed
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Metadata

Labels

bug (Something isn't working), stale (Issues that haven't received updates)
