use_safetensors parameter not passed to submodules when loading pipeline #9576

Closed
@elismasilva

Description

Describe the bug

When a model is loaded with from_pretrained and the checkpoint is stored as .bin rather than .safetensors, I get warnings like:

"An error occurred while trying to fetch models/stablediffusionapi/yamermix-v8-vae: Error no file named diffusion_pytorch_model.safetensors found in directory models/stablediffusionapi/yamermix-v8-vae.
Defaulting to unsafe serialization. Pass allow_pickle=False to raise an error instead"

But if I pass use_safetensors=False, I expect this warning to be suppressed; it currently is not, because the parameter is never forwarded to the submodules.
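
For clarity, this is the call I would expect to load without the warning (a minimal sketch using the same repo id as the reproduction below):

from diffusers import StableDiffusionXLPipeline
import torch

# use_safetensors=False should skip the .safetensors lookup entirely,
# but the warning still appears for each .bin submodule because the
# flag is not forwarded during pipeline loading.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stablediffusionapi/yamermix-v8-vae",
    torch_dtype=torch.float16,
    use_safetensors=False,
)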

So I noticed that after this line, the use_safetensors variable also needs to be passed:

cached_folder=cached_folder,

And after this line, the parameter needs to be passed on again:

loading_kwargs["variant"] = model_variants.pop(name, None)
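
A minimal, runnable sketch of the idea (build_loading_kwargs is a hypothetical helper, not the actual diffusers internals): the pipeline-level use_safetensors flag gets threaded into each submodule's loading kwargs right next to variant.

# Hypothetical sketch, not the real diffusers code: shows how the
# pipeline-level use_safetensors flag could be threaded into the
# per-submodule loading kwargs alongside variant.
def build_loading_kwargs(name, model_variants, use_safetensors=None):
    loading_kwargs = {}
    loading_kwargs["variant"] = model_variants.pop(name, None)
    if use_safetensors is not None:
        # Repass the flag so the submodule's from_pretrained sees it
        # and does not fall back to "unsafe serialization" with a warning.
        loading_kwargs["use_safetensors"] = use_safetensors
    return loading_kwargs

print(build_loading_kwargs("vae", {}, use_safetensors=False))
# -> {'variant': None, 'use_safetensors': False}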

Reproduction

from diffusers import StableDiffusionXLPipeline
import torch
pipe = StableDiffusionXLPipeline.from_pretrained(
    'stablediffusionapi/yamermix-v8-vae', torch_dtype=torch.float16
)

Logs

No response

System Info

  • 🤗 Diffusers version: 0.31.0.dev0
  • Platform: Windows-10-10.0.19045-SP0
  • Running on Google Colab?: No
  • Python version: 3.10.11
  • PyTorch version (GPU?): 2.4.0+cu121 (True)
  • Flax version (CPU?/GPU?/TPU?): not installed (NA)
  • Jax version: not installed
  • JaxLib version: not installed
  • Huggingface_hub version: 0.24.5
  • Transformers version: 4.40.1
  • Accelerate version: 0.29.3
  • PEFT version: 0.12.0
  • Bitsandbytes version: 0.43.1
  • Safetensors version: 0.4.4
  • xFormers version: 0.0.27.post2
  • Accelerator: NVIDIA GeForce RTX 3060 Ti, 8192 MiB
  • Using GPU in script?:
  • Using distributed or parallel set-up in script?:

Who can help?

No response
