Why do we need to move image to torch.float32 in FluxPipeline? #11392
Unanswered · Podidiving asked this question in Q&A · 0 replies
👋 I don't quite understand the logic behind moving `init_image` to `torch.float32` format here:

diffusers/src/diffusers/pipelines/flux/pipeline_flux_img2img.py
Line 829 in 026507c

because the variable is next used in the `prepare_latents` method, where it is moved to another dtype. Is this a legacy from the Stable Diffusion VAE model, or is there some other idea behind it?
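For context, the pattern being asked about can be sketched roughly as follows. This is a minimal stand-in, not the actual diffusers code: the function names mirror the pipeline's, but the bodies are hypothetical placeholders (NumPy instead of torch, a dummy multiply instead of VAE encoding). It just shows how an upcast to float32 before encoding is followed by a cast back to the pipeline dtype inside `prepare_latents`, which is what makes the earlier cast look redundant:

```python
import numpy as np

def preprocess(image):
    # The step in question: cast the input image to float32 before
    # encoding, so the pre-encode math runs in full precision even
    # when the pipeline itself runs in float16.
    return image.astype(np.float32)

def prepare_latents(image, dtype):
    # Hypothetical stand-in for the pipeline's prepare_latents:
    # encode (placeholder multiply here) and then cast the result
    # back to the pipeline dtype.
    latents = image * 0.5  # placeholder for VAE encode + scaling
    return latents.astype(dtype)

image = np.ones((4, 4), dtype=np.float16)
image = preprocess(image)              # float16 -> float32
latents = prepare_latents(image, np.float16)  # float32 -> float16
print(image.dtype, latents.dtype)      # float32 float16
```

If the float32 cast exists for precision during encoding, the later downcast would not make it pointless; but whether that is the actual rationale here is exactly the question.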