Describe the bug
In Flux multi-controlnet, when using two controlnets (https://huggingface.co/promeai/FLUX.1-controlnet-lineart-promeai and https://huggingface.co/InstantX/FLUX.1-dev-Controlnet-Canny/blob/main/config.json), the lineart controlnet has 4 double layers while the canny controlnet has 5. We think the code handling this will have a negative impact on the result:
in the transformer calculation, the length of the controlnet block-sample list is taken directly, and because of the calculation logic in Figure 1, the promeai samples (which should be treated as length 4) end up classified into the column of length 5.
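The suspected mismatch can be sketched with plain Python floats standing in for the real block-residual tensors (`sum_block_samples` is a hypothetical stand-in for the residual summation in FluxMultiControlNetModel, and the interval computation mirrors the block-distribution logic in the transformer; the exact names are assumptions, not the library's API):

```python
import math

# Hypothetical stand-in for the residual summation in FluxMultiControlNetModel:
# zip() silently truncates to the shorter list when the two controlnets
# return a different number of double-block residuals.
def sum_block_samples(samples_a, samples_b):
    return [a + b for a, b in zip(samples_a, samples_b)]

canny_samples = [2.0] * 5    # InstantX canny: 5 double layers
lineart_samples = [1.0] * 4  # promeai lineart: 4 double layers

merged = sum_block_samples(canny_samples, lineart_samples)
print(len(merged))  # 4 -> the canny model's 5th residual is silently dropped

# The transformer then spreads the merged residuals over its own double
# blocks based only on len(merged), so a 4-deep and a 5-deep controlnet
# produce different block-to-residual mappings.
num_double_blocks = 19  # FLUX.1-dev
interval = math.ceil(num_double_blocks / len(merged))
print(interval)  # 5 when len(merged) == 4, but 4 when len(merged) == 5
```

Either way one of the two controlnets is applied with the wrong block mapping, which matches the degraded results we observe.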
Reproduction
import torch
from diffusers.utils import load_image
from diffusers.pipelines.flux.pipeline_flux_controlnet import FluxControlNetPipeline
from diffusers.models.controlnet_flux import FluxControlNetModel, FluxMultiControlNetModel
base_model = 'black-forest-labs/FLUX.1-dev'
# load controlnet models
controlnet_model_canny = 'InstantX/FLUX.1-dev-Controlnet-Canny'
controlnet_canny = FluxControlNetModel.from_pretrained(controlnet_model_canny, torch_dtype=torch.bfloat16)
controlnet_model_lineart = 'promeai/FLUX.1-controlnet-lineart-promeai'
controlnet_lineart = FluxControlNetModel.from_pretrained(controlnet_model_lineart, torch_dtype=torch.bfloat16)
controlnet_canny_lineart = FluxMultiControlNetModel([controlnet_canny, controlnet_lineart])
pipe = FluxControlNetPipeline.from_pretrained(base_model, controlnet=controlnet_canny_lineart, torch_dtype=torch.bfloat16)
pipe.to("cuda")
control_image_canny = load_image("one canny image")
control_image_lineart = load_image("one lineart image")
prompt = "A girl in city, 25 years old, cool, futuristic"
image = pipe(
    prompt,
    control_image=[control_image_canny, control_image_lineart],
    controlnet_conditioning_scale=[0.6, 0.6],
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("image.jpg")
Logs
No response
System Info
- 🤗 Diffusers version: 0.31.0
- Platform: Linux-5.15.0-105-generic-x86_64-with-glibc2.31
- Running on Google Colab?: No
- Python version: 3.10.15
- PyTorch version (GPU?): 2.5.1+cu124 (True)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Huggingface_hub version: 0.26.2
- Transformers version: 4.46.2
- Accelerate version: 1.1.1
- PEFT version: 0.13.2
- Bitsandbytes version: not installed
- Safetensors version: 0.4.5
- xFormers version: not installed
- Accelerator: NVIDIA A100-SXM4-80GB, 81920 MiB
               NVIDIA A100-SXM4-80GB, 81920 MiB
- Using GPU in script?:
- Using distributed or parallel set-up in script?: