
Commit 26a8c00

Revert "[Community] AnimateDiff + Controlnet Pipeline (#5928)"
This reverts commit 821726d.
1 parent 3dc2362 commit 26a8c00

File tree

2 files changed: +0 -1202 lines

examples/community/README.md

Lines changed: 0 additions & 65 deletions
@@ -50,7 +50,6 @@ prompt-to-prompt | change parts of a prompt and retain image structure (see [pap
| Latent Consistency Interpolation Pipeline | Interpolate the latent space of Latent Consistency Models with multiple prompts | [Latent Consistency Interpolation Pipeline](#latent-consistency-interpolation-pipeline) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1pK3NrLWJSiJsBynLns1K1-IDTW9zbPvl?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) |
| Regional Prompting Pipeline | Assign multiple prompts for different regions | [Regional Prompting Pipeline](#regional-prompting-pipeline) | - | [hako-mikan](https://github.com/hako-mikan) |
| LDM3D-sr (LDM3D upscaler) | Upscale low resolution RGB and depth inputs to high resolution | [StableDiffusionUpscaleLDM3D Pipeline](https://github.com/estelleafl/diffusers/tree/ldm3d_upscaler_community/examples/community#stablediffusionupscaleldm3d-pipeline) | - | [Estelle Aflalo](https://github.com/estelleafl) |
- | AnimateDiff ControlNet Pipeline | Combines AnimateDiff with precise motion control using ControlNets | [AnimateDiff ControlNet Pipeline](#animatediff-controlnet-pipeline) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SKboYeGjEQmQPWoFC0aLYpBlYdHXkvAu?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) and [Edoardo Botta](https://github.com/EdoardoBotta) |
| DemoFusion Pipeline | Implementation of [DemoFusion: Democratising High-Resolution Image Generation With No $$$](https://arxiv.org/abs/2311.16973) | [DemoFusion Pipeline](#DemoFusion) | - | [Ruoyi Du](https://github.com/RuoyiDu) |

To load a custom pipeline, pass the `custom_pipeline` argument to `DiffusionPipeline`, naming one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines; we will merge them quickly.
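As a minimal sketch (the base checkpoint is just an example and `filename_in_the_community_folder` is a placeholder for the community pipeline file you actually want), loading a custom pipeline looks like this:

```py
import torch
from diffusers import DiffusionPipeline

# Load a base checkpoint and swap in a community pipeline implementation by name.
# The name refers to a file in diffusers/examples/community (placeholder below).
pipe = DiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    custom_pipeline="filename_in_the_community_folder",
    torch_dtype=torch.float16,
)
pipe.to("cuda")
```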
@@ -2840,70 +2839,6 @@ The Pipeline supports `compel` syntax. Input prompts using the `compel` structur
* Reconstructed image:
* ![dps_generated_image](https://github.com/tongdaxu/Images/assets/22267548/b74f084d-93f4-4845-83d8-44c0fa758a5f)

### AnimateDiff ControlNet Pipeline

This pipeline combines AnimateDiff and ControlNet. Enjoy precise motion control for your videos! Refer to [this](https://github.com/huggingface/diffusers/issues/5866) issue for more details.

```py
import torch
from diffusers import AutoencoderKL, ControlNetModel, MotionAdapter
from diffusers.pipelines import DiffusionPipeline
from diffusers.schedulers import DPMSolverMultistepScheduler
from diffusers.utils import export_to_gif
from PIL import Image

# Motion adapter, OpenPose ControlNet, and VAE used by the combined pipeline.
motion_id = "guoyww/animatediff-motion-adapter-v1-5-2"
adapter = MotionAdapter.from_pretrained(motion_id)
controlnet = ControlNetModel.from_pretrained("lllyasviel/control_v11p_sd15_openpose", torch_dtype=torch.float16)
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse", torch_dtype=torch.float16)

# Load the base model with the community AnimateDiff + ControlNet pipeline.
model_id = "SG161222/Realistic_Vision_V5.1_noVAE"
pipe = DiffusionPipeline.from_pretrained(
    model_id,
    motion_adapter=adapter,
    controlnet=controlnet,
    vae=vae,
    custom_pipeline="pipeline_animatediff_controlnet",
).to(device="cuda", dtype=torch.float16)
pipe.scheduler = DPMSolverMultistepScheduler.from_pretrained(
    model_id, subfolder="scheduler", clip_sample=False, timestep_spacing="linspace", steps_offset=1
)
pipe.enable_vae_slicing()

# One conditioning image per output frame (here: 16 OpenPose frames already on disk).
conditioning_frames = []
for i in range(1, 16 + 1):
    conditioning_frames.append(Image.open(f"frame_{i}.png"))

prompt = "astronaut in space, dancing"
negative_prompt = "bad quality, worst quality, jpeg artifacts, ugly"
result = pipe(
    prompt=prompt,
    negative_prompt=negative_prompt,
    width=512,
    height=768,
    conditioning_frames=conditioning_frames,
    num_inference_steps=12,
).frames[0]

# `result` is already the list of output frames, so pass it to export_to_gif directly.
export_to_gif(result, "result.gif")
```

<table>
  <tr><td colspan="2" align=center><b>Conditioning Frames</b></td></tr>
  <tr align=center>
    <td align=center><img src="https://user-images.githubusercontent.com/7365912/265043418-23291941-864d-495a-8ba8-d02e05756396.gif" alt="input-frames"></td>
  </tr>
  <tr><td colspan="2" align=center><b>AnimateDiff model: SG161222/Realistic_Vision_V5.1_noVAE</b></td></tr>
  <tr>
    <td align=center><img src="https://github.com/huggingface/diffusers/assets/72266394/baf301e2-d03c-4129-bd84-203a1de2b2be" alt="gif-1"></td>
    <td align=center><img src="https://github.com/huggingface/diffusers/assets/72266394/9f923475-ecaf-452b-92c8-4e42171182d8" alt="gif-2"></td>
  </tr>
  <tr><td colspan="2" align=center><b>AnimateDiff model: CardosAnime</b></td></tr>
  <tr>
    <td align=center><img src="https://github.com/huggingface/diffusers/assets/72266394/b2c41028-38a0-45d6-86ed-fec7446b87f7" alt="gif-1"></td>
    <td align=center><img src="https://github.com/huggingface/diffusers/assets/72266394/eb7d2952-72e4-44fa-b664-077c79b4fc70" alt="gif-2"></td>
  </tr>
</table>
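
The example above assumes 16 OpenPose conditioning images (`frame_1.png` … `frame_16.png`) already exist on disk. As a rough sketch of producing them (the `controlnet_aux` package and the `source_*.png` filenames are assumptions, not part of the original example):

```py
# Derive OpenPose conditioning frames from 16 source images (hypothetical filenames).
from controlnet_aux import OpenposeDetector
from PIL import Image

open_pose = OpenposeDetector.from_pretrained("lllyasviel/Annotators")

for i in range(1, 16 + 1):
    source = Image.open(f"source_{i}.png").convert("RGB")
    pose = open_pose(source)  # returns a PIL image with the detected pose skeleton
    pose.save(f"frame_{i}.png")
```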
### DemoFusion
This pipeline is the official implementation of [DemoFusion: Democratising High-Resolution Image Generation With No $$$](https://arxiv.org/abs/2311.16973).
The original repo can be found at [repo](https://github.com/PRIS-CV/DemoFusion).
