| Latent Consistency Interpolation Pipeline | Interpolate the latent space of Latent Consistency Models with multiple prompts | [Latent Consistency Interpolation Pipeline](#latent-consistency-interpolation-pipeline) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1pK3NrLWJSiJsBynLns1K1-IDTW9zbPvl?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) |
| Regional Prompting Pipeline | Assign multiple prompts for different regions | [Regional Prompting Pipeline](#regional-prompting-pipeline) | - | [hako-mikan](https://github.com/hako-mikan) |
| LDM3D-sr (LDM3D upscaler) | Upscale low-resolution RGB and depth inputs to high resolution | [StableDiffusionUpscaleLDM3D Pipeline](https://github.com/estelleafl/diffusers/tree/ldm3d_upscaler_community/examples/community#stablediffusionupscaleldm3d-pipeline) | - | [Estelle Aflalo](https://github.com/estelleafl) |
| AnimateDiff ControlNet Pipeline | Combines AnimateDiff with precise motion control using ControlNets | [AnimateDiff ControlNet Pipeline](#animatediff-controlnet-pipeline) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1SKboYeGjEQmQPWoFC0aLYpBlYdHXkvAu?usp=sharing) | [Aryan V S](https://github.com/a-r-r-o-w) and [Edoardo Botta](https://github.com/EdoardoBotta) |
| DemoFusion Pipeline | Implementation of [DemoFusion: Democratising High-Resolution Image Generation With No $$$](https://arxiv.org/abs/2311.16973) | [DemoFusion](#demofusion) | - | [Ruoyi Du](https://github.com/RuoyiDu) |
To load a custom pipeline, pass the `custom_pipeline` argument to `DiffusionPipeline.from_pretrained`, naming one of the files in `diffusers/examples/community`. Feel free to send a PR with your own pipelines; we will merge them quickly.
### AnimateDiff ControlNet Pipeline
This pipeline combines AnimateDiff and ControlNet. Enjoy precise motion control for your videos! Refer to [this](https://github.com/huggingface/diffusers/issues/5866) issue for more details.
```py
import torch
from diffusers import AutoencoderKL, ControlNetModel, MotionAdapter
from diffusers.pipelines import DiffusionPipeline
from diffusers.schedulers import DPMSolverMultistepScheduler
```
### DemoFusion

This pipeline is the official implementation of [DemoFusion: Democratising High-Resolution Image Generation With No $$$](https://arxiv.org/abs/2311.16973).
The original repo can be found [here](https://github.com/PRIS-CV/DemoFusion).