
Commit 403a2c2

Merge branch 'main' into jax

2 parents: a79f961 + db969cc

87 files changed (+1482, -1121 lines)


.github/workflows/push_tests.yml

Lines changed: 13 additions & 2 deletions
```diff
@@ -60,7 +60,7 @@ jobs:
     runs-on: [single-gpu, nvidia-gpu, t4, ci]
     container:
       image: diffusers/diffusers-pytorch-cuda
-      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0
+      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --gpus 0 --privileged
     steps:
       - name: Checkout diffusers
         uses: actions/checkout@v3
@@ -69,6 +69,12 @@ jobs:
       - name: NVIDIA-SMI
         run: |
           nvidia-smi
+      - name: Tailscale
+        uses: huggingface/tailscale-action@v1
+        with:
+          authkey: ${{ secrets.TAILSCALE_SSH_AUTHKEY }}
+          slackChannel: ${{ secrets.SLACK_CIFEEDBACK_CHANNEL }}
+          slackToken: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}
       - name: Install dependencies
         run: |
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
@@ -87,6 +93,11 @@ jobs:
            -s -v -k "not Flax and not Onnx" \
            --make-reports=tests_pipeline_${{ matrix.module }}_cuda \
            tests/pipelines/${{ matrix.module }}
+      - name: Tailscale Wait
+        if: ${{ failure() || runner.debug == '1' }}
+        uses: huggingface/tailscale-action@v1
+        with:
+          waitForSSH: true
       - name: Failure short reports
         if: ${{ failure() }}
         run: |
@@ -425,4 +436,4 @@ jobs:
       uses: actions/upload-artifact@v2
       with:
         name: examples_test_reports
-        path: reports
+        path: reports
```

docs/source/en/_toctree.yml

Lines changed: 3 additions & 5 deletions
```diff
@@ -24,14 +24,12 @@
   title: Tutorials
 - sections:
   - sections:
-    - local: using-diffusers/loading_overview
-      title: Overview
     - local: using-diffusers/loading
-      title: Load pipelines, models, and schedulers
-    - local: using-diffusers/schedulers
-      title: Load and compare different schedulers
+      title: Load pipelines
     - local: using-diffusers/custom_pipeline_overview
       title: Load community pipelines and components
+    - local: using-diffusers/schedulers
+      title: Load schedulers and models
     - local: using-diffusers/using_safetensors
       title: Load safetensors
     - local: using-diffusers/other-formats
```

docs/source/en/using-diffusers/custom_pipeline_overview.md

Lines changed: 77 additions & 36 deletions
````diff
@@ -16,17 +16,19 @@ specific language governing permissions and limitations under the License.
 
 ## Community pipelines
 
-Community pipelines are any [`DiffusionPipeline`] class that are different from the original implementation as specified in their paper (for example, the [`StableDiffusionControlNetPipeline`] corresponds to the [Text-to-Image Generation with ControlNet Conditioning](https://arxiv.org/abs/2302.05543) paper). They provide additional functionality or extend the original implementation of a pipeline.
+Community pipelines are any [`DiffusionPipeline`] class that are different from the original paper implementation (for example, the [`StableDiffusionControlNetPipeline`] corresponds to the [Text-to-Image Generation with ControlNet Conditioning](https://arxiv.org/abs/2302.05543) paper). They provide additional functionality or extend the original implementation of a pipeline.
 
-There are many cool community pipelines like [Speech to Image](https://github.com/huggingface/diffusers/tree/main/examples/community#speech-to-image) or [Composable Stable Diffusion](https://github.com/huggingface/diffusers/tree/main/examples/community#composable-stable-diffusion), and you can find all the official community pipelines [here](https://github.com/huggingface/diffusers/tree/main/examples/community).
+There are many cool community pipelines like [Marigold Depth Estimation](https://github.com/huggingface/diffusers/tree/main/examples/community#marigold-depth-estimation) or [InstantID](https://github.com/huggingface/diffusers/tree/main/examples/community#instantid-pipeline), and you can find all the official community pipelines [here](https://github.com/huggingface/diffusers/tree/main/examples/community).
 
-To load any community pipeline on the Hub, pass the repository id of the community pipeline to the `custom_pipeline` argument and the model repository where you'd like to load the pipeline weights and components from. For example, the example below loads a dummy pipeline from [`hf-internal-testing/diffusers-dummy-pipeline`](https://huggingface.co/hf-internal-testing/diffusers-dummy-pipeline/blob/main/pipeline.py) and the pipeline weights and components from [`google/ddpm-cifar10-32`](https://huggingface.co/google/ddpm-cifar10-32):
+There are two types of community pipelines: those stored on the Hugging Face Hub and those stored in the Diffusers GitHub repository. Hub pipelines are completely customizable (scheduler, models, pipeline code, etc.) while Diffusers GitHub pipelines are limited to custom pipeline code. Refer to this [table](./contribute_pipeline#share-your-pipeline) for a more detailed comparison of Hub vs GitHub community pipelines.
 
-<Tip warning={true}>
+<hfoptions id="community">
+<hfoption id="Hub pipelines">
 
-🔒 By loading a community pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically!
+To load a Hugging Face Hub community pipeline, pass the repository id of the community pipeline to the `custom_pipeline` argument and the model repository where you'd like to load the pipeline weights and components from. For example, the example below loads a dummy pipeline from [hf-internal-testing/diffusers-dummy-pipeline](https://huggingface.co/hf-internal-testing/diffusers-dummy-pipeline/blob/main/pipeline.py) and the pipeline weights and components from [google/ddpm-cifar10-32](https://huggingface.co/google/ddpm-cifar10-32):
 
-</Tip>
+> [!WARNING]
+> By loading a community pipeline from the Hugging Face Hub, you are trusting that the code you are loading is safe. Make sure to inspect the code online before loading and running it automatically!
 
 ```py
 from diffusers import DiffusionPipeline
@@ -36,7 +38,10 @@ pipeline = DiffusionPipeline.from_pretrained(
 )
 ```
 
-Loading an official community pipeline is similar, but you can mix loading weights from an official repository id and pass pipeline components directly. The example below loads the community [CLIP Guided Stable Diffusion](https://github.com/huggingface/diffusers/tree/main/examples/community#clip-guided-stable-diffusion) pipeline, and you can pass the CLIP model components directly to it:
+</hfoption>
+<hfoption id="GitHub pipelines">
+
+To load a GitHub community pipeline, pass the repository id of the community pipeline to the `custom_pipeline` argument and the model repository where you'd like to load the pipeline weights and components from. You can also load model components directly. The example below loads the community [CLIP Guided Stable Diffusion](https://github.com/huggingface/diffusers/tree/main/examples/community#clip-guided-stable-diffusion) pipeline and the CLIP model components.
 
 ```py
 from diffusers import DiffusionPipeline
````
````diff
@@ -56,9 +61,12 @@ pipeline = DiffusionPipeline.from_pretrained(
 )
 ```
 
+</hfoption>
+</hfoptions>
+
 ### Load from a local file
 
-Community pipelines can also be loaded from a local file if you pass a file path instead. The path to the passed directory must contain a `pipeline.py` file that contains the pipeline class in order to successfully load it.
+Community pipelines can also be loaded from a local file if you pass a file path instead. The path to the passed directory must contain a pipeline.py file that contains the pipeline class.
 
 ```py
 pipeline = DiffusionPipeline.from_pretrained(
@@ -77,7 +85,7 @@ By default, community pipelines are loaded from the latest stable version of Dif
 <hfoptions id="version">
 <hfoption id="main">
 
-For example, to load from the `main` branch:
+For example, to load from the main branch:
 
 ```py
 pipeline = DiffusionPipeline.from_pretrained(
@@ -93,7 +101,7 @@ pipeline = DiffusionPipeline.from_pretrained(
 </hfoption>
 <hfoption id="older version">
 
-For example, to load from a previous version of Diffusers like `v0.25.0`:
+For example, to load from a previous version of Diffusers like v0.25.0:
 
 ```py
 pipeline = DiffusionPipeline.from_pretrained(
````
````diff
@@ -109,16 +117,57 @@ pipeline = DiffusionPipeline.from_pretrained(
 </hfoption>
 </hfoptions>
 
+### Load with from_pipe
 
-For more information about community pipelines, take a look at the [Community pipelines](custom_pipeline_examples) guide for how to use them and if you're interested in adding a community pipeline check out the [How to contribute a community pipeline](contribute_pipeline) guide!
+Community pipelines can also be loaded with the [`~DiffusionPipeline.from_pipe`] method which allows you to load and reuse multiple pipelines without any additional memory overhead (learn more in the [Reuse a pipeline](./loading#reuse-a-pipeline) guide). The memory requirement is determined by the largest single pipeline loaded.
+
+For example, let's load a community pipeline that supports [long prompts with weighting](https://github.com/huggingface/diffusers/tree/main/examples/community#long-prompt-weighting-stable-diffusion) from a Stable Diffusion pipeline.
+
+```py
+import torch
+from diffusers import DiffusionPipeline
+
+pipe_sd = DiffusionPipeline.from_pretrained("emilianJR/CyberRealistic_V3", torch_dtype=torch.float16)
+pipe_sd.to("cuda")
+# load long prompt weighting pipeline
+pipe_lpw = DiffusionPipeline.from_pipe(
+    pipe_sd,
+    custom_pipeline="lpw_stable_diffusion",
+).to("cuda")
+
+prompt = "cat, hiding in the leaves, ((rain)), zazie rainyday, beautiful eyes, macro shot, colorful details, natural lighting, amazing composition, subsurface scattering, amazing textures, filmic, soft light, ultra-detailed eyes, intricate details, detailed texture, light source contrast, dramatic shadows, cinematic light, depth of field, film grain, noise, dark background, hyperrealistic dslr film still, dim volumetric cinematic lighting"
+neg_prompt = "(deformed iris, deformed pupils, semi-realistic, cgi, 3d, render, sketch, cartoon, drawing, anime, mutated hands and fingers:1.4), (deformed, distorted, disfigured:1.3), poorly drawn, bad anatomy, wrong anatomy, extra limb, missing limb, floating limbs, disconnected limbs, mutation, mutated, ugly, disgusting, amputation"
+generator = torch.Generator(device="cpu").manual_seed(20)
+out_lpw = pipe_lpw(
+    prompt,
+    negative_prompt=neg_prompt,
+    width=512,
+    height=512,
+    max_embeddings_multiples=3,
+    num_inference_steps=50,
+    generator=generator,
+).images[0]
+out_lpw
+```
+
+<div class="flex gap-4">
+  <div>
+    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/from_pipe_lpw.png" />
+    <figcaption class="mt-2 text-center text-sm text-gray-500">Stable Diffusion with long prompt weighting</figcaption>
+  </div>
+  <div>
+    <img class="rounded-xl" src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/from_pipe_non_lpw.png" />
+    <figcaption class="mt-2 text-center text-sm text-gray-500">Stable Diffusion</figcaption>
+  </div>
+</div>
 
 ## Community components
 
 Community components allow users to build pipelines that may have customized components that are not a part of Diffusers. If your pipeline has custom components that Diffusers doesn't already support, you need to provide their implementations as Python modules. These customized components could be a VAE, UNet, and scheduler. In most cases, the text encoder is imported from the Transformers library. The pipeline code itself can also be customized.
 
 This section shows how users should use community components to build a community pipeline.
 
-You'll use the [showlab/show-1-base](https://huggingface.co/showlab/show-1-base) pipeline checkpoint as an example. So, let's start loading the components:
+You'll use the [showlab/show-1-base](https://huggingface.co/showlab/show-1-base) pipeline checkpoint as an example.
 
 1. Import and load the text encoder from Transformers:
````

````diff
@@ -152,17 +201,17 @@ In steps 4 and 5, the custom [UNet](https://github.com/showlab/Show-1/blob/main/
 
 </Tip>
 
-4. Now you'll load a [custom UNet](https://github.com/showlab/Show-1/blob/main/showone/models/unet_3d_condition.py), which in this example, has already been implemented in the `showone_unet_3d_condition.py` [script](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/unet/showone_unet_3d_condition.py) for your convenience. You'll notice the `UNet3DConditionModel` class name is changed to `ShowOneUNet3DConditionModel` because [`UNet3DConditionModel`] already exists in Diffusers. Any components needed for the `ShowOneUNet3DConditionModel` class should be placed in the `showone_unet_3d_condition.py` script.
+4. Now you'll load a [custom UNet](https://github.com/showlab/Show-1/blob/main/showone/models/unet_3d_condition.py), which in this example, has already been implemented in [showone_unet_3d_condition.py](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/unet/showone_unet_3d_condition.py) for your convenience. You'll notice the [`UNet3DConditionModel`] class name is changed to `ShowOneUNet3DConditionModel` because [`UNet3DConditionModel`] already exists in Diffusers. Any components needed for the `ShowOneUNet3DConditionModel` class should be placed in showone_unet_3d_condition.py.
 
-    Once this is done, you can initialize the UNet:
+   Once this is done, you can initialize the UNet:
 
-    ```python
-    from showone_unet_3d_condition import ShowOneUNet3DConditionModel
+   ```python
+   from showone_unet_3d_condition import ShowOneUNet3DConditionModel
 
-    unet = ShowOneUNet3DConditionModel.from_pretrained(pipe_id, subfolder="unet")
-    ```
+   unet = ShowOneUNet3DConditionModel.from_pretrained(pipe_id, subfolder="unet")
+   ```
 
-5. Finally, you'll load the custom pipeline code. For this example, it has already been created for you in the `pipeline_t2v_base_pixel.py` [script](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/pipeline_t2v_base_pixel.py). This script contains a custom `TextToVideoIFPipeline` class for generating videos from text. Just like the custom UNet, any code needed for the custom pipeline to work should go in the `pipeline_t2v_base_pixel.py` script.
+5. Finally, you'll load the custom pipeline code. For this example, it has already been created for you in [pipeline_t2v_base_pixel.py](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/pipeline_t2v_base_pixel.py). This script contains a custom `TextToVideoIFPipeline` class for generating videos from text. Just like the custom UNet, any code needed for the custom pipeline to work should go in pipeline_t2v_base_pixel.py.
 
    Once everything is in place, you can initialize the `TextToVideoIFPipeline` with the `ShowOneUNet3DConditionModel`:
 
````
@@ -187,13 +236,16 @@ Push the pipeline to the Hub to share with the community!
187236
pipeline.push_to_hub("custom-t2v-pipeline")
188237
```
189238

190-
After the pipeline is successfully pushed, you need a couple of changes:
239+
After the pipeline is successfully pushed, you need to make a few changes:
191240

192-
1. Change the `_class_name` attribute in [`model_index.json`](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/model_index.json#L2) to `"pipeline_t2v_base_pixel"` and `"TextToVideoIFPipeline"`.
193-
2. Upload `showone_unet_3d_condition.py` to the `unet` [directory](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/unet/showone_unet_3d_condition.py).
194-
3. Upload `pipeline_t2v_base_pixel.py` to the pipeline base [directory](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/unet/showone_unet_3d_condition.py).
241+
1. Change the `_class_name` attribute in [model_index.json](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/model_index.json#L2) to `"pipeline_t2v_base_pixel"` and `"TextToVideoIFPipeline"`.
242+
2. Upload `showone_unet_3d_condition.py` to the [unet](https://huggingface.co/sayakpaul/show-1-base-with-code/blob/main/unet/showone_unet_3d_condition.py) subfolder.
243+
3. Upload `pipeline_t2v_base_pixel.py` to the pipeline [repository](https://huggingface.co/sayakpaul/show-1-base-with-code/tree/main).
195244

196-
To run inference, simply add the `trust_remote_code` argument while initializing the pipeline to handle all the "magic" behind the scenes.
245+
To run inference, add the `trust_remote_code` argument while initializing the pipeline to handle all the "magic" behind the scenes.
246+
247+
> [!WARNING]
248+
> As an additional precaution with `trust_remote_code=True`, we strongly encourage you to pass a commit hash to the `revision` parameter in [`~DiffusionPipeline.from_pretrained`] to make sure the code hasn't been updated with some malicious new lines of code (unless you fully trust the model owners).
197249
198250
```python
199251
from diffusers import DiffusionPipeline
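The commit-pinning advice in this hunk can be sketched as a small helper. `load_pinned_pipeline` is a hypothetical name introduced here for illustration, not a diffusers API; `trust_remote_code` and `revision` are real `from_pretrained` parameters:

```python
def load_pinned_pipeline(repo_id: str, commit_hash: str):
    """Load a remote-code pipeline pinned to an exact commit.

    Passing a commit hash as `revision` ensures that later edits to the
    repository's custom code (possibly malicious) are never pulled in.
    """
    from diffusers import DiffusionPipeline  # lazy import keeps the sketch light

    return DiffusionPipeline.from_pretrained(
        repo_id,
        trust_remote_code=True,
        revision=commit_hash,  # a full git commit SHA, not a branch name
    )
```

Call it with the repository id and a commit hash copied from the repo's commit history.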
````diff
@@ -221,25 +273,14 @@ video_frames = pipeline(
 ).frames
 ```
 
-As an additional reference example, you can refer to the repository structure of [stabilityai/japanese-stable-diffusion-xl](https://huggingface.co/stabilityai/japanese-stable-diffusion-xl/), that makes use of the `trust_remote_code` feature:
+As an additional reference, take a look at the repository structure of [stabilityai/japanese-stable-diffusion-xl](https://huggingface.co/stabilityai/japanese-stable-diffusion-xl/) which also uses the `trust_remote_code` feature.
 
 ```python
-
 from diffusers import DiffusionPipeline
 import torch
 
 pipeline = DiffusionPipeline.from_pretrained(
     "stabilityai/japanese-stable-diffusion-xl", trust_remote_code=True
 )
 pipeline.to("cuda")
-
-# if using torch < 2.0
-# pipeline.enable_xformers_memory_efficient_attention()
-
-prompt = "柴犬、カラフルアート"
-
-image = pipeline(prompt=prompt).images[0]
 ```
-
-> [!TIP]
-> When using `trust_remote_code=True`, it is also strongly encouraged to pass a commit hash as a `revision` to make sure the author of the models did not update the code with some malicious new lines (unless you fully trust the authors of the models).
````

0 commit comments
