Commit 9a23c53

Merge branch 'main' into ays

2 parents 967ee64 + 5915c29

50 files changed: +1154 additions, -578 deletions


.github/workflows/nightly_tests.yml

Lines changed: 22 additions & 22 deletions
Note: the -/+ pairs below with identical visible text are trailing-whitespace cleanups.

```diff
@@ -19,7 +19,7 @@ env:
 jobs:
   setup_torch_cuda_pipeline_matrix:
     name: Setup Torch Pipelines Matrix
-    runs-on: ubuntu-latest
+    runs-on: diffusers/diffusers-pytorch-cpu
     outputs:
       pipeline_test_matrix: ${{ steps.fetch_pipeline_matrix.outputs.pipeline_test_matrix }}
     steps:
@@ -67,19 +67,19 @@ jobs:
           fetch-depth: 2
       - name: NVIDIA-SMI
         run: nvidia-smi
-
+
       - name: Install dependencies
         run: |
           python -m venv /opt/venv && export PATH="/opt/venv/bin:$PATH"
           python -m uv pip install -e [quality,test]
           python -m uv pip install accelerate@git+https://github.com/huggingface/accelerate.git
           python -m uv pip install pytest-reportlog
-
+
       - name: Environment
         run: |
           python utils/print_env.py
-
-      - name: Nightly PyTorch CUDA checkpoint (pipelines) tests
+
+      - name: Nightly PyTorch CUDA checkpoint (pipelines) tests
         env:
           HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
           # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
@@ -88,9 +88,9 @@
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
             -s -v -k "not Flax and not Onnx" \
             --make-reports=tests_pipeline_${{ matrix.module }}_cuda \
-            --report-log=tests_pipeline_${{ matrix.module }}_cuda.log \
+            --report-log=tests_pipeline_${{ matrix.module }}_cuda.log \
             tests/pipelines/${{ matrix.module }}
-
+
       - name: Failure short reports
         if: ${{ failure() }}
         run: |
@@ -103,7 +103,7 @@
         with:
           name: pipeline_${{ matrix.module }}_test_reports
           path: reports
-
+
       - name: Generate Report and Notify Channel
         if: always()
         run: |
@@ -139,7 +139,7 @@
         run: python utils/print_env.py
 
       - name: Run nightly PyTorch CUDA tests for non-pipeline modules
-        if: ${{ matrix.module != 'examples'}}
+        if: ${{ matrix.module != 'examples'}}
         env:
           HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
           # https://pytorch.org/docs/stable/notes/randomness.html#avoiding-nondeterministic-algorithms
@@ -148,7 +148,7 @@
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
             -s -v -k "not Flax and not Onnx" \
             --make-reports=tests_torch_${{ matrix.module }}_cuda \
-            --report-log=tests_torch_${{ matrix.module }}_cuda.log \
+            --report-log=tests_torch_${{ matrix.module }}_cuda.log \
             tests/${{ matrix.module }}
 
       - name: Run nightly example tests with Torch
@@ -161,13 +161,13 @@
           python -m uv pip install peft@git+https://github.com/huggingface/peft.git
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
             -s -v --make-reports=examples_torch_cuda \
-            --report-log=examples_torch_cuda.log \
+            --report-log=examples_torch_cuda.log \
             examples/
 
       - name: Failure short reports
         if: ${{ failure() }}
         run: |
-          cat reports/tests_torch_${{ matrix.module }}_cuda_stats.txt
+          cat reports/tests_torch_${{ matrix.module }}_cuda_stats.txt
           cat reports/tests_torch_${{ matrix.module }}_cuda_failures_short.txt
 
       - name: Test suite reports artifacts
@@ -218,13 +218,13 @@
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
             -s -v -k "not Flax and not Onnx" \
             --make-reports=tests_torch_lora_cuda \
-            --report-log=tests_torch_lora_cuda.log \
+            --report-log=tests_torch_lora_cuda.log \
             tests/lora
-
+
       - name: Failure short reports
         if: ${{ failure() }}
         run: |
-          cat reports/tests_torch_lora_cuda_stats.txt
+          cat reports/tests_torch_lora_cuda_stats.txt
           cat reports/tests_torch_lora_cuda_failures_short.txt
 
       - name: Test suite reports artifacts
@@ -239,12 +239,12 @@
         run: |
           pip install slack_sdk tabulate
           python scripts/log_reports.py >> $GITHUB_STEP_SUMMARY
-
+
   run_flax_tpu_tests:
     name: Nightly Flax TPU Tests
     runs-on: docker-tpu
    if: github.event_name == 'schedule'
-
+
     container:
       image: diffusers/diffusers-flax-tpu
      options: --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/ --privileged
@@ -274,7 +274,7 @@
           python -m pytest -n 0 \
             -s -v -k "Flax" \
             --make-reports=tests_flax_tpu \
-            --report-log=tests_flax_tpu.log \
+            --report-log=tests_flax_tpu.log \
             tests/
 
       - name: Failure short reports
@@ -302,7 +302,7 @@
     container:
       image: diffusers/diffusers-onnxruntime-cuda
       options: --gpus 0 --shm-size "16gb" --ipc host -v /mnt/hf_cache:/mnt/cache/
-
+
     steps:
       - name: Checkout diffusers
         uses: actions/checkout@v3
@@ -321,15 +321,15 @@
 
       - name: Environment
         run: python utils/print_env.py
-
+
       - name: Run nightly ONNXRuntime CUDA tests
         env:
           HUGGING_FACE_HUB_TOKEN: ${{ secrets.HUGGING_FACE_HUB_TOKEN }}
         run: |
           python -m pytest -n 1 --max-worker-restart=0 --dist=loadfile \
             -s -v -k "Onnx" \
             --make-reports=tests_onnx_cuda \
-            --report-log=tests_onnx_cuda.log \
+            --report-log=tests_onnx_cuda.log \
             tests/
 
       - name: Failure short reports
@@ -344,7 +344,7 @@
         with:
           name: ${{ matrix.config.report }}_test_reports
           path: reports
-
+
       - name: Generate Report and Notify Channel
         if: always()
         run: |
```
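The recurring functional change in this file is the new `--report-log` flag, provided by the `pytest-reportlog` plugin installed in the "Install dependencies" step. Each run now writes a JSON-lines log (one JSON object per test event) that `scripts/log_reports.py` later folds into the Slack summary. For illustration, here is a minimal sketch of consuming such a log, assuming pytest-reportlog's default JSON-lines format; `summarize_report_log` is a hypothetical helper, not the repository's `scripts/log_reports.py`:

```py
import json
from collections import Counter

def summarize_report_log(path: str) -> Counter:
    """Tally test outcomes from a pytest-reportlog JSON-lines file."""
    outcomes = Counter()
    with open(path) as f:
        for line in f:
            entry = json.loads(line)
            # Each test emits setup/call/teardown TestReport entries;
            # keeping only the "call" phase avoids triple-counting a test.
            if entry.get("$report_type") == "TestReport" and entry.get("when") == "call":
                outcomes[entry["outcome"]] += 1
    return outcomes

# e.g. prints Counter({'passed': 412, 'failed': 3})
print(summarize_report_log("tests_torch_lora_cuda.log"))
```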

.github/workflows/push_tests.yml

Lines changed: 1 addition & 1 deletion
```diff
@@ -21,7 +21,7 @@ env:
 jobs:
   setup_torch_cuda_pipeline_matrix:
     name: Setup Torch Pipelines CUDA Slow Tests Matrix
-    runs-on: ubuntu-latest
+    runs-on: diffusers/diffusers-pytorch-cpu
     outputs:
       pipeline_test_matrix: ${{ steps.fetch_pipeline_matrix.outputs.pipeline_test_matrix }}
     steps:
```

.github/workflows/ssh-runner.yml

Lines changed: 46 additions & 0 deletions
New file:

```yaml
name: SSH into runners

on:
  workflow_dispatch:
    inputs:
      runner_type:
        description: 'Type of runner to test (a10 or t4)'
        required: true
      docker_image:
        description: 'Name of the Docker image'
        required: true

env:
  IS_GITHUB_CI: "1"
  HF_HUB_READ_TOKEN: ${{ secrets.HF_HUB_READ_TOKEN }}
  HF_HOME: /mnt/cache
  DIFFUSERS_IS_CI: yes
  OMP_NUM_THREADS: 8
  MKL_NUM_THREADS: 8
  RUN_SLOW: yes

jobs:
  ssh_runner:
    name: "SSH"
    runs-on: [single-gpu, nvidia-gpu, "${{ github.event.inputs.runner_type }}", ci]
    container:
      image: ${{ github.event.inputs.docker_image }}
      options: --gpus all --privileged --ipc host -v /mnt/cache/.cache/huggingface:/mnt/cache/

    steps:
      - name: Checkout diffusers
        uses: actions/checkout@v3
        with:
          fetch-depth: 2

      - name: NVIDIA-SMI
        run: |
          nvidia-smi

      - name: Tailscale # In order to be able to SSH when a test fails
        uses: huggingface/tailscale-action@v1
        with:
          authkey: ${{ secrets.TAILSCALE_SSH_AUTHKEY }}
          slackChannel: ${{ secrets.SLACK_CIFEEDBACK_CHANNEL }}
          slackToken: ${{ secrets.SLACK_CIFEEDBACK_BOT_TOKEN }}
          waitForSSH: true
```
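Because the workflow only triggers on `workflow_dispatch`, it must be started manually with the two inputs. One option is the GitHub REST API's workflow-dispatch endpoint. The sketch below is illustrative only; the ref, token variable, and input values are assumptions, not values fixed by the workflow:

```py
import os
import requests

# Placeholder inputs; runner_type should match a label the runners accept (a10 or t4),
# and the image name here is an assumed example.
payload = {
    "ref": "main",
    "inputs": {
        "runner_type": "a10",
        "docker_image": "diffusers/diffusers-pytorch-cuda",
    },
}

resp = requests.post(
    "https://api.github.com/repos/huggingface/diffusers/actions/workflows/ssh-runner.yml/dispatches",
    headers={
        "Accept": "application/vnd.github+json",
        # Requires a token with permission to dispatch workflows.
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    },
    json=payload,
    timeout=30,
)
resp.raise_for_status()  # GitHub returns 204 No Content on success
```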

docs/source/en/_toctree.yml

Lines changed: 0 additions & 4 deletions
```diff
@@ -87,10 +87,6 @@
     title: Shap-E
   - local: using-diffusers/diffedit
     title: DiffEdit
-  - local: using-diffusers/custom_pipeline_examples
-    title: Community pipelines
-  - local: using-diffusers/contribute_pipeline
-    title: Contribute a community pipeline
   - local: using-diffusers/inference_with_lcm_lora
     title: Latent Consistency Model-LoRA
   - local: using-diffusers/inference_with_lcm
```

docs/source/en/conceptual/contribution.md

Lines changed: 65 additions & 22 deletions
````diff
@@ -198,38 +198,81 @@ Anything displayed on [the official Diffusers doc page](https://huggingface.co/d
 
 Please have a look at [this page](https://github.com/huggingface/diffusers/tree/main/docs) on how to verify changes made to the documentation locally.
 
-
 ### 6. Contribute a community pipeline
 
-[Pipelines](https://huggingface.co/docs/diffusers/api/pipelines/overview) are usually the first point of contact between the Diffusers library and the user.
-Pipelines are examples of how to use Diffusers [models](https://huggingface.co/docs/diffusers/api/models/overview) and [schedulers](https://huggingface.co/docs/diffusers/api/schedulers/overview).
-We support two types of pipelines:
+> [!TIP]
+> Read the [Community pipelines](../using-diffusers/custom_pipeline_overview#community-pipelines) guide to learn more about the difference between a GitHub and Hugging Face Hub community pipeline. If you're interested in why we have community pipelines, take a look at GitHub Issue [#841](https://github.com/huggingface/diffusers/issues/841) (basically, we can't maintain all the possible ways diffusion models can be used for inference but we also don't want to prevent the community from building them).
+
+Contributing a community pipeline is a great way to share your creativity and work with the community. It lets you build on top of the [`DiffusionPipeline`] so that anyone can load and use it by setting the `custom_pipeline` parameter. This section will walk you through how to create a simple pipeline where the UNet only does a single forward pass and calls the scheduler once (a "one-step" pipeline).
+
+1. Create a one_step_unet.py file for your community pipeline. This file can contain whatever package you want to use as long as it's installed by the user. Make sure you only have one pipeline class that inherits from [`DiffusionPipeline`] to load model weights and the scheduler configuration from the Hub. Add a UNet and scheduler to the `__init__` function.
+
+    You should also add the `register_modules` function to ensure your pipeline and its components can be saved with [`~DiffusionPipeline.save_pretrained`].
+
+    ```py
+    from diffusers import DiffusionPipeline
+    import torch
+
+    class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
+        def __init__(self, unet, scheduler):
+            super().__init__()
+
+            self.register_modules(unet=unet, scheduler=scheduler)
+    ```
+
+1. In the forward pass (which we recommend defining as `__call__`), you can add any feature you'd like. For the "one-step" pipeline, create a random image and call the UNet and scheduler once by setting `timestep=1`.
+
+    ```py
+    from diffusers import DiffusionPipeline
+    import torch
+
+    class UnetSchedulerOneForwardPipeline(DiffusionPipeline):
+        def __init__(self, unet, scheduler):
+            super().__init__()
+
+            self.register_modules(unet=unet, scheduler=scheduler)
 
-- Official Pipelines
-- Community Pipelines
+        def __call__(self):
+            image = torch.randn(
+                (1, self.unet.config.in_channels, self.unet.config.sample_size, self.unet.config.sample_size),
+            )
+            timestep = 1
+
+            model_output = self.unet(image, timestep).sample
+            scheduler_output = self.scheduler.step(model_output, timestep, image).prev_sample
+
+            return scheduler_output
+    ```
 
-Both official and community pipelines follow the same design and consist of the same type of components.
+    Now you can run the pipeline by passing a UNet and scheduler to it or load pretrained weights if the pipeline structure is identical.
+
+    ```py
+    from diffusers import DDPMScheduler, UNet2DModel
+
+    scheduler = DDPMScheduler()
+    unet = UNet2DModel()
+
+    pipeline = UnetSchedulerOneForwardPipeline(unet=unet, scheduler=scheduler)
+    output = pipeline()
+    # load pretrained weights
+    pipeline = UnetSchedulerOneForwardPipeline.from_pretrained("google/ddpm-cifar10-32", use_safetensors=True)
+    output = pipeline()
+    ```
 
-Official pipelines are tested and maintained by the core maintainers of Diffusers. Their code
-resides in [src/diffusers/pipelines](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines).
-In contrast, community pipelines are contributed and maintained purely by the **community** and are **not** tested.
-They reside in [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) and while they can be accessed via the [PyPI diffusers package](https://pypi.org/project/diffusers/), their code is not part of the PyPI distribution.
+You can either share your pipeline as a GitHub community pipeline or Hub community pipeline.
 
-The reason for the distinction is that the core maintainers of the Diffusers library cannot maintain and test all
-possible ways diffusion models can be used for inference, but some of them may be of interest to the community.
-Officially released diffusion pipelines,
-such as Stable Diffusion are added to the core src/diffusers/pipelines package which ensures
-high quality of maintenance, no backward-breaking code changes, and testing.
-More bleeding edge pipelines should be added as community pipelines. If usage for a community pipeline is high, the pipeline can be moved to the official pipelines upon request from the community. This is one of the ways we strive to be a community-driven library.
+<hfoptions id="pipeline type">
+<hfoption id="GitHub pipeline">
 
-To add a community pipeline, one should add a <name-of-the-community>.py file to [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) and adapt the [examples/community/README.md](https://github.com/huggingface/diffusers/tree/main/examples/community/README.md) to include an example of the new pipeline.
+Share your GitHub pipeline by opening a pull request on the Diffusers [repository](https://github.com/huggingface/diffusers) and add the one_step_unet.py file to the [examples/community](https://github.com/huggingface/diffusers/tree/main/examples/community) subfolder.
 
-An example can be seen [here](https://github.com/huggingface/diffusers/pull/2400).
+</hfoption>
+<hfoption id="Hub pipeline">
 
-Community pipeline PRs are only checked at a superficial level and ideally they should be maintained by their original authors.
+Share your Hub pipeline by creating a model repository on the Hub and uploading the one_step_unet.py file to it.
 
-Contributing a community pipeline is a great way to understand how Diffusers models and schedulers work. Having contributed a community pipeline is usually the first stepping stone to contributing an official pipeline to the
-core package.
+</hfoption>
+</hfoptions>
 
 ### 7. Contribute to training examples
 
````
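After the pipeline is shared, users load it through the `custom_pipeline` parameter mentioned in the new text. A minimal sketch of both variants, assuming the GitHub file was merged into examples/community as one_step_unet.py; the Hub repository id is a placeholder:

```py
from diffusers import DiffusionPipeline

# GitHub community pipeline: `custom_pipeline` is the file name
# (without .py) under examples/community.
pipeline = DiffusionPipeline.from_pretrained(
    "google/ddpm-cifar10-32", custom_pipeline="one_step_unet"
)
output = pipeline()

# Hub community pipeline: point `custom_pipeline` at the Hub repository
# that hosts the pipeline file ("<your-username>/one-step-unet" is a placeholder).
pipeline = DiffusionPipeline.from_pretrained(
    "google/ddpm-cifar10-32", custom_pipeline="<your-username>/one-step-unet"
)
output = pipeline()
```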
