Commit 540399f

Authored by yiyixuxu, KKIEEK, sunovivid, and sayakpaul

add PAG support (#7944)

* first draft

---------

Co-authored-by: yiyixuxu <yixu310@gmail.com>
Co-authored-by: Junhwa Song <[email protected]>
Co-authored-by: Ahn Donghoon (안동훈 / suno) <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
Co-authored-by: Steven Liu <[email protected]>

1 parent f088027 commit 540399f

42 files changed: +9095 −492 lines

docs/source/en/_toctree.yml

Lines changed: 4 additions & 0 deletions

@@ -81,6 +81,8 @@
     title: Kandinsky
   - local: using-diffusers/ip_adapter
     title: IP-Adapter
+  - local: using-diffusers/pag
+    title: PAG
   - local: using-diffusers/controlnet
     title: ControlNet
   - local: using-diffusers/t2i_adapter
@@ -322,6 +324,8 @@
     title: MultiDiffusion
   - local: api/pipelines/musicldm
     title: MusicLDM
+  - local: api/pipelines/pag
+    title: PAG
   - local: api/pipelines/paint_by_example
     title: Paint by Example
   - local: api/pipelines/pia

docs/source/en/api/pipelines/pag.md

Lines changed: 41 additions & 0 deletions

<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Perturbed-Attention Guidance

[Perturbed-Attention Guidance (PAG)](https://ku-cvlab.github.io/Perturbed-Attention-Guidance/) is a new diffusion sampling guidance method that improves sample quality across both unconditional and conditional settings, without requiring further training or the integration of external modules.

PAG was introduced in [Self-Rectifying Diffusion Sampling with Perturbed-Attention Guidance](https://huggingface.co/papers/2403.17377) by Donghoon Ahn, Hyoungwon Cho, Jaewon Min, Wooseok Jang, Jungwoo Kim, SeonHwa Kim, Hyun Hee Park, Kyong Hwan Jin and Seungryong Kim.

The abstract from the paper is:

*Recent studies have demonstrated that diffusion models are capable of generating high-quality samples, but their quality heavily depends on sampling guidance techniques, such as classifier guidance (CG) and classifier-free guidance (CFG). These techniques are often not applicable in unconditional generation or in various downstream tasks such as image restoration. In this paper, we propose a novel sampling guidance, called Perturbed-Attention Guidance (PAG), which improves diffusion sample quality across both unconditional and conditional settings, achieving this without requiring additional training or the integration of external modules. PAG is designed to progressively enhance the structure of samples throughout the denoising process. It involves generating intermediate samples with degraded structure by substituting selected self-attention maps in diffusion U-Net with an identity matrix, by considering the self-attention mechanisms' ability to capture structural information, and guiding the denoising process away from these degraded samples. In both ADM and Stable Diffusion, PAG surprisingly improves sample quality in conditional and even unconditional scenarios. Moreover, PAG significantly improves the baseline performance in various downstream tasks where existing guidances such as CG or CFG cannot be fully utilized, including ControlNet with empty prompts and image restoration such as inpainting and deblurring.*

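In short, PAG runs a second denoising branch whose selected self-attention maps are replaced by the identity, and pushes the prediction away from that structurally degraded branch. A sketch of the update rule following the paper's formulation (our transcription; \\(s\\) corresponds to `pag_scale` in these pipelines):

$$
\tilde{\epsilon}_\theta(x_t) = \epsilon_\theta(x_t) + s\,\bigl(\epsilon_\theta(x_t) - \hat{\epsilon}_\theta(x_t)\bigr)
$$

where \\(\epsilon_\theta\\) is the usual noise prediction and \\(\hat{\epsilon}_\theta\\) is the prediction computed with perturbed (identity) self-attention.
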
## StableDiffusionXLPAGPipeline
[[autodoc]] StableDiffusionXLPAGPipeline
- all
- __call__

## StableDiffusionXLPAGImg2ImgPipeline
[[autodoc]] StableDiffusionXLPAGImg2ImgPipeline
- all
- __call__

## StableDiffusionXLPAGInpaintPipeline
[[autodoc]] StableDiffusionXLPAGInpaintPipeline
- all
- __call__

## StableDiffusionXLControlNetPAGPipeline
[[autodoc]] StableDiffusionXLControlNetPAGPipeline
- all
- __call__

docs/source/en/using-diffusers/pag.md

Lines changed: 295 additions & 0 deletions

<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Perturbed-Attention Guidance

[Perturbed-Attention Guidance (PAG)](https://ku-cvlab.github.io/Perturbed-Attention-Guidance/) is a new diffusion sampling guidance method that improves sample quality across both unconditional and conditional settings, without requiring further training or the integration of external modules. PAG progressively enhances the structure of synthesized samples throughout the denoising process by exploiting the self-attention mechanism's ability to capture structural information. It generates intermediate samples with degraded structure by substituting selected self-attention maps in the diffusion U-Net with an identity matrix, and guides the denoising process away from these degraded samples.

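To make the perturbation concrete, here is a minimal sketch (ours, not the actual diffusers attention processor) of what PAG computes in the selected self-attention layers and how the degraded prediction enters the final guidance combination; all variable names are illustrative:

```py
import torch
import torch.nn.functional as F

def self_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # standard scaled dot-product self-attention: softmax(QK^T / sqrt(d)) @ V
    return F.scaled_dot_product_attention(q, k, v)

def perturbed_self_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
    # PAG's perturbation: the attention map is replaced with an identity
    # matrix, so the output collapses to V, i.e. each token attends only to
    # itself and the structural information carried by attention is lost
    return v

# the perturbed branch yields a structurally degraded noise prediction, and
# the final prediction is guided away from it (illustrative combination with
# classifier-free guidance):
# noise_pred = (
#     noise_uncond
#     + guidance_scale * (noise_cond - noise_uncond)
#     + pag_scale * (noise_cond - noise_perturbed)
# )
```
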
This guide will show you how to use PAG for various tasks and use cases.

## General tasks

You can apply PAG to the [`StableDiffusionXLPipeline`] for tasks such as text-to-image, image-to-image, and inpainting. To enable PAG for a specific task, load the pipeline using the [AutoPipeline](../api/pipelines/auto_pipeline) API with the `enable_pag=True` flag and the `pag_applied_layers` argument.

> [!TIP]
> 🤗 Diffusers currently only supports using PAG with selected SDXL pipelines, but feel free to open a [feature request](https://github.com/huggingface/diffusers/issues/new/choose) if you want to add PAG support to a new pipeline!

<hfoptions id="tasks">
<hfoption id="Text-to-image">

```py
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,
    pag_applied_layers=["mid"],
    torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
```

> [!TIP]
> The `pag_applied_layers` argument allows you to specify which layers PAG is applied to. Additionally, you can use the `set_pag_applied_layers` method to update these layers after the pipeline has been created. Check out the [pag_applied_layers](#pag_applied_layers) section to learn more about applying PAG to other layers.

To generate an image, you also need to pass a `pag_scale`. As `pag_scale` increases, images gain more semantically coherent structure and exhibit fewer artifacts. However, an overly large guidance scale can lead to smoother textures and slight saturation, similar to CFG. `pag_scale=3.0` is used in the official demo and works well in most use cases, but feel free to experiment and pick the value that fits your needs! PAG is disabled when `pag_scale=0`.

```py
prompt = "an insect robot preparing a delicious meal, anime style"

for pag_scale in [0.0, 3.0]:
    generator = torch.Generator(device="cpu").manual_seed(0)
    images = pipeline(
        prompt=prompt,
        num_inference_steps=25,
        guidance_scale=7.0,
        generator=generator,
        pag_scale=pag_scale,
    ).images
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_0.0_cfg_7.0_mid.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image without PAG</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_3.0_cfg_7.0_mid.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image with PAG</figcaption>
  </div>
</div>

</hfoption>
<hfoption id="Image-to-image">

Similarly, you can use PAG with image-to-image pipelines.

```py
from diffusers import AutoPipelineForImage2Image
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForImage2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,
    pag_applied_layers=["mid"],
    torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()

pag_scale = 4.0
guidance_scale = 7.0

url = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/sdxl-text2img.png"
init_image = load_image(url)
prompt = "a dog catching a frisbee in the jungle"

generator = torch.Generator(device="cpu").manual_seed(0)
image = pipeline(
    prompt,
    image=init_image,
    strength=0.8,
    guidance_scale=guidance_scale,
    pag_scale=pag_scale,
    generator=generator,
).images[0]
```

</hfoption>
<hfoption id="Inpainting">

```py
from diffusers import AutoPipelineForInpainting
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    enable_pag=True,
    torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()

img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
init_image = load_image(img_url).convert("RGB")
mask_image = load_image(mask_url).convert("RGB")

prompt = "A majestic tiger sitting on a bench"

pag_scale = 3.0
guidance_scale = 7.5

generator = torch.Generator(device="cpu").manual_seed(1)
images = pipeline(
    prompt=prompt,
    image=init_image,
    mask_image=mask_image,
    strength=0.8,
    num_inference_steps=50,
    guidance_scale=guidance_scale,
    generator=generator,
    pag_scale=pag_scale,
).images
images[0]
```
</hfoption>
</hfoptions>

## PAG with ControlNet

To use PAG with ControlNet, first create a `controlnet`. Then, pass the `controlnet` and the other PAG arguments to the `from_pretrained` method of the AutoPipeline for your task.

```py
from diffusers import AutoPipelineForText2Image, ControlNetModel
import torch

controlnet = ControlNetModel.from_pretrained(
    "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
)

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    controlnet=controlnet,
    enable_pag=True,
    pag_applied_layers="mid",
    torch_dtype=torch.float16
)
pipeline.enable_model_cpu_offload()
```

You can use the pipeline in the same way you normally use ControlNet pipelines, with the added option of specifying a `pag_scale` parameter. Note that PAG works well for unconditional generation; in this example, we generate an image without a prompt.

```py
from diffusers.utils import load_image

canny_image = load_image(
    "https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_control_input.png"
)

# the original snippet leaves this undefined; 0.5 is an assumed value
controlnet_conditioning_scale = 0.5

for pag_scale in [0.0, 3.0]:
    generator = torch.Generator(device="cpu").manual_seed(1)
    images = pipeline(
        prompt="",
        controlnet_conditioning_scale=controlnet_conditioning_scale,
        image=canny_image,
        num_inference_steps=50,
        guidance_scale=0,
        generator=generator,
        pag_scale=pag_scale,
    ).images
images[0]
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_0.0_controlnet.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image without PAG</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_3.0_controlnet.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image with PAG</figcaption>
  </div>
</div>

## PAG with IP-Adapter

[IP-Adapter](https://hf.co/papers/2308.06721) is a popular model that can be plugged into diffusion models to enable image prompting without any changes to the underlying model. You can enable PAG on a pipeline with IP-Adapter loaded.

```py
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
from transformers import CLIPVisionModelWithProjection
import torch

image_encoder = CLIPVisionModelWithProjection.from_pretrained(
    "h94/IP-Adapter",
    subfolder="models/image_encoder",
    torch_dtype=torch.float16
)

pipeline = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    image_encoder=image_encoder,
    enable_pag=True,
    torch_dtype=torch.float16
).to("cuda")

pipeline.load_ip_adapter("h94/IP-Adapter", subfolder="sdxl_models", weight_name="ip-adapter-plus_sdxl_vit-h.bin")

pag_scale = 5.0
ip_adapter_scale = 0.8

image = load_image("https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/ip_adapter_diner.png")

pipeline.set_ip_adapter_scale(ip_adapter_scale)
generator = torch.Generator(device="cpu").manual_seed(0)
images = pipeline(
    prompt="a polar bear sitting in a chair drinking a milkshake",
    ip_adapter_image=image,
    negative_prompt="deformed, ugly, wrong proportion, low res, bad anatomy, worst quality, low quality",
    num_inference_steps=25,
    guidance_scale=3.0,
    generator=generator,
    pag_scale=pag_scale,
).images
images[0]
```

PAG reduces artifacts and improves the overall composition.

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_0.0_ipa_0.8.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image without PAG</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_5.0_ipa_0.8.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">generated image with PAG</figcaption>
  </div>
</div>

## Configure parameters

### pag_applied_layers

The `pag_applied_layers` argument allows you to specify which layers PAG is applied to. By default, it applies only to the mid blocks, and changing this setting will significantly impact the output. You can use the `set_pag_applied_layers` method to adjust the PAG layers after the pipeline is created, which helps you find the optimal layers for your model.

As an example, here are the images generated with `pag_layers = ["down.block_2"]` and `pag_layers = ["down.block_2", "up.block_1.attentions_0"]`.

```py
prompt = "an insect robot preparing a delicious meal, anime style"

# pag_layers, guidance_scale, and pag_scale are left undefined in the
# original snippet; the values below match the images shown underneath
pag_layers = ["down.block_2"]  # or ["down.block_2", "up.block_1.attentions_0"]
guidance_scale = 7.0
pag_scale = 3.0

pipeline.set_pag_applied_layers(pag_layers)
generator = torch.Generator(device="cpu").manual_seed(0)
images = pipeline(
    prompt=prompt,
    num_inference_steps=25,
    guidance_scale=guidance_scale,
    generator=generator,
    pag_scale=pag_scale,
).images
images[0]
```

<div class="flex flex-row gap-4">
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_3.0_cfg_7.0_down2_up1a0.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">down.block_2 + up.block_1.attentions_0</figcaption>
  </div>
  <div class="flex-1">
    <img class="rounded-xl" src="https://huggingface.co/datasets/YiYiXu/testing-images/resolve/main/pag_3.0_cfg_7.0_down2.png"/>
    <figcaption class="mt-2 text-center text-sm text-gray-500">down.block_2</figcaption>
  </div>
</div>

src/diffusers/__init__.py

Lines changed: 8 additions & 0 deletions

@@ -311,11 +311,15 @@
         "StableDiffusionXLAdapterPipeline",
         "StableDiffusionXLControlNetImg2ImgPipeline",
         "StableDiffusionXLControlNetInpaintPipeline",
+        "StableDiffusionXLControlNetPAGPipeline",
         "StableDiffusionXLControlNetPipeline",
         "StableDiffusionXLControlNetXSPipeline",
         "StableDiffusionXLImg2ImgPipeline",
         "StableDiffusionXLInpaintPipeline",
         "StableDiffusionXLInstructPix2PixPipeline",
+        "StableDiffusionXLPAGImg2ImgPipeline",
+        "StableDiffusionXLPAGInpaintPipeline",
+        "StableDiffusionXLPAGPipeline",
         "StableDiffusionXLPipeline",
         "StableUnCLIPImg2ImgPipeline",
         "StableUnCLIPPipeline",
@@ -702,11 +706,15 @@
         StableDiffusionXLAdapterPipeline,
         StableDiffusionXLControlNetImg2ImgPipeline,
         StableDiffusionXLControlNetInpaintPipeline,
+        StableDiffusionXLControlNetPAGPipeline,
         StableDiffusionXLControlNetPipeline,
         StableDiffusionXLControlNetXSPipeline,
         StableDiffusionXLImg2ImgPipeline,
         StableDiffusionXLInpaintPipeline,
         StableDiffusionXLInstructPix2PixPipeline,
+        StableDiffusionXLPAGImg2ImgPipeline,
+        StableDiffusionXLPAGInpaintPipeline,
+        StableDiffusionXLPAGPipeline,
         StableDiffusionXLPipeline,
         StableUnCLIPImg2ImgPipeline,
         StableUnCLIPPipeline,
