
[2064]: Add Karras to DPMSolverMultistepScheduler #3001


Merged
9 commits merged on Apr 12, 2023

Conversation

nipunjindal
Contributor

@nipunjindal nipunjindal commented Apr 6, 2023

Discussion: #2064

Adding karras sigmas schedule to DPMSolverMultistepScheduler based on PR.

As I mentioned earlier in #2064, `sample_dpmpp_sde` is not equivalent to DPM Multistep + Karras, which I confirmed after running the test. There is also a bug in the k-diffusion code where `BrownianTreeNoiseSampler` is not passed the seed, so `sample_dpmpp_sde` returns different results on each run; it is fixed once a seed is supplied.

I believe we can still merge this PR as it's needed, but #2064 needs a new scheduler.
Let me know your thoughts.

DPM Multistep + Karras Sigma

import torch
from diffusers import DPMSolverMultistepScheduler, StableDiffusionPipeline

seed = 33

pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = pipe.to("cuda")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, use_karras_sigmas=True
)

prompt = "an astronaut riding a horse on mars"

generator = torch.Generator(device="cuda").manual_seed(seed)
image = pipe(prompt, generator=generator, num_inference_steps=20).images[0]

display(image)

[generated image: DPM Multistep + Karras sigmas]

import torch
from diffusers import StableDiffusionKDiffusionPipeline

seed = 33

pipe = StableDiffusionKDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4"
).to("cuda")
pipe.set_scheduler("sample_dpmpp_2m")

prompt = "an astronaut riding a horse on mars"

generator = torch.Generator(device="cuda").manual_seed(seed)
image = pipe(
    prompt, generator=generator, num_inference_steps=20, use_karras_sigmas=True
).images[0]

display(image)

[generated image: k-diffusion sample_dpmpp_2m + Karras sigmas]

@HuggingFaceDocBuilderDev

HuggingFaceDocBuilderDev commented Apr 6, 2023

The documentation is not available anymore as the PR was closed or merged.

@nipunjindal
Contributor Author

@patrickvonplaten @sayakpaul Would love to get your review and thoughts related to #2064. 🙏

Member

@sayakpaul sayakpaul left a comment


Hey @nipunjindal! Thanks a lot for adding in this one.

Maybe, let's also add a test?

@sayakpaul
Member

I believe we can still merge this PR as it's needed but the #2064 needs a new scheduler.

Also, could you elaborate the second part of ^ a bit?

@nipunjindal
Contributor Author

I believe we can still merge this PR as it's needed but the #2064 needs a new scheduler.

Also, could you elaborate the second part of ^ a bit?

Certainly. In response to this message: I believe we need to introduce a new scheduler in diffusers that does not currently exist, as I mentioned in my earlier message. I'd like to confirm that you agree with this proposal.

Contributor

@patrickvonplaten patrickvonplaten left a comment


Looks good to me - let's just make sure that the `# Copied from` mechanism works correctly.

@@ -171,7 +171,6 @@ def __init__(
self.model_outputs = [None] * solver_order
self.lower_order_nums = 0

# Copied from diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler.set_timesteps
Member


Because it's no longer the same (as we added use_karras_sigmas). So, in a separate PR, we can add support for Karras sigmas to this scheduler too.

Cc: @patrickvonplaten

Contributor


Makes sense!

Contributor

@patrickvonplaten patrickvonplaten left a comment


Good to go for me once the tests are green :-)


@nipunjindal
Contributor Author

nipunjindal commented Apr 12, 2023

Sorry for being late to add the copied comment change. Thanks @sayakpaul for stepping in and helping. 🙏

@sayakpaul
Member

Sorry for being late to add the copied comment change. Thanks @sayakpaul for stepping in and helping. 🙏

It's no problem, really. I was going to ping you. I hope it's fine that I went into the PR a bit to make it fully ready to 🚀. We'd love to include this in the release we're planning.

@sayakpaul
Member

@nipunjindal the test is failing:

FAILED tests/schedulers/test_scheduler_dpm_multi.py::DPMSolverMultistepSchedulerTest::test_full_loop_with_karras_and_v_prediction - TypeError: 'NoneType' object cannot be interpreted as an integer

Possible for you to take a look?
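For context, the failing call reduces to passing `None` as the number of steps to `np.linspace` (`self.num_inference_steps` had not been set when `_convert_to_karras` ran). A minimal repro, assuming only NumPy:

```python
import numpy as np

# np.linspace requires an integer `num`; passing None reproduces the
# TypeError reported by the failing test above.
try:
    np.linspace(0, 1, None)
except TypeError as e:
    print(e)
```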

@nipunjindal
Contributor Author

@sayakpaul Yes, I'll take a look and get back to you.

@nipunjindal
Contributor Author

There seems to be an issue on GitHub's side, as I'm unable to push or pull. Running ssh -vvvv shows no errors, and changing the SSH key did not help. Below is the modification needed after the recent code change for num_inference_steps == num_timesteps, which led to num_inference_steps not being defined inside _convert_to_karras. While I investigate the GitHub connection issue, we can apply the patch below.

commit a8e05e84b8da0d61b55e00e3b73903c1b495f181
Author: njindal <[email protected]>
Date:   Wed Apr 12 15:25:33 2023 +0530

    Test Fix

diff --git a/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py b/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
index ef0aed55..3399ee2c 100644
--- a/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
+++ b/src/diffusers/schedulers/scheduling_dpmsolver_multistep.py
@@ -207,7 +207,7 @@ class DPMSolverMultistepScheduler(SchedulerMixin, ConfigMixin):
         if self.use_karras_sigmas:
             sigmas = np.array(((1 - self.alphas_cumprod) / self.alphas_cumprod) ** 0.5)
             log_sigmas = np.log(sigmas)
-            sigmas = self._convert_to_karras(in_sigmas=sigmas)
+            sigmas = self._convert_to_karras(in_sigmas=sigmas, num_inference_steps=num_inference_steps)
             timesteps = np.array([self._sigma_to_t(sigma, log_sigmas) for sigma in sigmas]).round()
             timesteps = np.flip(timesteps).copy().astype(np.int64)
 
@@ -285,14 +285,14 @@ class DPMSolverMultistepScheduler(SchedulerMixin, ConfigMixin):
         return t
 
     # Copied from diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler._convert_to_karras
-    def _convert_to_karras(self, in_sigmas: torch.FloatTensor) -> torch.FloatTensor:
+    def _convert_to_karras(self, in_sigmas: torch.FloatTensor, num_inference_steps) -> torch.FloatTensor:
         """Constructs the noise schedule of Karras et al. (2022)."""
 
         sigma_min: float = in_sigmas[-1].item()
         sigma_max: float = in_sigmas[0].item()
 
         rho = 7.0  # 7.0 is the value used in the paper
-        ramp = np.linspace(0, 1, self.num_inference_steps)
+        ramp = np.linspace(0, 1, num_inference_steps)
         min_inv_rho = sigma_min ** (1 / rho)
         max_inv_rho = sigma_max ** (1 / rho)
         sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
diff --git a/src/diffusers/schedulers/scheduling_euler_discrete.py b/src/diffusers/schedulers/scheduling_euler_discrete.py
index eea1d14e..7237128c 100644
--- a/src/diffusers/schedulers/scheduling_euler_discrete.py
+++ b/src/diffusers/schedulers/scheduling_euler_discrete.py
@@ -206,7 +206,7 @@ class EulerDiscreteScheduler(SchedulerMixin, ConfigMixin):
             )
 
         if self.use_karras_sigmas:
-            sigmas = self._convert_to_karras(in_sigmas=sigmas)
+            sigmas = self._convert_to_karras(in_sigmas=sigmas, num_inference_steps=self.num_inference_steps)
             timesteps = np.array([self._sigma_to_t(sigma, log_sigmas) for sigma in sigmas])
 
         sigmas = np.concatenate([sigmas, [0.0]]).astype(np.float32)
@@ -241,14 +241,14 @@ class EulerDiscreteScheduler(SchedulerMixin, ConfigMixin):
         return t
 
     # Copied from https://github.com/crowsonkb/k-diffusion/blob/686dbad0f39640ea25c8a8c6a6e56bb40eacefa2/k_diffusion/sampling.py#L17
-    def _convert_to_karras(self, in_sigmas: torch.FloatTensor) -> torch.FloatTensor:
+    def _convert_to_karras(self, in_sigmas: torch.FloatTensor, num_inference_steps) -> torch.FloatTensor:
         """Constructs the noise schedule of Karras et al. (2022)."""
 
         sigma_min: float = in_sigmas[-1].item()
         sigma_max: float = in_sigmas[0].item()
 
         rho = 7.0  # 7.0 is the value used in the paper
-        ramp = np.linspace(0, 1, self.num_inference_steps)
+        ramp = np.linspace(0, 1, num_inference_steps)
         min_inv_rho = sigma_min ** (1 / rho)
         max_inv_rho = sigma_max ** (1 / rho)
         sigmas = (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho
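For reference, the Karras schedule that `_convert_to_karras` computes in the patch above can be sketched standalone; the sigma endpoints below are illustrative values, not taken from the PR:

```python
import numpy as np

def karras_sigmas(sigma_min: float, sigma_max: float,
                  num_inference_steps: int, rho: float = 7.0) -> np.ndarray:
    """Karras et al. (2022) schedule: interpolate in sigma**(1/rho) space.

    rho=7.0 is the value used in the paper.
    """
    ramp = np.linspace(0, 1, num_inference_steps)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    return (max_inv_rho + ramp * (min_inv_rho - max_inv_rho)) ** rho

# Illustrative endpoints: the schedule decreases from sigma_max to sigma_min,
# spending more steps at low noise levels than a linear schedule would.
sigmas = karras_sigmas(0.03, 14.6, 20)
```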

sayakpaul and others added 2 commits April 12, 2023 15:58
@sayakpaul
Member

Sorry, I think I probably messed up with the co-authorship. Apologies.

@nipunjindal
Contributor Author

These look like unrelated errors:

FAILED tests/pipelines/stable_diffusion/test_stable_diffusion.py::StableDiffusionPipelineFastTests::test_stable_diffusion_height_width_opt - Failed: Timeout >60.0s
FAILED tests/pipelines/stable_unclip/test_stable_unclip_img2img.py::StableUnCLIPImg2ImgPipelineFastTests::test_inference_batch_consistent - Failed: Timeout >60.0s
==== 2 failed, 1020 passed, 723 skipped, 391 warnings in 783.04s (0:13:03) =====

@sayakpaul
Member

I think that is unrelated and okay to merge :)

@sayakpaul sayakpaul merged commit 524535b into huggingface:main Apr 12, 2023
w4ffl35 pushed a commit to w4ffl35/diffusers that referenced this pull request Apr 14, 2023
* [2737]: Add Karras DPMSolverMultistepScheduler

* [2737]: Add Karras DPMSolverMultistepScheduler

* Add test

* Apply suggestions from code review

Co-authored-by: Patrick von Platen <[email protected]>

* fix: repo consistency.

* remove Copied from statement from the set_timestep method.

* fix: test

* Empty commit.

Co-authored-by: njindal <[email protected]>

---------

Co-authored-by: njindal <[email protected]>
Co-authored-by: Patrick von Platen <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
dg845 pushed a commit to dg845/diffusers that referenced this pull request May 6, 2023
yoonseokjin pushed a commit to yoonseokjin/diffusers that referenced this pull request Dec 25, 2023
AmericanPresidentJimmyCarter pushed a commit to AmericanPresidentJimmyCarter/diffusers that referenced this pull request Apr 26, 2024