Pipeline deprecations #11354

Open
Wants to merge 33 commits into base: main
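The changes below follow a single pattern: the deprecated pipelines keep their class and function names but move under the `diffusers.pipelines.deprecated` namespace, and every docs `[[autodoc]]` target, `# Copied from` marker, and import path is updated to match. A minimal sketch of the resulting import change, assuming the module layout on this PR's branch (paths taken from the hunks below; top-level `diffusers` re-exports in the touched scripts are left untouched):

```python
# Sketch only: assumes the deprecated-namespace layout introduced on this PR's branch.

# Before (pre-PR layout):
#   from diffusers.pipelines.wuerstchen import WuerstchenPrior

# After (as updated throughout this diff):
from diffusers.pipelines.deprecated.wuerstchen import WuerstchenPrior

# Top-level convenience imports in the touched scripts stay the same, e.g.:
from diffusers import WuerstchenPriorPipeline
```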
4 changes: 2 additions & 2 deletions docs/source/en/api/attnprocessor.md
@@ -46,7 +46,7 @@ An attention processor is a class for applying different types of attention mechanisms

## CrossFrameAttnProcessor

-[[autodoc]] pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.CrossFrameAttnProcessor
+[[autodoc]] pipelines.deprecated.text_to_video_synthesis.pipeline_text_to_video_zero.CrossFrameAttnProcessor

## Custom Diffusion

@@ -163,4 +163,4 @@ An attention processor is a class for applying different types of attention mechanisms

## XLAFluxFlashAttnProcessor2_0

-[[autodoc]] models.attention_processor.XLAFluxFlashAttnProcessor2_0
+[[autodoc]] models.attention_processor.XLAFluxFlashAttnProcessor2_0
2 changes: 1 addition & 1 deletion docs/source/en/api/models/controlnet_flux.md
@@ -42,4 +42,4 @@ pipe = FluxControlNetPipeline.from_pretrained("black-forest-labs/FLUX.1-dev", co

## FluxControlNetOutput

-[[autodoc]] models.controlnet_flux.FluxControlNetOutput
+[[autodoc]] models.controlnets.FluxControlNetOutput
2 changes: 1 addition & 1 deletion docs/source/en/api/models/controlnet_sparsectrl.md
@@ -43,4 +43,4 @@ controlnet = SparseControlNetModel.from_pretrained("guoyww/animatediff-sparsectr

## SparseControlNetOutput

-[[autodoc]] models.controlnet_sparsectrl.SparseControlNetOutput
+[[autodoc]] models.controlnets.SparseControlNetOutput
2 changes: 1 addition & 1 deletion docs/source/en/api/pipelines/i2vgenxl.md
@@ -55,4 +55,4 @@ Sample output with I2VGenXL:
- __call__

## I2VGenXLPipelineOutput
-[[autodoc]] pipelines.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput
+[[autodoc]] pipelines.deprecated.i2vgen_xl.pipeline_i2vgen_xl.I2VGenXLPipelineOutput
2 changes: 1 addition & 1 deletion docs/source/en/api/pipelines/pia.md
@@ -168,4 +168,4 @@ FreeInit is not really free - the improved quality comes at the cost of extra co

## PIAPipelineOutput

-[[autodoc]] pipelines.pia.PIAPipelineOutput
+[[autodoc]] pipelines.deprecated.pia.PIAPipelineOutput
2 changes: 1 addition & 1 deletion docs/source/en/api/pipelines/semantic_stable_diffusion.md
@@ -31,5 +31,5 @@ Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers)
- __call__

## SemanticStableDiffusionPipelineOutput
-[[autodoc]] pipelines.semantic_stable_diffusion.pipeline_output.SemanticStableDiffusionPipelineOutput
+[[autodoc]] pipelines.deprecated.semantic_stable_diffusion.pipeline_output.SemanticStableDiffusionPipelineOutput
- all
2 changes: 1 addition & 1 deletion docs/source/en/api/pipelines/shap_e.md
@@ -34,4 +34,4 @@ See the [reuse components across pipelines](../../using-diffusers/loading#reuse-
- __call__

## ShapEPipelineOutput
-[[autodoc]] pipelines.shap_e.pipeline_shap_e.ShapEPipelineOutput
+[[autodoc]] pipelines.deprecated.shap_e.pipeline_shap_e.ShapEPipelineOutput
(file name not shown)
@@ -35,14 +35,14 @@ Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn

## StableDiffusionLDM3DPipeline

-[[autodoc]] pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.StableDiffusionLDM3DPipeline
+[[autodoc]] pipelines.deprecated.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.StableDiffusionLDM3DPipeline
- all
- __call__


## LDM3DPipelineOutput

-[[autodoc]] pipelines.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput
+[[autodoc]] pipelines.deprecated.stable_diffusion_ldm3d.pipeline_stable_diffusion_ldm3d.LDM3DPipelineOutput
- all
- __call__

(file name not shown)
@@ -56,6 +56,6 @@ Make sure to check out the Stable Diffusion [Tips](overview#tips) section to learn

## StableDiffusionSafePipelineOutput

-[[autodoc]] pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput
+[[autodoc]] pipelines.deprecated.stable_diffusion_safe.StableDiffusionSafePipelineOutput
- all
- __call__
2 changes: 1 addition & 1 deletion docs/source/en/api/pipelines/text_to_video.md
@@ -194,4 +194,4 @@ Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers)
- __call__

## TextToVideoSDPipelineOutput
-[[autodoc]] pipelines.text_to_video_synthesis.TextToVideoSDPipelineOutput
+[[autodoc]] pipelines.deprecated.text_to_video_synthesis.TextToVideoSDPipelineOutput
2 changes: 1 addition & 1 deletion docs/source/en/api/pipelines/text_to_video_zero.md
@@ -303,4 +303,4 @@ Make sure to check out the Schedulers [guide](../../using-diffusers/schedulers)
- __call__

## TextToVideoPipelineOutput
-[[autodoc]] pipelines.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput
+[[autodoc]] pipelines.deprecated.text_to_video_synthesis.pipeline_text_to_video_zero.TextToVideoPipelineOutput
2 changes: 1 addition & 1 deletion docs/source/en/api/pipelines/wuerstchen.md
@@ -145,7 +145,7 @@ The original codebase, as well as experimental ideas, can be found at [dome272/W

## WuerstchenPriorPipelineOutput

-[[autodoc]] pipelines.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput
+[[autodoc]] pipelines.deprecated.wuerstchen.pipeline_wuerstchen_prior.WuerstchenPriorPipelineOutput

## WuerstchenDecoderPipeline

4 changes: 2 additions & 2 deletions examples/community/pipeline_animatediff_controlnet.py
@@ -436,7 +436,7 @@ def prepare_ip_adapter_image_embeds(
image_embeds = ip_adapter_image_embeds
return image_embeds

-# Copied from diffusers.pipelines.text_to_video_synthesis/pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents
+# Copied from diffusers.pipelines.deprecated.text_to_video_synthesis.pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents
def decode_latents(self, latents):
latents = 1 / self.vae.config.scaling_factor * latents

@@ -663,7 +663,7 @@ def check_image(self, image, prompt, prompt_embeds):
f"If image batch size is not 1, image batch size must be same as prompt batch size. image batch size: {image_batch_size}, prompt batch size: {prompt_batch_size}"
)

-# Copied from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_synth.TextToVideoSDPipeline.prepare_latents
+# Copied from diffusers.pipelines.deprecated.text_to_video_synthesis.pipeline_text_to_video_synth.TextToVideoSDPipeline.prepare_latents
def prepare_latents(
self, batch_size, num_channels_latents, num_frames, height, width, dtype, device, generator, latents=None
):
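The `# Copied from ...` markers updated above are checked mechanically in the diffusers repository: the dotted path is resolved to the original method and its source is compared against the local copy, so the markers have to follow the module move. A rough sketch of that kind of check, assuming the PR branch's layout (the real utility is more tolerant, handling `with X -> Y` renames and formatting differences; `MyPipeline` below is hypothetical):

```python
import importlib
import inspect


def copied_source_matches(marker_path: str, local_fn) -> bool:
    """Resolve a 'Copied from <module>.<Class>.<method>' path and compare sources.

    Illustrative sketch only; not the actual diffusers check.
    """
    module_path, cls_name, fn_name = marker_path.rsplit(".", 2)
    original_cls = getattr(importlib.import_module(module_path), cls_name)
    original_fn = getattr(original_cls, fn_name)
    return inspect.getsource(original_fn) == inspect.getsource(local_fn)


# Example with a marker from the hunk above (hypothetical local class):
# copied_source_matches(
#     "diffusers.pipelines.deprecated.text_to_video_synthesis."
#     "pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents",
#     MyPipeline.decode_latents,
# )
```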
2 changes: 1 addition & 1 deletion examples/community/pipeline_animatediff_img2video.py
@@ -553,7 +553,7 @@ def prepare_ip_adapter_image_embeds(
image_embeds = ip_adapter_image_embeds
return image_embeds

-# Copied from diffusers.pipelines.text_to_video_synthesis/pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents
+# Copied from diffusers.pipelines.deprecated.text_to_video_synthesis.pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents
def decode_latents(self, latents):
latents = 1 / self.vae.config.scaling_factor * latents

4 changes: 2 additions & 2 deletions examples/community/pipeline_animatediff_ipex.py
@@ -425,7 +425,7 @@ def prepare_ip_adapter_image_embeds(

return image_embeds

-# Copied from diffusers.pipelines.text_to_video_synthesis/pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents
+# Copied from diffusers.pipelines.deprecated.text_to_video_synthesis.pipeline_text_to_video_synth.TextToVideoSDPipeline.decode_latents
def decode_latents(self, latents):
latents = 1 / self.vae.config.scaling_factor * latents

@@ -520,7 +520,7 @@ def check_inputs(
f"`ip_adapter_image_embeds` has to be a list of 3D or 4D tensors but is {ip_adapter_image_embeds[0].ndim}D"
)

-# Copied from diffusers.pipelines.text_to_video_synthesis.pipeline_text_to_video_synth.TextToVideoSDPipeline.prepare_latents
+# Copied from diffusers.pipelines.deprecated.text_to_video_synthesis.pipeline_text_to_video_synth.TextToVideoSDPipeline.prepare_latents
def prepare_latents(
self, batch_size, num_channels_latents, num_frames, height, width, dtype, device, generator, latents=None
):
2 changes: 1 addition & 1 deletion examples/community/pipeline_stg_cogvideox.py
@@ -427,7 +427,7 @@ def prepare_extra_step_kwargs(self, generator, eta):
extra_step_kwargs["generator"] = generator
return extra_step_kwargs

-# Copied from diffusers.pipelines.latte.pipeline_latte.LattePipeline.check_inputs
+# Copied from diffusers.pipelines.deprecated.latte.pipeline_latte.LattePipeline.check_inputs
def check_inputs(
self,
prompt,
10 changes: 5 additions & 5 deletions examples/community/unclip_image_interpolation.py
@@ -18,7 +18,7 @@
UNet2DConditionModel,
UNet2DModel,
)
-from diffusers.pipelines.unclip import UnCLIPTextProjModel
+from diffusers.pipelines.deprecated.unclip import UnCLIPTextProjModel
from diffusers.utils import logging
from diffusers.utils.torch_utils import randn_tensor

@@ -84,7 +84,7 @@ class UnCLIPImageInterpolationPipeline(DiffusionPipeline):
decoder_scheduler: UnCLIPScheduler
super_res_scheduler: UnCLIPScheduler

-# Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline.__init__
+# Copied from diffusers.pipelines.deprecated.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline.__init__
def __init__(
self,
decoder: UNet2DConditionModel,
@@ -113,7 +113,7 @@ def __init__(
super_res_scheduler=super_res_scheduler,
)

-# Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
+# Copied from diffusers.pipelines.deprecated.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
if latents is None:
latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
@@ -125,7 +125,7 @@ def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
latents = latents * scheduler.init_noise_sigma
return latents

-# Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline._encode_prompt
+# Copied from diffusers.pipelines.deprecated.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline._encode_prompt
def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance):
batch_size = len(prompt) if isinstance(prompt, list) else 1

@@ -189,7 +189,7 @@ def _encode_prompt(self, prompt, device, num_images_per_prompt, do_classifier_free_guidance):

return prompt_embeds, text_encoder_hidden_states, text_mask

-# Copied from diffusers.pipelines.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline._encode_image
+# Copied from diffusers.pipelines.deprecated.unclip.pipeline_unclip_image_variation.UnCLIPImageVariationPipeline._encode_image
def _encode_image(self, image, device, num_images_per_prompt, image_embeddings: Optional[torch.Tensor] = None):
dtype = next(self.image_encoder.parameters()).dtype

8 changes: 4 additions & 4 deletions examples/community/unclip_text_interpolation.py
@@ -14,7 +14,7 @@
UNet2DConditionModel,
UNet2DModel,
)
-from diffusers.pipelines.unclip import UnCLIPTextProjModel
+from diffusers.pipelines.deprecated.unclip import UnCLIPTextProjModel
from diffusers.utils import logging
from diffusers.utils.torch_utils import randn_tensor

@@ -78,7 +78,7 @@ class UnCLIPTextInterpolationPipeline(DiffusionPipeline):
decoder_scheduler: UnCLIPScheduler
super_res_scheduler: UnCLIPScheduler

-# Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.__init__
+# Copied from diffusers.pipelines.deprecated.unclip.pipeline_unclip.UnCLIPPipeline.__init__
def __init__(
self,
prior: PriorTransformer,
@@ -107,7 +107,7 @@ def __init__(
super_res_scheduler=super_res_scheduler,
)

-# Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
+# Copied from diffusers.pipelines.deprecated.unclip.pipeline_unclip.UnCLIPPipeline.prepare_latents
def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
if latents is None:
latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
@@ -119,7 +119,7 @@ def prepare_latents(self, shape, dtype, device, generator, latents, scheduler):
latents = latents * scheduler.init_noise_sigma
return latents

-# Copied from diffusers.pipelines.unclip.pipeline_unclip.UnCLIPPipeline._encode_prompt
+# Copied from diffusers.pipelines.deprecated.unclip.pipeline_unclip.UnCLIPPipeline._encode_prompt
def _encode_prompt(
self,
prompt,
(file name not shown)
@@ -40,7 +40,7 @@

from diffusers import AutoPipelineForText2Image, DDPMWuerstchenScheduler, WuerstchenPriorPipeline
from diffusers.optimization import get_scheduler
-from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS, WuerstchenPrior
+from diffusers.pipelines.deprecated.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS, WuerstchenPrior
from diffusers.utils import check_min_version, is_wandb_available, make_image_grid
from diffusers.utils.logging import set_verbosity_error, set_verbosity_info

(file name not shown)
@@ -40,7 +40,7 @@

from diffusers import AutoPipelineForText2Image, DDPMWuerstchenScheduler
from diffusers.optimization import get_scheduler
-from diffusers.pipelines.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS, WuerstchenPrior
+from diffusers.pipelines.deprecated.wuerstchen import DEFAULT_STAGE_C_TIMESTEPS, WuerstchenPrior
from diffusers.training_utils import EMAModel
from diffusers.utils import check_min_version, is_wandb_available, make_image_grid
from diffusers.utils.logging import set_verbosity_error, set_verbosity_info
2 changes: 1 addition & 1 deletion scripts/convert_kakao_brain_unclip_to_diffusers.py
@@ -7,7 +7,7 @@

from diffusers import UnCLIPPipeline, UNet2DConditionModel, UNet2DModel
from diffusers.models.transformers.prior_transformer import PriorTransformer
-from diffusers.pipelines.unclip.text_proj import UnCLIPTextProjModel
+from diffusers.pipelines.deprecated.unclip.text_proj import UnCLIPTextProjModel
from diffusers.schedulers.scheduling_unclip import UnCLIPScheduler


2 changes: 1 addition & 1 deletion scripts/convert_shap_e_to_diffusers.py
@@ -5,7 +5,7 @@
from accelerate import load_checkpoint_and_dispatch

from diffusers.models.transformers.prior_transformer import PriorTransformer
-from diffusers.pipelines.shap_e import ShapERenderer
+from diffusers.pipelines.deprecated.shap_e import ShapERenderer


"""
2 changes: 1 addition & 1 deletion scripts/convert_stable_cascade.py
@@ -21,7 +21,7 @@
from diffusers.loaders.single_file_utils import convert_stable_cascade_unet_single_file_to_diffusers
from diffusers.models import StableCascadeUNet
from diffusers.models.modeling_utils import load_model_dict_into_meta
-from diffusers.pipelines.wuerstchen import PaellaVQModel
+from diffusers.pipelines.deprecated.wuerstchen import PaellaVQModel
from diffusers.utils import is_accelerate_available


2 changes: 1 addition & 1 deletion scripts/convert_stable_cascade_lite.py
@@ -21,7 +21,7 @@
from diffusers.loaders.single_file_utils import convert_stable_cascade_unet_single_file_to_diffusers
from diffusers.models import StableCascadeUNet
from diffusers.models.modeling_utils import load_model_dict_into_meta
-from diffusers.pipelines.wuerstchen import PaellaVQModel
+from diffusers.pipelines.deprecated.wuerstchen import PaellaVQModel
from diffusers.utils import is_accelerate_available


2 changes: 1 addition & 1 deletion scripts/convert_wuerstchen.py
@@ -11,7 +11,7 @@
WuerstchenDecoderPipeline,
WuerstchenPriorPipeline,
)
-from diffusers.pipelines.wuerstchen import PaellaVQModel, WuerstchenDiffNeXt, WuerstchenPrior
+from diffusers.pipelines.deprecated.wuerstchen import PaellaVQModel, WuerstchenDiffNeXt, WuerstchenPrior


model_path = "models/"
2 changes: 1 addition & 1 deletion src/diffusers/models/unets/unet_stable_cascade.py
@@ -27,7 +27,7 @@
from ..modeling_utils import ModelMixin


-# Copied from diffusers.pipelines.wuerstchen.modeling_wuerstchen_common.WuerstchenLayerNorm with WuerstchenLayerNorm -> SDCascadeLayerNorm
+# Copied from diffusers.pipelines.deprecated.wuerstchen.modeling_wuerstchen_common.WuerstchenLayerNorm with WuerstchenLayerNorm -> SDCascadeLayerNorm
class SDCascadeLayerNorm(nn.LayerNorm):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
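The marker in the last hunk also carries a `with WuerstchenLayerNorm -> SDCascadeLayerNorm` clause; the copy check applies such renames to the copied source before comparing it. A hedged sketch of that extra step (the exact semantics are assumed, not shown in this diff):

```python
def apply_copied_from_rename(source: str, rename_clause: str) -> str:
    """Apply the 'with OldName -> NewName' substitutions of a 'Copied from' marker.

    Sketch under assumed semantics: each comma-separated 'A -> B' pair is a plain
    textual replacement applied to the copied source before comparison.
    """
    for pair in rename_clause.split(","):
        old, new = (part.strip() for part in pair.split("->"))
        source = source.replace(old, new)
    return source


# e.g. apply_copied_from_rename(wuerstchen_layernorm_source,
#                               "WuerstchenLayerNorm -> SDCascadeLayerNorm")
```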