
Commit 3515bbb

Fix typos in docs and comments

committed · 1 parent f00a995 · commit 3515bbb

115 files changed: +164 -164 lines changed

docs/source/en/api/pipelines/animatediff.md (+1 -1)

@@ -966,7 +966,7 @@ pipe.to("cuda")
 prompt = {
     0: "A caterpillar on a leaf, high quality, photorealistic",
     40: "A caterpillar transforming into a cocoon, on a leaf, near flowers, photorealistic",
-    80: "A cocoon on a leaf, flowers in the backgrond, photorealistic",
+    80: "A cocoon on a leaf, flowers in the background, photorealistic",
     120: "A cocoon maturing and a butterfly being born, flowers and leaves visible in the background, photorealistic",
     160: "A beautiful butterfly, vibrant colors, sitting on a leaf, flowers in the background, photorealistic",
     200: "A beautiful butterfly, flying away in a forest, photorealistic",

docs/source/en/api/pipelines/cogvideox.md (+1 -1)

@@ -23,7 +23,7 @@

 The abstract from the paper is:

-*We introduce CogVideoX, a large-scale diffusion transformer model designed for generating videos based on text prompts. To efficently model video data, we propose to levearge a 3D Variational Autoencoder (VAE) to compresses videos along both spatial and temporal dimensions. To improve the text-video alignment, we propose an expert transformer with the expert adaptive LayerNorm to facilitate the deep fusion between the two modalities. By employing a progressive training technique, CogVideoX is adept at producing coherent, long-duration videos characterized by significant motion. In addition, we develop an effectively text-video data processing pipeline that includes various data preprocessing strategies and a video captioning method. It significantly helps enhance the performance of CogVideoX, improving both generation quality and semantic alignment. Results show that CogVideoX demonstrates state-of-the-art performance across both multiple machine metrics and human evaluations. The model weight of CogVideoX-2B is publicly available at https://github.com/THUDM/CogVideo.*
+*We introduce CogVideoX, a large-scale diffusion transformer model designed for generating videos based on text prompts. To efficiently model video data, we propose to levearge a 3D Variational Autoencoder (VAE) to compresses videos along both spatial and temporal dimensions. To improve the text-video alignment, we propose an expert transformer with the expert adaptive LayerNorm to facilitate the deep fusion between the two modalities. By employing a progressive training technique, CogVideoX is adept at producing coherent, long-duration videos characterized by significant motion. In addition, we develop an effectively text-video data processing pipeline that includes various data preprocessing strategies and a video captioning method. It significantly helps enhance the performance of CogVideoX, improving both generation quality and semantic alignment. Results show that CogVideoX demonstrates state-of-the-art performance across both multiple machine metrics and human evaluations. The model weight of CogVideoX-2B is publicly available at https://github.com/THUDM/CogVideo.*

 <Tip>

docs/source/en/api/pipelines/ledits_pp.md (+1 -1)

@@ -29,7 +29,7 @@ You can find additional information about LEDITS++ on the [project page](https:/
 </Tip>

 <Tip warning={true}>
-Due to some backward compatability issues with the current diffusers implementation of [`~schedulers.DPMSolverMultistepScheduler`] this implementation of LEdits++ can no longer guarantee perfect inversion.
+Due to some backward compatibility issues with the current diffusers implementation of [`~schedulers.DPMSolverMultistepScheduler`] this implementation of LEdits++ can no longer guarantee perfect inversion.
 This issue is unlikely to have any noticeable effects on applied use-cases. However, we provide an alternative implementation that guarantees perfect inversion in a dedicated [GitHub repo](https://github.com/ml-research/ledits_pp).
 </Tip>

docs/source/en/api/pipelines/wan.md (+2 -2)

@@ -285,7 +285,7 @@ pipe = WanImageToVideoPipeline.from_pretrained(
     image_encoder=image_encoder,
     torch_dtype=torch.bfloat16
 )
-# Since we've offloaded the larger models alrady, we can move the rest of the model components to GPU
+# Since we've offloaded the larger models already, we can move the rest of the model components to GPU
 pipe.to("cuda")

 image = load_image(
@@ -368,7 +368,7 @@ pipe = WanImageToVideoPipeline.from_pretrained(
     image_encoder=image_encoder,
     torch_dtype=torch.bfloat16
 )
-# Since we've offloaded the larger models alrady, we can move the rest of the model components to GPU
+# Since we've offloaded the larger models already, we can move the rest of the model components to GPU
 pipe.to("cuda")

 image = load_image(

docs/source/en/using-diffusers/inference_with_lcm.md (+2 -2)

@@ -485,7 +485,7 @@ image = image[:, :, None]
 image = np.concatenate([image, image, image], axis=2)
 canny_image = Image.fromarray(image).resize((1024, 1216))

-adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, varient="fp16").to("cuda")
+adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda")

 unet = UNet2DConditionModel.from_pretrained(
     "latent-consistency/lcm-sdxl",
@@ -551,7 +551,7 @@ image = image[:, :, None]
 image = np.concatenate([image, image, image], axis=2)
 canny_image = Image.fromarray(image).resize((1024, 1024))

-adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, varient="fp16").to("cuda")
+adapter = T2IAdapter.from_pretrained("TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16, variant="fp16").to("cuda")

 pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
     "stabilityai/stable-diffusion-xl-base-1.0",

docs/source/en/using-diffusers/pag.md (+3 -3)

@@ -154,11 +154,11 @@ pipeline = AutoPipelineForInpainting.from_pretrained(
 pipeline.enable_model_cpu_offload()
 ```

-You can enable PAG on an exisiting inpainting pipeline like this
+You can enable PAG on an existing inpainting pipeline like this

 ```py
-pipeline_inpaint = AutoPipelineForInpaiting.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
-pipeline = AutoPipelineForInpaiting.from_pipe(pipeline_inpaint, enable_pag=True)
+pipeline_inpaint = AutoPipelineForInpainting.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16)
+pipeline = AutoPipelineForInpainting.from_pipe(pipeline_inpaint, enable_pag=True)
 ```

 This still works when your pipeline has a different task:
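
For context, the pattern documented by the corrected lines also extends to other tasks via `from_pipe`; a minimal sketch (the model ID and the text-to-image task choice here are illustrative, not part of this commit):

```py
import torch
from diffusers import AutoPipelineForInpainting, AutoPipelineForText2Image

# Load an inpainting pipeline once...
pipeline_inpaint = AutoPipelineForInpainting.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)

# ...then reuse its components in a PAG-enabled pipeline for a different task, without reloading weights.
pipeline = AutoPipelineForText2Image.from_pipe(pipeline_inpaint, enable_pag=True)
```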

examples/advanced_diffusion_training/README.md (+2 -2)

@@ -125,7 +125,7 @@ Now we'll simply specify the name of the dataset and caption column (in this cas
 ```

 You can also load a dataset straight from by specifying it's name in `dataset_name`.
-Look [here](https://huggingface.co/blog/sdxl_lora_advanced_script#custom-captioning) for more info on creating/loadin your own caption dataset.
+Look [here](https://huggingface.co/blog/sdxl_lora_advanced_script#custom-captioning) for more info on creating/loading your own caption dataset.

 - **optimizer**: for this example, we'll use [prodigy](https://huggingface.co/blog/sdxl_lora_advanced_script#adaptive-optimizers) - an adaptive optimizer
 - **pivotal tuning**
@@ -404,7 +404,7 @@ The advanced script now supports custom choice of U-net blocks to train during D
 > In light of this, we're introducing a new feature to the advanced script to allow for configurable U-net learned blocks.

 **Usage**
-Configure LoRA learned U-net blocks adding a `lora_unet_blocks` flag, with a comma seperated string specifying the targeted blocks.
+Configure LoRA learned U-net blocks adding a `lora_unet_blocks` flag, with a comma separated string specifying the targeted blocks.
 e.g:
 ```bash
 --lora_unet_blocks="unet.up_blocks.0.attentions.0,unet.up_blocks.0.attentions.1"

examples/advanced_diffusion_training/README_flux.md (+1 -1)

@@ -141,7 +141,7 @@ Now we'll simply specify the name of the dataset and caption column (in this cas
 ```

 You can also load a dataset straight from by specifying it's name in `dataset_name`.
-Look [here](https://huggingface.co/blog/sdxl_lora_advanced_script#custom-captioning) for more info on creating/loadin your own caption dataset.
+Look [here](https://huggingface.co/blog/sdxl_lora_advanced_script#custom-captioning) for more info on creating/loading your own caption dataset.

 - **optimizer**: for this example, we'll use [prodigy](https://huggingface.co/blog/sdxl_lora_advanced_script#adaptive-optimizers) - an adaptive optimizer
 - **pivotal tuning**

examples/amused/README.md (+1 -1)

@@ -1,6 +1,6 @@
 ## Amused training

-Amused can be finetuned on simple datasets relatively cheaply and quickly. Using 8bit optimizers, lora, and gradient accumulation, amused can be finetuned with as little as 5.5 GB. Here are a set of examples for finetuning amused on some relatively simple datasets. These training recipies are aggressively oriented towards minimal resources and fast verification -- i.e. the batch sizes are quite low and the learning rates are quite high. For optimal quality, you will probably want to increase the batch sizes and decrease learning rates.
+Amused can be finetuned on simple datasets relatively cheaply and quickly. Using 8bit optimizers, lora, and gradient accumulation, amused can be finetuned with as little as 5.5 GB. Here are a set of examples for finetuning amused on some relatively simple datasets. These training recipes are aggressively oriented towards minimal resources and fast verification -- i.e. the batch sizes are quite low and the learning rates are quite high. For optimal quality, you will probably want to increase the batch sizes and decrease learning rates.

 All training examples use fp16 mixed precision and gradient checkpointing. We don't show 8 bit adam + lora as its about the same memory use as just using lora (bitsandbytes uses full precision optimizer states for weights below a minimum size).

examples/cogvideo/README.md (+1 -1)

@@ -201,7 +201,7 @@ Note that setting the `<ID_TOKEN>` is not necessary. From some limited experimen
 > - The original repository uses a `lora_alpha` of `1`. We found this not suitable in many runs, possibly due to difference in modeling backends and training settings. Our recommendation is to set to the `lora_alpha` to either `rank` or `rank // 2`.
 > - If you're training on data whose captions generate bad results with the original model, a `rank` of 64 and above is good and also the recommendation by the team behind CogVideoX. If the generations are already moderately good on your training captions, a `rank` of 16/32 should work. We found that setting the rank too low, say `4`, is not ideal and doesn't produce promising results.
 > - The authors of CogVideoX recommend 4000 training steps and 100 training videos overall to achieve the best result. While that might yield the best results, we found from our limited experimentation that 2000 steps and 25 videos could also be sufficient.
-> - When using the Prodigy opitimizer for training, one can follow the recommendations from [this](https://huggingface.co/blog/sdxl_lora_advanced_script) blog. Prodigy tends to overfit quickly. From my very limited testing, I found a learning rate of `0.5` to be suitable in addition to `--prodigy_use_bias_correction`, `prodigy_safeguard_warmup` and `--prodigy_decouple`.
+> - When using the Prodigy optimizer for training, one can follow the recommendations from [this](https://huggingface.co/blog/sdxl_lora_advanced_script) blog. Prodigy tends to overfit quickly. From my very limited testing, I found a learning rate of `0.5` to be suitable in addition to `--prodigy_use_bias_correction`, `prodigy_safeguard_warmup` and `--prodigy_decouple`.
 > - The recommended learning rate by the CogVideoX authors and from our experimentation with Adam/AdamW is between `1e-3` and `1e-4` for a dataset of 25+ videos.
 >
 > Note that our testing is not exhaustive due to limited time for exploration. Our recommendation would be to play around with the different knobs and dials to find the best settings for your data.

examples/cogvideo/train_cogvideox_image_to_video_lora.py (+1 -1)

@@ -879,7 +879,7 @@ def prepare_rotary_positional_embeddings(


 def get_optimizer(args, params_to_optimize, use_deepspeed: bool = False):
-    # Use DeepSpeed optimzer
+    # Use DeepSpeed optimizer
     if use_deepspeed:
         from accelerate.utils import DummyOptim

examples/cogvideo/train_cogvideox_lora.py (+1 -1)

@@ -901,7 +901,7 @@ def prepare_rotary_positional_embeddings(


 def get_optimizer(args, params_to_optimize, use_deepspeed: bool = False):
-    # Use DeepSpeed optimzer
+    # Use DeepSpeed optimizer
     if use_deepspeed:
         from accelerate.utils import DummyOptim

examples/community/README.md (+1 -1)

@@ -4864,7 +4864,7 @@ python -m pip install intel_extension_for_pytorch
 ```
 python -m pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
 ```
-2. After pipeline initialization, `prepare_for_ipex()` should be called to enable IPEX accelaration. Supported inference datatypes are Float32 and BFloat16.
+2. After pipeline initialization, `prepare_for_ipex()` should be called to enable IPEX acceleration. Supported inference datatypes are Float32 and BFloat16.

 ```python
 pipe = AnimateDiffPipelineIpex.from_pretrained(base, motion_adapter=adapter, torch_dtype=dtype).to(device)

examples/community/dps_pipeline.py (+2 -2)

@@ -336,13 +336,13 @@ def contributions(self, in_length, out_length, scale, kernel, kernel_width, anti
         expanded_kernel_width = np.ceil(kernel_width) + 2

         # Determine a set of field_of_view for each each output position, these are the pixels in the input image
-        # that the pixel in the output image 'sees'. We get a matrix whos horizontal dim is the output pixels (big) and the
+        # that the pixel in the output image 'sees'. We get a matrix whose horizontal dim is the output pixels (big) and the
         # vertical dim is the pixels it 'sees' (kernel_size + 2)
         field_of_view = np.squeeze(
             np.int16(np.expand_dims(left_boundary, axis=1) + np.arange(expanded_kernel_width) - 1)
         )

-        # Assign weight to each pixel in the field of view. A matrix whos horizontal dim is the output pixels and the
+        # Assign weight to each pixel in the field of view. A matrix whose horizontal dim is the output pixels and the
         # vertical dim is a list of weights matching to the pixel in the field of view (that are specified in
         # 'field_of_view')
         weights = fixed_kernel(1.0 * np.expand_dims(match_coordinates, axis=1) - field_of_view - 1)

examples/community/hd_painter.py (+6 -6)

@@ -201,16 +201,16 @@ def __call__(
         # ================================================== #
         # We use a hack by running the code from the BasicTransformerBlock that is between Self and Cross attentions here
         # The other option would've been modifying the BasicTransformerBlock and adding this functionality here.
-        # I assumed that changing the BasicTransformerBlock would have been a bigger deal and decided to use this hack isntead.
+        # I assumed that changing the BasicTransformerBlock would have been a bigger deal and decided to use this hack instead.

-        # The SelfAttention block recieves the normalized latents from the BasicTransformerBlock,
+        # The SelfAttention block receives the normalized latents from the BasicTransformerBlock,
         # But the residual of the output is the non-normalized version.
         # Therefore we unnormalize the input hidden state here
         unnormalized_input_hidden_states = (
             input_hidden_states + self.transformer_block.norm1.bias
         ) * self.transformer_block.norm1.weight

-        # TODO: return if neccessary
+        # TODO: return if necessary
         # if self.use_ada_layer_norm_zero:
         # attn_output = gate_msa.unsqueeze(1) * attn_output
         # elif self.use_ada_layer_norm_single:
@@ -220,7 +220,7 @@ def __call__(
         if transformer_hidden_states.ndim == 4:
             transformer_hidden_states = transformer_hidden_states.squeeze(1)

-        # TODO: return if neccessary
+        # TODO: return if necessary
         # 2.5 GLIGEN Control
         # if gligen_kwargs is not None:
         # transformer_hidden_states = self.fuser(transformer_hidden_states, gligen_kwargs["objs"])
@@ -266,7 +266,7 @@ def __call__(
         ) = cross_attention_input_hidden_states.chunk(2)

         # Same split for the encoder_hidden_states i.e. the tokens
-        # Since the SelfAttention processors don't get the encoder states as input, we inject them into the processor in the begining.
+        # Since the SelfAttention processors don't get the encoder states as input, we inject them into the processor in the beginning.
         _encoder_hidden_states_unconditional, encoder_hidden_states_conditional = self.encoder_hidden_states.chunk(
             2
         )
@@ -896,7 +896,7 @@ def __call__(
 class GaussianSmoothing(nn.Module):
     """
     Apply gaussian smoothing on a
-    1d, 2d or 3d tensor. Filtering is performed seperately for each channel
+    1d, 2d or 3d tensor. Filtering is performed separately for each channel
     in the input using a depthwise convolution.

     Args:
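
The corrected docstring above describes per-channel (depthwise) Gaussian filtering; a minimal, self-contained sketch of that idea, with kernel size, sigma, and shapes chosen purely for illustration rather than taken from the pipeline:

```py
import torch
import torch.nn.functional as F

def gaussian_kernel1d(kernel_size: int, sigma: float) -> torch.Tensor:
    # 1-D Gaussian, normalized to sum to 1
    x = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
    kernel = torch.exp(-(x**2) / (2 * sigma**2))
    return kernel / kernel.sum()

channels, kernel_size, sigma = 3, 5, 1.0
k1d = gaussian_kernel1d(kernel_size, sigma)
k2d = torch.outer(k1d, k1d)  # separable 2-D Gaussian kernel, shape (5, 5)
weight = k2d.view(1, 1, kernel_size, kernel_size).repeat(channels, 1, 1, 1)  # one filter per channel

x = torch.randn(1, channels, 32, 32)
# groups=channels makes the convolution depthwise: each channel is smoothed independently
smoothed = F.conv2d(x, weight, padding=kernel_size // 2, groups=channels)
```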

examples/community/img2img_inpainting.py (+1 -1)

@@ -161,7 +161,7 @@ def __call__(
                 `Image`, or tensor representing an image batch which will be inpainted, *i.e.* parts of the image will
                 be masked out with `mask_image` and repainted according to `prompt`.
             inner_image (`torch.Tensor` or `PIL.Image.Image`):
-                `Image`, or tensor representing an image batch which will be overlayed onto `image`. Non-transparent
+                `Image`, or tensor representing an image batch which will be overlaid onto `image`. Non-transparent
                 regions of `inner_image` must fit inside white pixels in `mask_image`. Expects four channels, with
                 the last channel representing the alpha channel, which will be used to blend `inner_image` with
                 `image`. If not provided, it will be forcibly cast to RGBA.
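
The corrected wording refers to alpha-channel blending of `inner_image` onto `image`; a small PIL sketch of that compositing step (the file names are placeholders, and this illustrates the idea rather than the pipeline's internal code):

```py
from PIL import Image

base = Image.open("image.png").convert("RGB")
inner = Image.open("inner_image.png").convert("RGBA")  # cast to RGBA so an alpha channel is present

# Paste using the alpha channel of `inner` as the blending mask:
# opaque pixels replace `base`, fully transparent pixels leave it untouched.
base.paste(inner, (0, 0), inner)
base.save("blended.png")
```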

examples/community/latent_consistency_img2img.py (+2 -2)

@@ -647,7 +647,7 @@ def _threshold_sample(self, sample: torch.Tensor) -> torch.Tensor:
         return sample

     def set_timesteps(
-        self, stength, num_inference_steps: int, lcm_origin_steps: int, device: Union[str, torch.device] = None
+        self, strength, num_inference_steps: int, lcm_origin_steps: int, device: Union[str, torch.device] = None
     ):
         """
         Sets the discrete timesteps used for the diffusion chain (to be run before inference).
@@ -668,7 +668,7 @@ def set_timesteps(
         # LCM Timesteps Setting: # Linear Spacing
         c = self.config.num_train_timesteps // lcm_origin_steps
         lcm_origin_timesteps = (
-            np.asarray(list(range(1, int(lcm_origin_steps * stength) + 1))) * c - 1
+            np.asarray(list(range(1, int(lcm_origin_steps * strength) + 1))) * c - 1
         ) # LCM Training Steps Schedule
         skipping_step = len(lcm_origin_timesteps) // num_inference_steps
         timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps] # LCM Inference Steps Schedule
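
To see what the renamed `strength` argument controls in this schedule, here is a small standalone recomputation of the same arithmetic; the parameter values are made up for the example, and `num_train_timesteps=1000` is an assumed scheduler default:

```py
import numpy as np

num_train_timesteps = 1000  # assumed scheduler config value
lcm_origin_steps = 50
strength = 0.6              # img2img strength: keep only the first 60% of the origin steps
num_inference_steps = 4

c = num_train_timesteps // lcm_origin_steps  # 20
lcm_origin_timesteps = np.asarray(list(range(1, int(lcm_origin_steps * strength) + 1))) * c - 1
skipping_step = len(lcm_origin_timesteps) // num_inference_steps  # 30 // 4 = 7
timesteps = lcm_origin_timesteps[::-skipping_step][:num_inference_steps]

print(timesteps)  # [599 459 319 179] -- a lower strength starts denoising from an earlier, less noisy timestep
```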

examples/community/magic_mix.py (+1 -1)

@@ -129,7 +129,7 @@ def __call__(

                 input = (
                     (mix_factor * latents) + (1 - mix_factor) * orig_latents
-                ) # interpolating between layout noise and conditionally generated noise to preserve layout sematics
+                ) # interpolating between layout noise and conditionally generated noise to preserve layout semantics
                 input = torch.cat([input] * 2)

             else: # content generation phase
