[Feature] Implement tiled VAE encoding/decoding for Wan model. #11414
Conversation
Hi @a-r-r-o-w @yiyixuxu, could you please help review this patch? Thanks!
I hope the inline comment makes sense to you~
@@ -677,42 +677,7 @@ def __init__(
    attn_scales: List[float] = [],
    temperal_downsample: List[bool] = [False, True, True],
    dropout: float = 0.0,
    latents_mean: List[float] = [
These function parameters are not being used. They have been removed in this patch but can be added back at any time if needed.
batch_size = 2
num_frames = 9
num_channels = 3
sizes = (640, 480)
Add another input because the (16, 16) tensor is too small for tiling operations.
Let's try to reduce the size as much as possible because these tests should not cause unexpected slowdowns in the CI. While enabling tiling, you can set a different tile width/height and stride than the defaults of 256 and 192. (128, 128) would be good, with the tile size being 96, 96 and the stride being 64, 64, or something similar/lower (see the sketch below).
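For example, the test could enable tiling with smaller tiles along these lines (a hedged sketch; the keyword names follow other diffusers video VAEs and may differ slightly for Wan):
model.enable_tiling(
    tile_sample_min_height=96,
    tile_sample_min_width=96,
    tile_sample_stride_height=64,
    tile_sample_stride_width=64,
)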
That's a good idea! Done.
self.assertLess(
    (output_without_tiling.detach().cpu().numpy() - output_with_tiling.detach().cpu().numpy()).max(),
    0.5,
On my machine, this value is approximately 0.404, and IIRC the average absolute value of these arrays is less than 0.01, which gives me reasonable confidence that the implementation is correct.
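As an illustration (not code from the PR), both quantities mentioned above can be checked like this:
diff = (output_without_tiling - output_with_tiling).abs()
print(f"max diff: {diff.max().item():.3f}")  # reported as ~0.404
print(f"mean |output|: {output_without_tiling.abs().mean().item():.4f}")  # reported as < 0.01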
Thanks for working on this @c8ef! Most of the changes look good to me. Will have to verify the visual quality once before we merge -- I can look into that after some of the reviews here are addressed 🤗
@@ -677,42 +677,7 @@ def __init__(
    attn_scales: List[float] = [],
    temperal_downsample: List[bool] = [False, True, True],
    dropout: float = 0.0,
    latents_mean: List[float] = [
Indeed, it might not be used in the diffusers codebase at the moment, but it is being used downstream in a few repositories (example). Removing it would break downstream code, so we should keep it anyway.
What you could instead do here to reduce LOC is wrap these two parameters in a non-format block and condense the list into a single line:
# fmt: off
latents_mean: List[float] = ...
latents_std: List[float] = ...
# fmt: on
I added the parameters back~
        2.8251,
        1.9160,
    ],
    spatial_compression_ratio: int = 8,
Let's not add a new parameter now because it will lead to an unnecessary config warning.
Done.
@@ -730,6 +695,58 @@ def __init__(
        base_dim, z_dim, dim_mult, num_res_blocks, attn_scales, self.temperal_upsample, dropout
    )

    self.spatial_compression_ratio = spatial_compression_ratio
Instead, let's set this attribute based on the init parameters; the same logic as used in the pipeline can be applied here. Let's also add temporal_compression_ratio as 2 raised to the power of the number of True values in temperal_downsample, along the lines of the sketch below.
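A rough sketch of what that could look like inside __init__ (this mirrors the scale-factor logic used in the Wan pipeline; treat it as an assumption rather than the final code):
# Derive the ratios from existing init parameters instead of adding a new config field.
self.temperal_downsample = temperal_downsample
# Assumption: every encoder stage downsamples spatially, as in the pipeline's vae_scale_factor logic.
self.spatial_compression_ratio = 2 ** len(self.temperal_downsample)
# Temporal downsampling only happens in the stages where temperal_downsample is True.
self.temporal_compression_ratio = 2 ** sum(self.temperal_downsample)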
Done.
@@ -785,7 +802,11 @@ def encode(
            The latent representations of the encoded videos. If `return_dict` is True, a
            [`~models.autoencoder_kl.AutoencoderKLOutput`] is returned, otherwise a plain `tuple` is returned.
        """
        h = self._encode(x)
        _, _, _, height, width = x.shape
Let's make sure to support use_slicing here as well
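For reference, batch-wise slicing in other diffusers VAEs typically follows this pattern inside encode() (a hedged sketch, not necessarily the exact code that landed here):
if self.use_slicing and x.shape[0] > 1:
    # Encode one sample at a time and concatenate the latents along the batch dimension.
    encoded_slices = [self._encode(x_slice) for x_slice in x.split(1)]
    h = torch.cat(encoded_slices)
else:
    h = self._encode(x)
posterior = DiagonalGaussianDistribution(h)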
Done. Test case added.
Gentle ping~
The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.
Thanks for the updates after the last review! Most changes LGTM, but some further refactoring is needed.
@@ -746,7 +817,7 @@ def _count_conv3d(model):
        self._enc_conv_idx = [0]
        self._enc_feat_map = [None] * self._enc_conv_num

    def _encode(self, x: torch.Tensor) -> torch.Tensor:
    def vanilla_encode(self, x: torch.Tensor) -> torch.Tensor:
This logic should not be a separate method. Previously, the behaviour was to perform temporal tiling by default. This was incorrectly done and should've been opt-in, but we merged the Wan changes without this being addressed. So, now we need to maintain it that way by keeping temporal tiling enabled by default for both cases (whether enable_tiling has actually been called by the user or not).
The way I would rewrite this:
- Let encode/decode handle the batch-wise slicing if enabled (your implementation is correct in this regard)
- Let _encode/_decode handle the spatial tiling based on whether the user enabled it or not (your implementation is correct in this regard)
- Instead of the vanilla_* methods, move the temporal tiling logic into _encode, _decode, tiled_encode and tiled_decode, where temporal tiling should be enabled by default to maintain backwards compatibility (you can introduce two other flags called use_framewise_encoding and use_framewise_decoding that are True by default; when they are False, feel free to simply raise a NotImplementedError unless you'd like to give that a try too :) ref, ref2
A rough sketch of this structure is shown below.
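Here is that sketch for the encode path (method and attribute names such as use_framewise_encoding, tile_sample_min_height, and the _framewise_encode helper are illustrative assumptions, not necessarily what the merged code uses):
def _encode(self, x: torch.Tensor) -> torch.Tensor:
    _, _, num_frames, height, width = x.shape

    # Spatial tiling is opt-in via enable_tiling().
    if self.use_tiling and (width > self.tile_sample_min_width or height > self.tile_sample_min_height):
        return self.tiled_encode(x)

    if not self.use_framewise_encoding:
        # Whole-clip encoding is left unimplemented, as suggested in the review.
        raise NotImplementedError("use_framewise_encoding=False is not supported yet.")

    # Frame-wise (temporal) encoding stays the default to preserve the previous behaviour.
    return self._framewise_encode(x)  # hypothetical helper wrapping the feat-cache loop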
Thank you for your detailed guidance! I learned a lot from it and will refactor the code in the coming week.
Based on review feedback, I removed the vanilla_* functions and moved the logic into _encode and _decode. Initially, I wanted to create something similar to temporal_tiled_decode in the Hunyuan VAE, but I discovered a small problem with the feat cache mechanism in the Wan VAE when using this approach (or maybe I misunderstood something). As a result, I modified the implementation to resemble cogvideo. The latest patch also removes some unused variables and functions that were added in a previous patch.
@@ -764,9 +837,6 @@ def _encode(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.cat([out, out_], 2)

        enc = self.quant_conv(out)
        mu, logvar = enc[:, : self.z_dim, :, :, :], enc[:, self.z_dim :, :, :, :]
        enc = torch.cat([mu, logvar], dim=1)
This is a no-op.
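A quick way to convince yourself of that (illustrative, not part of the PR): splitting a tensor along dim=1 and concatenating the pieces back along dim=1 reproduces the original tensor.
import torch

z_dim = 16
enc = torch.randn(1, 2 * z_dim, 4, 8, 8)
mu, logvar = enc[:, :z_dim], enc[:, z_dim:]
assert torch.equal(torch.cat([mu, logvar], dim=1), enc)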
Nice find!
Hi @a-r-r-o-w, could you please take another look?
Thanks for addressing the reviews, really awesome work!
Visually, the code looks correct to me, but I need to verify the results before/after numerically to know for sure that there is no regression in the behaviour. Currently I'm super busy with something but I'll try to merge this after the weekend (sorry for the delay!)
Thanks for the awesome work! The changes are good to merge
Here's a minimal script I used for testing:
import argparse
import torch
from diffusers import AutoencoderKLWan
from diffusers.video_processor import VideoProcessor
from diffusers.utils import export_to_video, load_video


@torch.no_grad()
def main(args):
    height = 480
    width = 768

    torch.cuda.reset_peak_memory_stats()

    vae = AutoencoderKLWan.from_pretrained("Wan-AI/Wan2.1-T2V-1.3B-Diffusers", subfolder="vae", torch_dtype=torch.bfloat16)
    vae.to("cuda")

    if args.enable_slicing:
        vae.enable_slicing()
    if args.enable_tiling:
        vae.enable_tiling()

    video_processor = VideoProcessor(vae_latent_channels=vae.config.z_dim)

    video = load_video("inputs/peter-dance.mp4")[::2][:81]
    batch_size = 1
    if args.create_video_batch:
        batch_size = 2
    video = [video] * batch_size
    video = video_processor.preprocess_video(video, height, width)
    video = video.to("cuda", dtype=torch.bfloat16)

    encoded = vae.encode(video).latent_dist.sample(generator=torch.Generator().manual_seed(42))
    print(f"Encoded shape: {encoded.shape}")
    print(f"Max memory (encode): {torch.cuda.max_memory_allocated() / 1024**3:.3f} GB")

    torch.cuda.reset_peak_memory_stats()

    decoded = vae.decode(encoded).sample
    print(f"Decoded shape: {decoded.shape}")
    print(f"Max memory (decode): {torch.cuda.max_memory_allocated() / 1024**3:.3f} GB")

    videos = video_processor.postprocess_video(decoded, output_type="pil")
    for i in range(batch_size):
        export_to_video(videos[i], f"output{i}.mp4", fps=16)


def get_args():
    parser = argparse.ArgumentParser()
    parser.add_argument("--enable_tiling", action="store_true")
    parser.add_argument("--enable_slicing", action="store_true")
    parser.add_argument("--create_video_batch", action="store_true")
    return parser.parse_args()


if __name__ == "__main__":
    args = get_args()
    main(args)
(nightly-venv) (nightly-venv) aryan@hf-dgx-01:~/work/diffusers$ python3 dump13.py
Encoded shape: torch.Size([1, 16, 21, 60, 96])
Max memory (encode): 2.831 GB
Decoded shape: torch.Size([1, 3, 81, 480, 768])
Max memory (decode): 4.451 GB
(nightly-venv) (nightly-venv) aryan@hf-dgx-01:~/work/diffusers$ python3 dump13.py --enable_tiling
Encoded shape: torch.Size([1, 16, 21, 60, 96])
Max memory (encode): 0.846 GB
Decoded shape: torch.Size([1, 3, 81, 480, 768])
Max memory (decode): 1.298 GB
(nightly-venv) (nightly-venv) aryan@hf-dgx-01:~/work/diffusers$ python3 dump13.py --enable_tiling --create_video_batch
Encoded shape: torch.Size([2, 16, 21, 60, 96])
Max memory (encode): 1.453 GB
Decoded shape: torch.Size([2, 3, 81, 480, 768])
Max memory (decode): 2.358 GB
(nightly-venv) (nightly-venv) aryan@hf-dgx-01:~/work/diffusers$ python3 dump13.py --enable_tiling --create_video_batch --enable_slicing
Encoded shape: torch.Size([2, 16, 21, 60, 96])
Max memory (encode): 1.020 GB
Decoded shape: torch.Size([2, 3, 81, 480, 768])
Max memory (decode): 1.637 GB
The results are as expected. Additionally, the outputs without any tiling/slicing on this branch match those of main. I'll merge once the tests pass after the latest suggestions are applied.
Thank you for your thorough testing! I appreciate it, as these code snippets are a great starting point for me to delve deeper into the details. Thanks again!
What does this PR do?
Implement tiled VAE encoding/decoding for Wan model.
Who can review?
Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.