Model/Pipeline/Scheduler description
Text-to-video diffusion models enable the generation of high-quality videos from text prompts, making it easy to create diverse and individualized content. However, existing approaches mostly focus on short video generation (typically 16 or 24 frames) and require hard cuts when naively extended to long video synthesis. StreamingT2V enables autoregressive generation of long videos of 80, 240, 600, 1200 or more frames with smooth transitions. The key components are listed below, with illustrative sketches after the list:
- A ControlNet-like module that conditions the current generation on frames extracted from the previous chunk, using a cross-attention mechanism to integrate their features into the UNet's skip residual features.
- An IP-Adapter-like module that extracts high-level scene and object features from a fixed anchor frame in the first video chunk; these features are mixed into the prompt embeddings before the spatial cross-attention.
- An SDEdit-based video refinement stage with randomized sampling of overlapping frame chunks at each denoising timestep.
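
For context, a minimal sketch of the autoregressive chunking loop. The `generate_chunk` callable, chunk/overlap sizes and tensor layout here are illustrative assumptions, not the repo's actual API:

```python
import torch

def generate_long_video(generate_chunk, prompt, num_chunks=5, overlap=8):
    # First chunk is generated from the text prompt alone.
    chunks = [generate_chunk(prompt, cond_frames=None)]
    for _ in range(num_chunks - 1):
        # Condition the next chunk on the last `overlap` frames of the previous one.
        cond = chunks[-1][-overlap:]
        chunks.append(generate_chunk(prompt, cond_frames=cond))
    # Concatenate along the frame axis; assumes chunks of shape (frames, C, H, W).
    return torch.cat(chunks, dim=0)
```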
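A rough sketch of the ControlNet-like conditioning module. Module name, dimensions and the exact injection point are assumptions; the real module likely differs in how the added features are gated and in its temporal layers:

```python
import torch
import torch.nn as nn

class SkipResidualCrossAttention(nn.Module):
    """Toy ControlNet/CAM-style block: UNet skip residual tokens attend to
    features extracted from the previous chunk's conditioning frames."""
    def __init__(self, dim, cond_dim, num_heads=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, kdim=cond_dim,
                                          vdim=cond_dim, batch_first=True)

    def forward(self, skip_feats, cond_feats):
        # skip_feats: (B, N, dim) flattened UNet skip residual features
        # cond_feats: (B, M, cond_dim) features of the previous chunk's frames
        attended, _ = self.attn(self.norm(skip_feats), cond_feats, cond_feats)
        return skip_feats + attended  # add back into the skip connection
```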
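A similar sketch for the IP-Adapter-like anchor-frame conditioning. The linear projection and simple token concatenation are assumptions; the actual module may mix the features into the prompt embeddings differently:

```python
import torch
import torch.nn as nn

class AnchorFrameMixer(nn.Module):
    """Toy APM-style mixer: image features from the fixed anchor frame are
    projected into the text-embedding space and prepended to the prompt tokens
    consumed by the spatial cross-attention."""
    def __init__(self, text_dim, image_dim, num_image_tokens=4):
        super().__init__()
        self.proj = nn.Linear(image_dim, text_dim * num_image_tokens)
        self.num_image_tokens = num_image_tokens
        self.text_dim = text_dim

    def forward(self, prompt_embeds, anchor_embeds):
        # prompt_embeds: (B, T, text_dim), anchor_embeds: (B, image_dim)
        img_tokens = self.proj(anchor_embeds).view(-1, self.num_image_tokens, self.text_dim)
        return torch.cat([img_tokens, prompt_embeds], dim=1)
```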
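And a sketch of the refinement stage's per-timestep chunk handling. `denoise_chunk`, the chunk length and the stride are assumptions, and the actual randomized blending may merge overlaps differently:

```python
import random
import torch

def refine_step(denoise_chunk, latents, t, chunk_len=24, stride=16):
    # latents: (F, C, H, W) latent video at denoising timestep t.
    # Each chunk is denoised independently; inside every overlap a random seam
    # decides which chunk's result is kept, so seams move across timesteps.
    num_frames = latents.shape[0]
    out = latents.clone()
    prev_end = 0
    for start in range(0, num_frames - chunk_len + 1, stride):
        chunk = denoise_chunk(latents[start:start + chunk_len], t)
        if start == 0:
            out[:chunk_len] = chunk
        else:
            overlap = prev_end - start                 # frames shared with the previous chunk
            seam = start + random.randint(0, overlap)  # random cut inside the overlap
            out[seam:start + chunk_len] = chunk[seam - start:]
        prev_end = start + chunk_len
    return out
```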
Open source status
- The model implementation is available.
- The model weights are available (Only relevant if addition is not a scheduler).
Provide useful links for the implementation
- Code: https://github.com/Picsart-AI-Research/StreamingT2V
- Weights: https://huggingface.co/PAIR/StreamingT2V
- Point of Contact Author: @hpoghos