Fairy: Fast Parallelized Instruction-Guided Video-to-Video Synthesis
- URL: http://arxiv.org/abs/2312.13834v1
- Date: Wed, 20 Dec 2023 01:49:47 GMT
- Title: Fairy: Fast Parallelized Instruction-Guided Video-to-Video Synthesis
- Authors: Bichen Wu, Ching-Yao Chuang, Xiaoyan Wang, Yichen Jia, Kapil
Krishnakumar, Tong Xiao, Feng Liang, Licheng Yu, Peter Vajda
- Abstract summary: We introduce Fairy, a minimalist yet robust adaptation of image-editing diffusion models, enhancing them for video editing applications.
Our approach centers on the concept of anchor-based cross-frame attention, a mechanism that implicitly propagates diffusion features across frames.
A comprehensive user study, involving 1000 generated samples, confirms that our approach delivers superior quality, decisively outperforming established methods.
- Score: 51.44526084095757
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce Fairy, a minimalist yet robust adaptation of
image-editing diffusion models, enhancing them for video editing applications.
Our approach centers on the concept of anchor-based cross-frame attention, a
mechanism that implicitly propagates diffusion features across frames, ensuring
superior temporal coherence and high-fidelity synthesis. Fairy not only
addresses limitations of previous models, including memory and processing
speed, but also improves temporal consistency through a unique data
augmentation strategy that renders the model equivariant to affine transformations
in both source and target images. Remarkably efficient, Fairy generates
120-frame 512x384 videos (4-second duration at 30 FPS) in just 14 seconds,
outpacing prior works by at least 44x. A comprehensive user study, involving
1000 generated samples, confirms that our approach delivers superior quality,
decisively outperforming established methods.
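As an illustration of the core mechanism, below is a minimal sketch of what anchor-based cross-frame attention could look like: every frame's queries attend to keys and values gathered from a small set of anchor frames, so the anchors' diffusion features are implicitly propagated to every frame. The module name, anchor selection, and tensor shapes are illustrative assumptions, not the paper's released implementation.

```python
# Minimal sketch (not the authors' code): cross-frame attention in which every
# frame's queries attend to keys/values taken from a few "anchor" frames, so
# diffusion features are implicitly shared across the whole clip.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnchorCrossFrameAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)

    def forward(self, frame_feats: torch.Tensor, anchor_idx: list[int]) -> torch.Tensor:
        # frame_feats: (num_frames, tokens, dim) diffusion features for one video
        f, t, d = frame_feats.shape
        anchors = frame_feats[anchor_idx]                 # (num_anchors, tokens, dim)
        kv = anchors.reshape(1, -1, d).expand(f, -1, -1)  # shared K/V pool for every frame
        q, k, v = self.to_q(frame_feats), self.to_k(kv), self.to_v(kv)

        # split heads: (frames, heads, tokens, head_dim), then standard attention
        def split(x: torch.Tensor) -> torch.Tensor:
            return x.view(f, -1, self.num_heads, d // self.num_heads).transpose(1, 2)

        out = F.scaled_dot_product_attention(split(q), split(k), split(v))
        out = out.transpose(1, 2).reshape(f, t, d)
        return self.to_out(out)

# Usage: features for 120 frames attending to three evenly spaced anchor frames.
feats = torch.randn(120, 24 * 32, 320)
attn = AnchorCrossFrameAttention(dim=320)
edited = attn(feats, anchor_idx=[0, 60, 119])
```

Because every frame reads from the same anchor-derived key/value pool, edits stay consistent across frames while the frames themselves can be processed in parallel batches, which fits the parallelized design the title refers to.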
Related papers
- Optical-Flow Guided Prompt Optimization for Coherent Video Generation [51.430833518070145]
We propose a framework called MotionPrompt that guides the video generation process via optical flow.
We optimize learnable token embeddings during reverse sampling steps by using gradients from a trained discriminator applied to random frame pairs.
This approach allows our method to generate visually coherent video sequences that closely reflect natural motion dynamics, without compromising the fidelity of the generated content.
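A rough sketch of how such guidance could be wired up is shown below; the `denoiser`, `discriminator`, and diffusers-style `scheduler` interfaces are assumptions for illustration (the discriminator's optical-flow computation is abstracted away), not the paper's actual code.

```python
# Rough sketch (hypothetical interfaces): refine learnable prompt-token embeddings
# during reverse diffusion sampling, using gradients from a discriminator applied
# to a random pair of predicted frames.
import random
import torch

def guided_reverse_sampling(denoiser, discriminator, scheduler, latents, prompt_tokens,
                            lr: float = 1e-3):
    # learnable token embeddings, optimized on the fly during sampling
    prompt_tokens = prompt_tokens.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([prompt_tokens], lr=lr)
    for t in scheduler.timesteps:
        # predict noise and the corresponding clean-frame estimate
        noise_pred = denoiser(latents, t, prompt_tokens)
        frames_pred = scheduler.step(noise_pred, t, latents).pred_original_sample
        # discriminator scores the plausibility of motion between a random frame pair
        i, j = random.sample(range(frames_pred.shape[0]), 2)
        loss = -discriminator(frames_pred[i], frames_pred[j]).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
        # continue ordinary reverse sampling with the updated embeddings
        with torch.no_grad():
            noise_pred = denoiser(latents, t, prompt_tokens)
            latents = scheduler.step(noise_pred, t, latents).prev_sample
    return latents
```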
arXiv Detail & Related papers (2024-11-23T12:26:52Z)
- Fast and Memory-Efficient Video Diffusion Using Streamlined Inference [41.505829393818274]
Current video diffusion models exhibit demanding computational requirements and high peak memory usage.
We present Streamlined Inference, which leverages the temporal and spatial properties of video diffusion models.
Our approach significantly reduces peak memory and computational overhead, making it feasible to generate high-quality videos on a single consumer GPU.
arXiv Detail & Related papers (2024-11-02T07:52:18Z)
- ViBiDSampler: Enhancing Video Interpolation Using Bidirectional Diffusion Sampler [53.98558445900626]
Current image-to-video diffusion models, while powerful in generating videos from a single frame, need adaptation for two-frame conditioned generation.
We introduce a novel bidirectional sampling strategy that addresses the off-manifold issues arising from such adaptation, without requiring extensive re-noising or fine-tuning.
Our method employs sequential sampling along both forward and backward paths, conditioned on the start and end frames, respectively, ensuring more coherent and on-manifold generation of intermediate frames.
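A schematic of a single denoising step under this idea is sketched below; fusing the forward and backward passes by simple averaging is a simplification chosen for illustration, not claimed to be the paper's exact rule, and the `denoiser`/`scheduler` interfaces are assumptions.

```python
# Schematic sketch only: one denoising step combining a forward pass conditioned on
# the start frame with a backward pass (time-reversed latents) conditioned on the
# end frame.
import torch

def bidirectional_step(denoiser, scheduler, latents, t, start_frame, end_frame):
    # latents: (num_frames, C, H, W) noisy video latents at timestep t
    fwd = denoiser(latents, t, cond=start_frame)                # forward path
    bwd = denoiser(latents.flip(0), t, cond=end_frame).flip(0)  # backward path
    noise_pred = 0.5 * (fwd + bwd)                              # fuse both paths (illustrative)
    return scheduler.step(noise_pred, t, latents).prev_sample
```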
arXiv Detail & Related papers (2024-10-08T03:01:54Z)
- Upscale-A-Video: Temporal-Consistent Diffusion Model for Real-World Video Super-Resolution [65.91317390645163]
Upscale-A-Video is a text-guided latent diffusion framework for video upscaling.
It ensures temporal coherence through two key mechanisms: locally, it integrates temporal layers into the U-Net and VAE-Decoder to maintain consistency within short sequences; globally, a training-free flow-guided recurrent latent propagation module propagates and fuses latents across the entire sequence.
It also offers greater flexibility by allowing text prompts to guide texture creation and adjustable noise levels to balance restoration and generation.
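As a rough illustration of the local mechanism, the sketch below shows a residual temporal attention layer of the kind that could be interleaved with the spatial blocks of a U-Net or VAE decoder; the class name and tensor layout are assumptions, not the released architecture.

```python
# Illustrative sketch (not the actual architecture): a residual layer that attends
# over the frame axis at each spatial location, keeping short frame windows consistent.
import torch
import torch.nn as nn

class TemporalAttentionLayer(nn.Module):
    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width)
        b, f, c, h, w = x.shape
        tokens = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, f, c)  # one sequence per pixel
        q = self.norm(tokens)
        y, _ = self.attn(q, q, q)
        y = (tokens + y).reshape(b, h, w, f, c).permute(0, 3, 4, 1, 2)
        return y.contiguous()
```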
arXiv Detail & Related papers (2023-12-11T18:54:52Z)
- Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation [93.18163456287164]
This paper proposes a novel text-guided video-to-video translation framework to adapt image models to videos.
Our framework achieves global style and local texture temporal consistency at a low cost.
arXiv Detail & Related papers (2023-06-13T17:52:23Z)
- Streaming Radiance Fields for 3D Video Synthesis [32.856346090347174]
We present an explicit-grid based method for reconstructing streaming radiance fields for novel view synthesis of real-world dynamic scenes.
Experiments on challenging video sequences demonstrate that our approach achieves a training speed of 15 seconds per frame with competitive rendering quality.
arXiv Detail & Related papers (2022-10-26T16:23:02Z)
- OCSampler: Compressing Videos to One Clip with Single-step Sampling [82.0417131211353]
We propose a framework named OCSampler to explore a compact yet effective video representation with one short clip.
Our basic motivation is that efficient video recognition lies in processing a whole sequence at once rather than picking up frames sequentially.
arXiv Detail & Related papers (2022-01-12T09:50:38Z)
- Robust High-Resolution Video Matting with Temporal Guidance [14.9739044990367]
We introduce a robust, real-time, high-resolution human video matting method that achieves new state-of-the-art performance.
Our method is much lighter than previous approaches and can process 4K at 76 FPS and HD at 104 FPS on an Nvidia GTX 1080Ti GPU.
arXiv Detail & Related papers (2021-08-25T23:48:15Z)