Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation
- URL: http://arxiv.org/abs/2309.03549v1
- Date: Thu, 7 Sep 2023 08:12:58 GMT
- Title: Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation
- Authors: Jiaxi Gu, Shicong Wang, Haoyu Zhao, Tianyi Lu, Xing Zhang, Zuxuan Wu,
Songcen Xu, Wei Zhang, Yu-Gang Jiang, Hang Xu
- Abstract summary: We propose a framework called "Reuse and Diffuse", dubbed $\textit{VidRD}$, to produce more frames following the frames already generated by an LDM.
We also propose a set of strategies for composing video-text data that involve diverse content from multiple existing datasets.
- Score: 92.55296042611886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inspired by the remarkable success of Latent Diffusion Models (LDMs) for
image synthesis, we study LDM for text-to-video generation, which is a
formidable challenge due to the computational and memory constraints during
both model training and inference. A single LDM is usually only capable of
generating a very limited number of video frames. Some existing works rely on
separate prediction models to generate more video frames, but these suffer from
additional training cost and frame-level jittering. In this paper, we
propose a framework called "Reuse and Diffuse" dubbed $\textit{VidRD}$ to
produce more frames following the frames already generated by an LDM.
Conditioned on an initial video clip with a small number of frames, additional
frames are iteratively generated by reusing the original latent features and
following the previous diffusion process. In addition, for the autoencoder used for
translation between pixel space and latent space, we inject temporal layers
into its decoder and fine-tune these layers for higher temporal consistency. We
also propose a set of strategies for composing video-text data that involve
diverse content from multiple existing datasets including video datasets for
action recognition and image-text datasets. Extensive experiments show that our
method achieves good results in both quantitative and qualitative evaluations.
Our project page is available at https://anonymous0x233.github.io/ReuseAndDiffuse/.
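To make the "reuse and diffuse" mechanism described in the abstract concrete, the sketch below shows, in PyTorch, one way to extend a generated clip by reusing previously generated latents while denoising new ones. This is a minimal illustration rather than the authors' implementation: `StubDenoiser`, `extend_clip`, the noise schedule, and the DDIM-style update are all assumptions standing in for VidRD's pretrained video LDM and its actual sampler.

```python
# Minimal sketch (not the authors' released code) of the "reuse and diffuse" idea:
# new frames are denoised from noise while the latents of already-generated
# frames are reused as a fixed condition. The denoiser, the helper names, and
# the DDIM-style schedule below are illustrative assumptions.
import torch
import torch.nn as nn


class StubDenoiser(nn.Module):
    """Stand-in for a pretrained latent video denoiser (assumed interface)."""

    def __init__(self, channels: int = 4):
        super().__init__()
        self.net = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, z: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Predict noise for latents z of shape (B, C, F, H, W) at timestep t.
        return self.net(z)


@torch.no_grad()
def extend_clip(denoiser: nn.Module,
                prev_latents: torch.Tensor,
                num_new_frames: int = 8,
                num_steps: int = 50,
                reuse_frames: int = 4) -> torch.Tensor:
    """Iteratively denoise latents for new frames, conditioned on reused ones.

    prev_latents: (B, C, F, H, W) latents of the clip generated so far.
    The last `reuse_frames` latents are kept fixed as the condition, while
    `num_new_frames` fresh latents are denoised from Gaussian noise.
    """
    b, c, _, h, w = prev_latents.shape
    cond = prev_latents[:, :, -reuse_frames:]               # reused latents
    new = torch.randn(b, c, num_new_frames, h, w,
                      device=prev_latents.device)           # fresh noise
    alphas = torch.linspace(0.9999, 0.98, num_steps,
                            device=prev_latents.device)     # illustrative schedule
    alpha_bars = torch.cumprod(alphas, dim=0)
    for step in reversed(range(num_steps)):
        t = torch.full((b,), step, device=prev_latents.device)
        z = torch.cat([cond, new], dim=2)                   # reuse + diffuse
        eps = denoiser(z, t)[:, :, reuse_frames:]           # noise predicted for new frames
        ab_t = alpha_bars[step]
        ab_prev = alpha_bars[step - 1] if step > 0 else torch.ones((), device=ab_t.device)
        x0 = (new - (1 - ab_t).sqrt() * eps) / ab_t.sqrt()  # predicted clean latents
        new = ab_prev.sqrt() * x0 + (1 - ab_prev).sqrt() * eps  # DDIM-style update
    return torch.cat([prev_latents, new], dim=2)


if __name__ == "__main__":
    denoiser = StubDenoiser()
    seed_clip = torch.randn(1, 4, 8, 32, 32)    # latents of an 8-frame seed clip
    longer = extend_clip(denoiser, seed_clip)
    print(longer.shape)                          # torch.Size([1, 4, 16, 32, 32])
```

The point mirrored from the abstract is that the conditioning latents stay fixed across denoising steps ("reuse") while only the new frames are iteratively denoised ("diffuse"), so no separate prediction model is trained for clip extension.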
Related papers
- LoopAnimate: Loopable Salient Object Animation [19.761865029125524]
LoopAnimate is a novel method for generating videos with consistent start and end frames.
It achieves state-of-the-art performance in both objective metrics, such as fidelity and temporal consistency, and subjective evaluation results.
arXiv Detail & Related papers (2024-04-14T07:36:18Z)
- Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization [52.63845811751936]
Video pre-training is challenging due to the difficulty of modeling its video dynamics.
In this paper, we address such limitations in video pre-training with an efficient video decomposition.
Our framework is both capable of comprehending and generating image and video content, as demonstrated by its performance across 13 multimodal benchmarks.
arXiv Detail & Related papers (2024-02-05T16:30:49Z)
- FusionFrames: Efficient Architectural Aspects for Text-to-Video Generation Pipeline [4.295130967329365]
This paper presents a new two-stage latent diffusion text-to-video generation architecture based on the text-to-image diffusion model.
The design of our model significantly reduces computational costs compared to other masked frame approaches.
We evaluate different configurations of the MoVQ-based video decoding scheme to improve consistency and achieve better PSNR, SSIM, MSE, and LPIPS scores (a minimal sketch of these frame-quality metrics appears after this list).
arXiv Detail & Related papers (2023-11-22T00:26:15Z)
- Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator [59.589919015669274]
This study focuses on zero-shot text-to-video generation with an emphasis on data and cost efficiency.
We propose a novel Free-Bloom pipeline that harnesses large language models (LLMs) as the director to generate a semantically coherent prompt sequence.
We also propose a series of annotative modifications for adapting LDMs in the reverse process, including joint noise sampling, step-aware attention shift, and dual-path.
arXiv Detail & Related papers (2023-09-25T19:42:16Z)
- Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation [93.18163456287164]
This paper proposes a novel text-guided video-to-video translation framework to adapt image models to videos.
Our framework achieves global style and local texture temporal consistency at a low cost.
arXiv Detail & Related papers (2023-06-13T17:52:23Z)
- Align your Latents: High-Resolution Video Synthesis with Latent Diffusion Models [71.11425812806431]
Latent Diffusion Models (LDMs) enable high-quality image synthesis while avoiding excessive compute demands.
Here, we apply the LDM paradigm to high-resolution video generation, a particularly resource-intensive task.
We focus on two relevant real-world applications: Simulation of in-the-wild driving data and creative content creation with text-to-video modeling.
arXiv Detail & Related papers (2023-04-18T08:30:32Z)
- Towards Smooth Video Composition [59.134911550142455]
Video generation requires consistent and persistent frames with dynamic content over time.
This work investigates modeling the temporal relations for composing videos of arbitrary length, from a few frames to even infinitely many, using generative adversarial networks (GANs).
We show that the alias-free operation for single image generation, together with adequately pre-learned knowledge, brings a smooth frame transition without compromising the per-frame quality.
arXiv Detail & Related papers (2022-12-14T18:54:13Z)
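The FusionFrames entry above reports PSNR, SSIM, MSE, and LPIPS; as referenced there, the sketch below shows how two of these frame-quality metrics are commonly computed. It is a generic illustration, not code from any of the listed papers; SSIM and LPIPS are only pointed to in comments, since they typically come from external libraries.

```python
# Minimal sketch of the frame-quality metrics named above (PSNR, MSE; SSIM and
# LPIPS noted in comments). Generic illustration, not code from the listed papers.
import numpy as np


def mse(ref: np.ndarray, gen: np.ndarray) -> float:
    """Mean squared error between a reference and a generated frame (lower is better)."""
    return float(np.mean((ref.astype(np.float64) - gen.astype(np.float64)) ** 2))


def psnr(ref: np.ndarray, gen: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB (higher is better)."""
    err = mse(ref, gen)
    if err == 0:
        return float("inf")
    return float(10.0 * np.log10((data_range ** 2) / err))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
    generated = np.clip(reference + rng.normal(0, 5, reference.shape), 0, 255).astype(np.uint8)
    print(f"MSE:  {mse(reference, generated):.2f}")
    print(f"PSNR: {psnr(reference, generated):.2f} dB")
    # SSIM: e.g. skimage.metrics.structural_similarity(reference, generated, channel_axis=-1)
    # LPIPS: e.g. lpips.LPIPS(net='alex')(tensor_ref, tensor_gen) on tensors scaled to [-1, 1]
```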