VideoElevator: Elevating Video Generation Quality with Versatile
Text-to-Image Diffusion Models
- URL: http://arxiv.org/abs/2403.05438v1
- Date: Fri, 8 Mar 2024 16:44:54 GMT
- Title: VideoElevator: Elevating Video Generation Quality with Versatile
Text-to-Image Diffusion Models
- Authors: Yabo Zhang, Yuxiang Wei, Xianhui Lin, Zheng Hui, Peiran Ren, Xuansong
Xie, Xiangyang Ji, Wangmeng Zuo
- Abstract summary: Text-to-video diffusion models (T2V) still lag far behind in frame quality and text alignment.
We introduce VideoElevator, a training-free and plug-and-play method, which elevates the performance of T2V using superior capabilities of T2I.
- Score: 94.25084162939488
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text-to-image diffusion models (T2I) have demonstrated unprecedented
capabilities in creating realistic and aesthetic images. In contrast,
text-to-video diffusion models (T2V) still lag far behind in frame quality and
text alignment, owing to insufficient quality and quantity of training videos.
In this paper, we introduce VideoElevator, a training-free and plug-and-play
method, which elevates the performance of T2V using superior capabilities of
T2I. Different from conventional T2V sampling (i.e., temporal and spatial
modeling), VideoElevator explicitly decomposes each sampling step into temporal
motion refining and spatial quality elevating. Specifically, temporal motion
refining uses encapsulated T2V to enhance temporal consistency, followed by
inverting to the noise distribution required by T2I. Then, spatial quality
elevating harnesses inflated T2I to directly predict a less noisy latent, adding
more photo-realistic details. We have conducted experiments on extensive
prompts under various combinations of T2V and T2I models. The results show that
VideoElevator not only improves the performance of T2V baselines with
foundational T2I, but also facilitates stylistic video synthesis with
personalized T2I. Our code is available at
https://github.com/YBYBZhang/VideoElevator.
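To make the two-stage decomposition concrete, below is a minimal Python sketch of one sampling step, assuming PyTorch and stand-in callables (t2v_refine, ddim_invert, and t2i_denoise are hypothetical names for illustration, not the repository's actual API):

```python
import torch

def videoelevator_step(z_t, t, t_prev, t2v_refine, ddim_invert, t2i_denoise):
    """One sampling step decomposed into two stages, per the abstract."""
    # Stage 1: temporal motion refining -- the encapsulated T2V enhances
    # temporal consistency across frames.
    z_consistent = t2v_refine(z_t, t)
    # Invert back to the noise distribution required by the T2I model.
    z_renoised = ddim_invert(z_consistent, t)
    # Stage 2: spatial quality elevating -- the inflated T2I (its 2D layers
    # applied frame-wise) directly predicts the less noisy latent at t_prev.
    return t2i_denoise(z_renoised, t, t_prev)

# Toy usage on a (frames, channels, height, width) latent with no-op denoisers.
z = torch.randn(8, 4, 64, 64)
noop = lambda z, *ts: z
print(videoelevator_step(z, 50, 49, noop, noop, noop).shape)  # torch.Size([8, 4, 64, 64])
```

Note the design implication: the T2I model only ever sees latents re-noised to the distribution it was trained on, which is why it can be swapped plug-and-play for any compatible checkpoint, including personalized ones.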
Related papers
- Step-Video-T2V Technical Report: The Practice, Challenges, and Future of Video Foundation Model [133.01510927611452]
We present Step-Video-T2V, a text-to-video pre-trained model with 30B parameters and the ability to generate videos up to 204 frames in length.
A deep compression Variational Autoencoder, Video-VAE, is designed for video generation tasks, achieving 16x16 spatial and 8x temporal compression ratios.
Step-Video-T2V's performance is evaluated on a novel video generation benchmark, Step-Video-T2V-Eval, demonstrating its state-of-the-art text-to-video quality.
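As a quick sanity check on the stated compression ratios, a video latent shrinks by 16x16 spatially and 8x temporally; the concrete input resolution below is an assumed example, not taken from the summary:

```python
# Sanity-checking the stated Video-VAE ratios (16x16 spatial, 8x temporal).
frames, height, width = 204, 544, 992   # illustrative input size only
lat_t, lat_h, lat_w = frames // 8, height // 16, width // 16
print(lat_t, lat_h, lat_w)  # 25 34 62: each latent cell covers an 8x16x16 block
```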
arXiv Detail & Related papers (2025-02-14T15:58:10Z)
- PhyT2V: LLM-Guided Iterative Self-Refinement for Physics-Grounded Text-to-Video Generation [4.98706730396778]
We present PhyT2V, a new data-independent T2V technique that expands the current T2V model's capability of video generation to out-of-distribution domains.
Our experiments show that PhyT2V improves existing T2V models' adherence to real-world physical rules by 2.3x, and achieves 35% improvement compared to T2V prompt enhancers.
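The summary implies a generate-critique-revise loop. A hedged sketch of such a loop follows, where generate_video, critique_physics, and revise_prompt are hypothetical stand-ins rather than PhyT2V's real interfaces:

```python
def phyt2v_loop(prompt, generate_video, critique_physics, revise_prompt, rounds=3):
    """Generate a video, let an LLM critique physical plausibility, revise, repeat."""
    best_video, best_score = None, float("-inf")
    for _ in range(rounds):
        video = generate_video(prompt)                      # off-the-shelf T2V model
        score, feedback = critique_physics(prompt, video)   # LLM-judged physics score
        if score > best_score:
            best_video, best_score = video, score
        prompt = revise_prompt(prompt, feedback)            # LLM rewrites the prompt
    return best_video

# Toy demo with string stand-ins for the model, critic, and rewriter.
print(phyt2v_loop("a ball bouncing", lambda p: p.upper(),
                  lambda p, v: (len(v), "add gravity"), lambda p, f: p + ", " + f))
```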
arXiv Detail & Related papers (2024-11-30T22:02:12Z)
- Still-Moving: Customized Video Generation without Customized Video Data [81.09302547183155]
We introduce Still-Moving, a novel framework for customizing a text-to-video (T2V) model.
The framework applies to the prominent T2V design where the video model is built over a text-to-image (T2I) model.
We train lightweight Spatial Adapters that adjust the features produced by the injected T2I layers.
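The summary does not specify the adapter architecture; one plausible minimal form is a zero-initialized residual bottleneck applied to a T2I layer's output, sketched here purely as an illustration:

```python
import torch
import torch.nn as nn

class SpatialAdapter(nn.Module):
    """A guess at a lightweight adapter: residual down/up projection."""
    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, hidden)
        self.up = nn.Linear(hidden, dim)
        nn.init.zeros_(self.up.weight)  # starts as identity, so training is stable
        nn.init.zeros_(self.up.bias)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return feats + self.up(torch.relu(self.down(feats)))

feats = torch.randn(2, 77, 320)          # (batch, tokens, channels), shapes illustrative
print(SpatialAdapter(320)(feats).shape)  # unchanged: torch.Size([2, 77, 320])
```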
arXiv Detail & Related papers (2024-07-11T17:06:53Z)
- Fuse Your Latents: Video Editing with Multi-source Latent Diffusion Models [66.12367865049572]
Latent Diffusion Models (LDMs) are renowned for their powerful capabilities in image and video synthesis.
We propose FLDM, a framework that achieves high-quality text-to-video (T2V) editing by integrating various T2I and T2V LDMs.
This paper is the first to reveal that T2I and T2V LDMs can complement each other in terms of structure and temporal consistency.
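One simple way to realize such complementarity is to fuse the two models' per-step latent predictions. The convex combination below is an assumption made for illustration, not necessarily FLDM's exact fusion rule:

```python
import torch

def fused_denoise_step(z_t, t, t2i_step, t2v_step, alpha=0.5):
    z_image = t2i_step(z_t, t)  # frame-wise T2I denoising: structure and detail
    z_video = t2v_step(z_t, t)  # T2V denoising: temporal consistency
    # Weighted blend of the two predicted latents at this step.
    return alpha * z_image + (1 - alpha) * z_video

z = torch.randn(8, 4, 64, 64)
step = lambda z, t: z * 0.99  # no-op stand-in for a real denoising step
print(fused_denoise_step(z, 50, step, step).shape)  # torch.Size([8, 4, 64, 64])
```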
arXiv Detail & Related papers (2023-10-25T06:35:01Z)
- SimDA: Simple Diffusion Adapter for Efficient Video Generation [102.90154301044095]
We propose a Simple Diffusion Adapter (SimDA) that fine-tunes only 24M out of 1.1B parameters of a strong T2I model, adapting it to video generation in a parameter-efficient way.
In addition to T2V generation in the wild, SimDA could also be utilized in one-shot video editing with only 2 minutes of tuning.
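The quoted parameter budget is easy to verify with quick arithmetic: only about 2% of the backbone is trained.

```python
# Fraction of parameters tuned, from the figures stated above (24M of 1.1B).
tuned, total = 24e6, 1.1e9
print(f"{tuned / total:.1%} of parameters are tuned")  # 2.2%
```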
arXiv Detail & Related papers (2023-08-18T17:58:44Z)
- Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation [31.882356164068753]
To reproduce the success of text-to-image (T2I) generation, recent works in text-to-video (T2V) generation employ massive video datasets for training.
We propose Tune-A-Video, which is capable of producing temporally-coherent videos across various applications.
arXiv Detail & Related papers (2022-12-22T09:43:36Z)
- Make-A-Video: Text-to-Video Generation without Text-Video Data [69.20996352229422]
Make-A-Video is an approach for translating the tremendous recent progress in Text-to-Image (T2I) generation to Text-to-Video (T2V).
We design a simple yet effective way to build on T2I models with novel and effective spatial-temporal modules.
In all aspects, spatial and temporal resolution, faithfulness to text, and quality, Make-A-Video sets the new state-of-the-art in text-to-video generation.
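A common way to build spatial-temporal modules on top of a 2D T2I backbone is a factorized (pseudo-3D) convolution: a 2D spatial convolution applied per frame followed by a 1D temporal convolution applied per spatial location. The layer below is an illustrative sketch under that assumption, not Make-A-Video's exact module:

```python
import torch
import torch.nn as nn

class Pseudo3DConv(nn.Module):
    """Factorized spatial (2D) + temporal (1D) convolution over video tensors."""
    def __init__(self, channels: int):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.temporal = nn.Conv1d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, t, h, w = x.shape
        # Spatial conv, applied independently to each frame.
        y = self.spatial(x.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w))
        y = y.reshape(b, t, c, h, w)
        # Temporal conv, applied independently at each spatial location.
        y = y.permute(0, 3, 4, 2, 1).reshape(b * h * w, c, t)
        y = self.temporal(y)
        return y.reshape(b, h, w, c, t).permute(0, 3, 4, 1, 2)

x = torch.randn(1, 8, 16, 32, 32)  # (batch, channels, frames, height, width)
print(Pseudo3DConv(8)(x).shape)    # torch.Size([1, 8, 16, 32, 32])
```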
arXiv Detail & Related papers (2022-09-29T13:59:46Z)