Make-A-Video: Text-to-Video Generation without Text-Video Data
- URL: http://arxiv.org/abs/2209.14792v1
- Date: Thu, 29 Sep 2022 13:59:46 GMT
- Title: Make-A-Video: Text-to-Video Generation without Text-Video Data
- Authors: Uriel Singer, Adam Polyak, Thomas Hayes, Xi Yin, Jie An, Songyang
Zhang, Qiyuan Hu, Harry Yang, Oron Ashual, Oran Gafni, Devi Parikh, Sonal
Gupta, Yaniv Taigman
- Abstract summary: Make-A-Video is an approach for translating the tremendous recent progress in Text-to-Image (T2I) generation to Text-to-Video (T2V).
We design a simple yet effective way to build on T2I models with novel and effective spatial-temporal modules.
In all aspects, spatial and temporal resolution, faithfulness to text, and quality, Make-A-Video sets the new state-of-the-art in text-to-video generation.
- Score: 69.20996352229422
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose Make-A-Video -- an approach for directly translating the
tremendous recent progress in Text-to-Image (T2I) generation to Text-to-Video
(T2V). Our intuition is simple: learn what the world looks like and how it is
described from paired text-image data, and learn how the world moves from
unsupervised video footage. Make-A-Video has three advantages: (1) it
accelerates training of the T2V model (it does not need to learn visual and
multimodal representations from scratch), (2) it does not require paired
text-video data, and (3) the generated videos inherit the vastness (diversity
in aesthetic, fantastical depictions, etc.) of today's image generation models.
We design a simple yet effective way to build on T2I models with novel and
effective spatial-temporal modules. First, we decompose the full temporal U-Net
and attention tensors and approximate them in space and time. Second, we design
a spatial temporal pipeline to generate high resolution and frame rate videos
with a video decoder, interpolation model and two super resolution models that
can enable various applications besides T2V. In all aspects, spatial and
temporal resolution, faithfulness to text, and quality, Make-A-Video sets the
new state-of-the-art in text-to-video generation, as determined by both
qualitative and quantitative measures.
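The core architectural idea in the abstract is to decompose full spatio-temporal attention into separate spatial and temporal passes. The sketch below illustrates that factorization in a minimal, hypothetical form: single-head attention with identity Q/K/V projections over a `(frames, positions, channels)` tensor. It is not the paper's actual architecture (Make-A-Video uses pseudo-3D convolution and attention layers inside a U-Net), only an illustration of why the factorization is cheap: it costs O(T·S² + S·T²) instead of O((T·S)²) for joint attention over all frames and positions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (batch, seq, dim). Identity Q/K/V projections for brevity;
    # a real model would learn these as linear layers.
    d = x.shape[-1]
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(d)  # (batch, seq, seq)
    return softmax(scores) @ x

def factorized_spacetime_attention(video):
    # video: (T frames, S spatial positions, C channels)
    # 1) Spatial pass: each frame attends over its own S positions
    #    (frames act as the batch dimension).
    x = self_attention(video)
    # 2) Temporal pass: each spatial position attends over the T frames
    #    (positions act as the batch dimension).
    x = x.transpose(1, 0, 2)        # (S, T, C)
    x = self_attention(x)
    return x.transpose(1, 0, 2)     # back to (T, S, C)

video = np.random.randn(4, 16, 8)   # 4 frames, 16 positions, 8 channels
out = factorized_spacetime_attention(video)
print(out.shape)
```

Because the temporal pass only mixes information across frames at a fixed spatial position, the pretrained spatial (image) weights can be kept intact while the new temporal layers are trained on unlabeled video, which is the premise behind training without paired text-video data.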
Related papers
- VideoTetris: Towards Compositional Text-to-Video Generation [45.395598467837374]
VideoTetris is a framework that enables compositional T2V generation.
We show that VideoTetris achieves impressive qualitative and quantitative results in T2V generation.
arXiv Detail & Related papers (2024-06-06T17:25:33Z)
- TALC: Time-Aligned Captions for Multi-Scene Text-to-Video Generation [72.25642183446102]
We introduce Time-Aligned Captions (TALC) framework to generate multi-scene videos.
Specifically, we enhance the text-conditioning mechanism in the T2V architecture to recognize the temporal alignment between the video scenes and scene descriptions.
Our TALC-finetuned model outperforms the baseline methods on multi-scene video-text data by 15.5 points on aggregated score.
arXiv Detail & Related papers (2024-05-07T21:52:39Z)
- Grid Diffusion Models for Text-to-Video Generation [2.531998650341267]
Most existing video generation methods use either a 3D U-Net architecture that considers the temporal dimension or autoregressive generation.
We propose a simple but effective novel grid diffusion for text-to-video generation without temporal dimension in architecture and a large text-video paired dataset.
Our proposed method outperforms the existing methods in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2024-03-30T03:50:43Z)
- Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization [52.63845811751936]
Video pre-training is challenging due to the difficulty of modeling video dynamics.
In this paper, we address such limitations in video pre-training with an efficient video decomposition.
Our framework is both capable of comprehending and generating image and video content, as demonstrated by its performance across 13 multimodal benchmarks.
arXiv Detail & Related papers (2024-02-05T16:30:49Z)
- LAMP: Learn A Motion Pattern for Few-Shot-Based Video Generation [44.220329202024494]
We present a few-shot-based tuning framework, LAMP, which enables a text-to-image diffusion model to Learn A specific Motion Pattern with 8~16 videos on a single GPU.
Specifically, we design a first-frame-conditioned pipeline that uses an off-the-shelf text-to-image model for content generation.
To capture the features of temporal dimension, we expand the pretrained 2D convolution layers of the T2I model to our novel temporal-spatial motion learning layers.
arXiv Detail & Related papers (2023-10-16T19:03:19Z)
- SimDA: Simple Diffusion Adapter for Efficient Video Generation [102.90154301044095]
We propose a Simple Diffusion Adapter (SimDA) that fine-tunes only 24M out of 1.1B parameters of a strong T2I model, adapting it to video generation in a parameter-efficient way.
In addition to T2V generation in the wild, SimDA could also be utilized in one-shot video editing with only 2 minutes tuning.
arXiv Detail & Related papers (2023-08-18T17:58:44Z)
- Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators [70.17041424896507]
Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets.
We propose a new task of zero-shot text-to-video generation using existing text-to-image synthesis methods.
Our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data.
arXiv Detail & Related papers (2023-03-23T17:01:59Z)
- Tune-A-Video: One-Shot Tuning of Image Diffusion Models for Text-to-Video Generation [31.882356164068753]
To reproduce the success of text-to-image (T2I) generation, recent works in text-to-video (T2V) generation employ massive text-video datasets for fine-tuning.
We propose Tune-A-Video, which is capable of producing temporally-coherent videos across various applications.
arXiv Detail & Related papers (2022-12-22T09:43:36Z) - Video Generation from Text Employing Latent Path Construction for
Temporal Modeling [70.06508219998778]
Video generation is one of the most challenging tasks in Machine Learning and Computer Vision fields of study.
In this paper, we tackle the text to video generation problem, which is a conditional form of video generation.
We believe that video generation from natural language sentences will have an important impact on Artificial Intelligence.
arXiv Detail & Related papers (2021-07-29T06:28:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.