SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction
- URL: http://arxiv.org/abs/2310.20700v2
- Date: Mon, 6 Nov 2023 11:25:50 GMT
- Title: SEINE: Short-to-Long Video Diffusion Model for Generative Transition and Prediction
- Authors: Xinyuan Chen, Yaohui Wang, Lingjun Zhang, Shaobin Zhuang, Xin Ma,
Jiashuo Yu, Yali Wang, Dahua Lin, Yu Qiao, Ziwei Liu
- Abstract summary: This paper presents a short-to-long video diffusion model, SEINE, that focuses on generative transition and prediction.
We propose a random-mask video diffusion model to automatically generate transitions based on textual descriptions.
Our model generates transition videos that ensure coherence and visual quality.
- Score: 93.26613503521664
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, video generation has achieved substantial progress with realistic
results. Nevertheless, existing AI-generated videos are usually very short
clips ("shot-level") depicting a single scene. To deliver a coherent long video
("story-level"), it is desirable to have creative transition and prediction
effects across different clips. This paper presents a short-to-long video
diffusion model, SEINE, that focuses on generative transition and prediction.
The goal is to generate high-quality long videos with smooth and creative
transitions between scenes and varying lengths of shot-level videos.
Specifically, we propose a random-mask video diffusion model to automatically
generate transitions based on textual descriptions. By providing the images of
different scenes as inputs, combined with text-based control, our model
generates transition videos that ensure coherence and visual quality.
Furthermore, the model can be readily extended to various tasks such as
image-to-video animation and autoregressive video prediction. To conduct a
comprehensive evaluation of this new generative task, we propose three
assessing criteria for smooth and creative transition: temporal consistency,
semantic similarity, and video-text semantic alignment. Extensive experiments
validate the effectiveness of our approach over existing methods for generative
transition and prediction, enabling the creation of story-level long videos.
Project page: https://vchitect.github.io/SEINE-project/ .
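
As a concrete illustration of the masked-conditioning idea described in the abstract, the sketch below fixes endpoint frames from two scenes and denoises only the intermediate frames. This is a minimal sketch, not SEINE's implementation: the `DummyDenoiser`, the toy update rule, and all tensor shapes are assumptions standing in for the real text-conditioned video UNet and sampler.

```python
# Minimal sketch of masked-frame conditioning for transition generation,
# in the spirit of a random-mask video diffusion model. The denoiser below
# is a stand-in; SEINE's actual architecture, noise schedule, and text
# conditioning differ.
import torch
import torch.nn as nn

T, C, H, W = 16, 4, 32, 32          # frames, latent channels, spatial size

class DummyDenoiser(nn.Module):
    """Placeholder for a text-conditioned video UNet (illustrative only)."""
    def __init__(self):
        super().__init__()
        # input: noisy latents + masked conditioning frames + binary mask
        self.net = nn.Conv3d(C + C + 1, C, kernel_size=3, padding=1)

    def forward(self, x, t, text_emb):
        return self.net(x)           # predicts noise for the masked frames

def make_transition_mask(num_frames):
    """1 = frame is given (kept), 0 = frame must be generated."""
    mask = torch.zeros(1, 1, num_frames, 1, 1)
    mask[:, :, 0] = 1.0              # last frame of scene A
    mask[:, :, -1] = 1.0             # first frame of scene B
    return mask

@torch.no_grad()
def sample_transition(denoiser, scene_a, scene_b, text_emb, steps=50):
    mask = make_transition_mask(T)
    cond = torch.zeros(1, C, T, H, W)
    cond[:, :, 0] = scene_a          # endpoint latents are the only image input
    cond[:, :, -1] = scene_b
    x = torch.randn(1, C, T, H, W)   # start from pure noise
    for i in reversed(range(steps)):
        t = torch.full((1,), i)
        inp = torch.cat([x, cond * mask, mask.expand(-1, 1, -1, H, W)], dim=1)
        eps = denoiser(inp, t, text_emb)
        x = x - eps / steps          # toy update; a real DDPM/DDIM sampler differs
        x = mask * cond + (1 - mask) * x   # keep the given endpoint frames fixed
    return x

# Usage: latents for the two endpoint frames plus a text embedding.
video = sample_transition(DummyDenoiser(),
                          torch.randn(1, C, H, W), torch.randn(1, C, H, W),
                          text_emb=torch.randn(1, 77, 768))
print(video.shape)  # torch.Size([1, 4, 16, 32, 32])
```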
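
The three evaluation criteria named in the abstract (temporal consistency, semantic similarity, video-text alignment) can be approximated with off-the-shelf image/text embeddings. The functions below are one plausible CLIP-style instantiation, not the paper's exact evaluation protocol; the embedding dimensionality and the aggregation choices are assumptions.

```python
# Sketch of the three transition-quality criteria, computed over generic
# per-frame embeddings (e.g., from a CLIP image encoder) and a text embedding.
import torch
import torch.nn.functional as F

def temporal_consistency(frame_emb):        # (T, D) per-frame embeddings
    """Mean cosine similarity between consecutive frames."""
    return F.cosine_similarity(frame_emb[:-1], frame_emb[1:], dim=-1).mean()

def semantic_similarity(frame_emb, scene_a_emb, scene_b_emb):
    """How closely generated frames track the two given scene images."""
    sim_a = F.cosine_similarity(frame_emb, scene_a_emb[None], dim=-1)
    sim_b = F.cosine_similarity(frame_emb, scene_b_emb[None], dim=-1)
    return torch.maximum(sim_a, sim_b).mean()

def video_text_alignment(frame_emb, text_emb):
    """Mean frame-text cosine similarity (CLIP-score style)."""
    return F.cosine_similarity(frame_emb, text_emb[None], dim=-1).mean()

# Usage with dummy 512-d embeddings for a 16-frame transition.
T, D = 16, 512
frames = F.normalize(torch.randn(T, D), dim=-1)
img_a, img_b = F.normalize(torch.randn(2, D), dim=-1)
text = F.normalize(torch.randn(D), dim=-1)
print(temporal_consistency(frames).item(),
      semantic_similarity(frames, img_a, img_b).item(),
      video_text_alignment(frames, text).item())
```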
Related papers
- MovieDreamer: Hierarchical Generation for Coherent Long Visual Sequence [62.72540590546812]
MovieDreamer is a novel hierarchical framework that integrates the strengths of autoregressive models with diffusion-based rendering.
We present experiments across various movie genres, demonstrating that our approach achieves superior visual and narrative quality.
arXiv Detail & Related papers (2024-07-23T17:17:05Z)
- Anchored Diffusion for Video Face Reenactment [17.343307538702238]
We introduce Anchored Diffusion, a novel method for synthesizing relatively long and seamless videos.
We train our model on video sequences with random non-uniform temporal spacing and incorporate temporal information via external guidance.
During inference, we leverage the transformer architecture to modify the diffusion process, generating a batch of non-uniform sequences anchored to a common frame.
arXiv Detail & Related papers (2024-07-21T13:14:17Z)
- StoryDiffusion: Consistent Self-Attention for Long-Range Image and Video Generation [117.13475564834458]
We propose a new way of self-attention calculation, termed Consistent Self-Attention.
To extend our method to long-range video generation, we introduce a novel semantic space temporal motion prediction module.
By merging these two novel components, our framework, referred to as StoryDiffusion, can describe a text-based story with consistent images or videos.
arXiv Detail & Related papers (2024-05-02T16:25:16Z)
- Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization [52.63845811751936]
Video pre-training is challenging due to the difficulty of modeling video dynamics.
In this paper, we address such limitations in video pre-training with an efficient video decomposition.
Our framework is both capable of comprehending and generating image and video content, as demonstrated by its performance across 13 multimodal benchmarks.
arXiv Detail & Related papers (2024-02-05T16:30:49Z)
- MEVG: Multi-event Video Generation with Text-to-Video Models [18.06640097064693]
We introduce a novel diffusion-based video generation method that generates a video depicting multiple events from multiple individual sentences provided by the user.
Our method does not require a large-scale video dataset, since it uses a pre-trained text-to-video generative model without fine-tuning.
Our proposed method is superior to other video-generative models in terms of temporal coherency of content and semantics.
arXiv Detail & Related papers (2023-12-07T06:53:25Z)
- VMC: Video Motion Customization using Temporal Attention Adaption for Text-to-Video Diffusion Models [58.93124686141781]
Video Motion Customization (VMC) is a novel one-shot tuning approach crafted to adapt temporal attention layers within video diffusion models.
Our approach introduces a novel motion distillation objective using residual vectors between consecutive frames as a motion reference.
We validate our method against state-of-the-art video generative models across diverse real-world motions and contexts.
arXiv Detail & Related papers (2023-12-01T06:50:11Z)
- Video Generation Beyond a Single Clip [76.5306434379088]
Video generation models can only generate video clips that are relatively short compared with the length of real videos.
To generate long videos covering diverse content and multiple events, we propose to use additional guidance to control the video generation process.
The proposed approach is complementary to existing efforts on video generation, which focus on generating realistic video within a fixed time window.
arXiv Detail & Related papers (2023-04-15T06:17:30Z)
- Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning [36.85533835408882]
This work presents a multimodal video generation framework that benefits from text and images provided jointly or separately.
We propose a new video token trained with self-learning and an improved mask-prediction algorithm for sampling video tokens.
Our framework can incorporate various visual modalities, such as segmentation masks, drawings, and partially occluded images.
arXiv Detail & Related papers (2022-03-04T21:09:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.