Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator
- URL: http://arxiv.org/abs/2309.14494v1
- Date: Mon, 25 Sep 2023 19:42:16 GMT
- Title: Free-Bloom: Zero-Shot Text-to-Video Generator with LLM Director and LDM Animator
- Authors: Hanzhuo Huang, Yufan Feng, Cheng Shi, Lan Xu, Jingyi Yu, Sibei Yang
- Abstract summary: This study focuses on zero-shot text-to-video generation with data and cost efficiency in mind.
We propose a novel Free-Bloom pipeline that harnesses large language models (LLMs) as the director to generate a semantically coherent prompt sequence.
We also propose a series of modifications for adapting LDMs in the reverse process, including joint noise sampling, step-aware attention shift, and dual-path interpolation.
- Score: 59.589919015669274
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Text-to-video is a rapidly growing research area that aims to generate a
sequence of frames with semantic, identity, and temporal coherence that accurately
aligns with the input text prompt. This study focuses on zero-shot text-to-video
generation with data and cost efficiency in mind. To generate a semantically
coherent video that exhibits a rich portrayal of temporal semantics, such as the
whole process of a flower blooming rather than a set of "moving images", we
propose a novel Free-Bloom pipeline that harnesses large language models (LLMs)
as the director to generate a semantically coherent prompt sequence, while
employing pre-trained latent diffusion models (LDMs) as the animator to generate
high-fidelity frames. Furthermore, to ensure temporal and identity coherence
while maintaining semantic coherence, we propose a series of modifications for
adapting LDMs in the reverse process, including joint noise sampling, step-aware
attention shift, and dual-path interpolation. Without any video data or training
requirements, Free-Bloom generates vivid and high-quality videos, and is
impressive in generating complex scenes with semantically meaningful frame
sequences. In addition, Free-Bloom is naturally compatible with LDM-based
extensions.
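
To make the two-stage idea above concrete, here is a minimal Python sketch, not the authors' implementation: an LLM "director" is asked for one prompt per frame, and the initial latent noise for the LDM "animator" is correlated across frames. The helper names (ask_llm, direct_prompt_sequence, joint_noise) and the mixing weight alpha are illustrative assumptions; the paper's exact joint noise sampling, step-aware attention shift, and dual-path interpolation are not reproduced here.

```python
# Minimal sketch of an LLM-director + LDM-animator pipeline (illustrative only).
import torch


def ask_llm(instruction: str) -> str:
    """Placeholder for a call to any chat-style large language model (the 'director')."""
    raise NotImplementedError("plug in your own LLM client here")


def direct_prompt_sequence(event: str, num_frames: int) -> list[str]:
    """Ask the LLM to expand one event into a chronologically ordered prompt per frame."""
    instruction = (
        f"Describe the event '{event}' as {num_frames} consecutive stages, "
        "one short visual description per line, in chronological order."
    )
    reply = ask_llm(instruction)
    prompts = [line.strip() for line in reply.splitlines() if line.strip()]
    return prompts[:num_frames]


def joint_noise(num_frames: int, latent_shape: tuple, alpha: float = 0.8) -> torch.Tensor:
    """Correlate initial latent noise across frames: mix a shared video-level
    component with an independent per-frame component (alpha is an assumed weight)."""
    shared = torch.randn(1, *latent_shape)               # common across all frames
    per_frame = torch.randn(num_frames, *latent_shape)   # frame-specific variation
    return alpha ** 0.5 * shared + (1 - alpha) ** 0.5 * per_frame


# Usage sketch: frame prompts from the director, correlated start noise for the animator.
# prompts = direct_prompt_sequence("a flower blooming", num_frames=8)
# z_T = joint_noise(num_frames=8, latent_shape=(4, 64, 64))
# frames = [ldm_sample(z, p) for z, p in zip(z_T, prompts)]  # ldm_sample: your LDM's reverse process
```

Mixing the two noise components with square-root weights keeps each frame's noise at unit variance, so a pre-trained LDM's noise schedule still applies to every frame while neighboring frames start from similar latents.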
Related papers
- FancyVideo: Towards Dynamic and Consistent Video Generation via Cross-frame Textual Guidance [47.88160253507823]
We introduce FancyVideo, an innovative video generator that improves the existing text-control mechanism.
Its Cross-frame Textual Guidance Module (CTGM) incorporates a Temporal Information Injector (TII), Temporal Affinity Refiner (TAR), and Temporal Feature Booster (TFB) at the beginning, middle, and end of cross-attention, respectively.
arXiv Detail & Related papers (2024-08-15T14:47:44Z) - Video-LaVIT: Unified Video-Language Pre-training with Decoupled Visual-Motional Tokenization [52.63845811751936]
Video pre-training is challenging due to the difficulty of modeling video dynamics.
In this paper, we address such limitations in video pre-training with an efficient video decomposition.
Our framework is both capable of comprehending and generating image and video content, as demonstrated by its performance across 13 multimodal benchmarks.
arXiv Detail & Related papers (2024-02-05T16:30:49Z) - FlowZero: Zero-Shot Text-to-Video Synthesis with LLM-Driven Dynamic
Scene Syntax [72.89879499617858]
FlowZero is a framework that combines Large Language Models (LLMs) with image diffusion models to generate temporally coherent videos.
FlowZero achieves improvement in zero-shot video synthesis, generating coherent videos with vivid motion.
arXiv Detail & Related papers (2023-11-27T13:39:44Z) - Reuse and Diffuse: Iterative Denoising for Text-to-Video Generation [92.55296042611886]
We propose a framework called "Reuse and Diffuse", dubbed VidRD, to produce more frames following the frames already generated by an LDM.
We also propose a set of strategies for composing video-text data that involve diverse content from multiple existing datasets.
arXiv Detail & Related papers (2023-09-07T08:12:58Z) - Rerender A Video: Zero-Shot Text-Guided Video-to-Video Translation [93.18163456287164]
This paper proposes a novel text-guided video-to-video translation framework to adapt image models to videos.
Our framework achieves global style and local texture temporal consistency at a low cost.
arXiv Detail & Related papers (2023-06-13T17:52:23Z) - DirecT2V: Large Language Models are Frame-Level Directors for Zero-Shot
Text-to-Video Generation [37.25815760042241]
This paper introduces a new framework, dubbed DirecT2V, for text-to-video (T2V) generation.
We equip a diffusion model with a novel value mapping method and dual-softmax filtering, which do not require any additional training.
The experimental results validate the effectiveness of our framework in producing visually coherent and storyful videos.
arXiv Detail & Related papers (2023-05-23T17:57:09Z) - Towards Smooth Video Composition [59.134911550142455]
Video generation requires consistent and persistent frames with dynamic content over time.
This work investigates modeling the temporal relations for composing videos of arbitrary length, from a few frames to even infinitely many, using generative adversarial networks (GANs).
We show that the alias-free operation for single image generation, together with adequately pre-learned knowledge, brings a smooth frame transition without compromising the per-frame quality.
arXiv Detail & Related papers (2022-12-14T18:54:13Z)