Flexible Diffusion Modeling of Long Videos
- URL: http://arxiv.org/abs/2205.11495v1
- Date: Mon, 23 May 2022 17:51:48 GMT
- Title: Flexible Diffusion Modeling of Long Videos
- Authors: William Harvey, Saeid Naderiparizi, Vaden Masrani, Christian Weilbach,
Frank Wood
- Abstract summary: We introduce a generative model that can at test-time sample any arbitrary subset of video frames conditioned on any other subset.
We demonstrate improved video modeling over prior work on a number of datasets and sample temporally coherent videos over 25 minutes in length.
We additionally release a new video modeling dataset and semantically meaningful metrics based on videos generated in the CARLA self-driving car simulator.
- Score: 15.220686350342385
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a framework for video modeling based on denoising diffusion
probabilistic models that produces long-duration video completions in a variety
of realistic environments. We introduce a generative model that can at
test-time sample any arbitrary subset of video frames conditioned on any other
subset and present an architecture adapted for this purpose. Doing so allows us
to efficiently compare and optimize a variety of schedules for the order in
which frames in a long video are sampled and use selective sparse and
long-range conditioning on previously sampled frames. We demonstrate improved
video modeling over prior work on a number of datasets and sample temporally
coherent videos over 25 minutes in length. We additionally release a new video
modeling dataset and semantically meaningful metrics based on videos generated
in the CARLA self-driving car simulator.
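Below is a minimal sketch (Python with NumPy, not the authors' code) of the kind of flexible sampling loop the abstract describes: new frames are generated in chunks by a reverse-diffusion loop, conditioned on a sparse, long-range selection of frames sampled earlier. The denoiser is a placeholder stub, and all names and parameters (diffusion_sample, autoregressive_schedule, chunk sizes, frame shapes) are illustrative assumptions rather than the paper's actual architecture or schedules.

import numpy as np

T_DIFFUSION = 50            # number of reverse-diffusion steps (assumed)
FRAME_SHAPE = (64, 64, 3)   # frame resolution (assumed)

def denoiser(noisy_frames, cond_frames, t):
    # Placeholder for the learned conditional denoising network; in the paper
    # this would be the architecture adapted for arbitrary-subset conditioning.
    return noisy_frames * (1.0 - 1.0 / T_DIFFUSION)

def diffusion_sample(cond_frames, num_new):
    # Reverse-diffusion loop generating `num_new` frames given conditioning frames.
    x = np.random.randn(num_new, *FRAME_SHAPE)
    for t in reversed(range(T_DIFFUSION)):
        x = denoiser(x, cond_frames, t)
    return x

def autoregressive_schedule(total_frames, chunk=8, stride=4, max_cond=4):
    # One possible sampling schedule: generate `stride` new frames at a time,
    # conditioning on a sparse selection of up to `max_cond` earlier frames.
    video = {}  # frame index -> frame array
    for i, f in enumerate(diffusion_sample(np.empty((0, *FRAME_SHAPE)), chunk)):
        video[i] = f  # seed the video with an unconditional chunk
    while len(video) < total_frames:
        next_idx = max(video) + 1
        new_indices = list(range(next_idx, min(next_idx + stride, total_frames)))
        sampled = sorted(video)
        # Sparse, long-range conditioning: evenly spaced previously sampled frames.
        cond_indices = sampled[:: max(1, len(sampled) // max_cond)][-max_cond:]
        cond = np.stack([video[i] for i in cond_indices])
        for i, f in zip(new_indices, diffusion_sample(cond, len(new_indices))):
            video[i] = f
    return np.stack([video[i] for i in sorted(video)])

if __name__ == "__main__":
    vid = autoregressive_schedule(total_frames=32)
    print(vid.shape)  # expected: (32, 64, 64, 3)

Because the conditioning set stays small regardless of how many frames have already been generated, a schedule of this form keeps per-step cost bounded for very long videos; the paper's contribution includes comparing and optimizing over many such schedules, which this placeholder does not attempt.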
Related papers
- Video Latent Flow Matching: Optimal Polynomial Projections for Video Interpolation and Extrapolation [11.77588746719272]
This paper considers an efficient video modeling process called Video Latent Flow Matching (VLFM).
Our method relies on current strong pre-trained image generation models, modeling a certain caption-guided flow of latent patches that can be decoded to time-dependent video frames.
We conduct experiments on several text-to-video datasets to showcase the effectiveness of our method.
arXiv Detail & Related papers (2025-02-01T17:40:11Z) - Multi-subject Open-set Personalization in Video Generation [110.02124633005516]
We present Video Alchemist, a video model with built-in multi-subject, open-set personalization capabilities.
Our model is built on a new Diffusion Transformer module that fuses each conditional reference image and its corresponding subject-level text prompt.
Our method significantly outperforms existing personalization methods in both quantitative and qualitative evaluations.
arXiv Detail & Related papers (2025-01-10T18:59:54Z) - Autoregressive Video Generation without Vector Quantization [90.87907377618747]
We reformulate the video generation problem as a non-quantized autoregressive modeling of temporal frame-by-frame prediction.
With the proposed approach, we train a novel video autoregressive model without vector quantization, termed NOVA.
Our results demonstrate that NOVA surpasses prior autoregressive video models in data efficiency, inference speed, visual fidelity, and video fluency, even with a much smaller model capacity.
arXiv Detail & Related papers (2024-12-18T18:59:53Z) - Latent-Reframe: Enabling Camera Control for Video Diffusion Model without Training [51.851390459940646]
We introduce Latent-Reframe, which enables camera control in a pre-trained video diffusion model without fine-tuning.
Latent-Reframe operates during the sampling stage, maintaining efficiency while preserving the original model distribution.
Our approach reframes the latent code of video frames to align with the input camera trajectory through time-aware point clouds.
arXiv Detail & Related papers (2024-12-08T18:59:54Z) - xGen-VideoSyn-1: High-fidelity Text-to-Video Synthesis with Compressed Representations [120.52120919834988]
xGen-VideoSyn-1 is a text-to-video (T2V) generation model capable of producing realistic scenes from textual descriptions.
VidVAE compresses video data both spatially and temporally, significantly reducing the length of visual tokens.
The DiT model incorporates spatial and temporal self-attention layers, enabling robust generalization across different timeframes and aspect ratios.
arXiv Detail & Related papers (2024-08-22T17:55:22Z) - ZeroSmooth: Training-free Diffuser Adaptation for High Frame Rate Video Generation [81.90265212988844]
We propose a training-free method for generative video models that works in a plug-and-play manner.
We transform a video model into a self-cascaded video diffusion model with the designed hidden state correction modules.
Our training-free method is even comparable to trained models supported by huge compute resources and large-scale datasets.
arXiv Detail & Related papers (2024-06-03T00:31:13Z) - Video Interpolation with Diffusion Models [54.06746595879689]
We present VIDIM, a generative model for video, which creates short videos given a start and end frame.
VIDIM uses cascaded diffusion models to first generate the target video at low resolution, and then generate the high-resolution video conditioned on the low-resolution generated video.
arXiv Detail & Related papers (2024-04-01T15:59:32Z) - SIAM: A Simple Alternating Mixer for Video Prediction [42.03590872477933]
Video prediction, which forecasts future frames from previous ones, has broad applications such as autonomous driving and weather forecasting.
We explicitly model these features in a unified encoder-decoder framework and propose a novel Simple Alternating Mixer (SIAM).
The core of SIAM lies in the design of alternating mixing (Da) blocks, which can model spatial, temporal, and spatiotemporal features.
arXiv Detail & Related papers (2023-11-20T11:28:18Z) - Learning Fine-Grained Visual Understanding for Video Question Answering
via Decoupling Spatial-Temporal Modeling [28.530765643908083]
We decouple spatial-temporal modeling and integrate an image- and a video-language model to learn fine-grained visual understanding.
We propose a novel pre-training objective, Temporal Referring Modeling, which requires the model to identify temporal positions of events in video sequences.
Our model outperforms previous work pre-trained on orders of magnitude larger datasets.
arXiv Detail & Related papers (2022-10-08T07:03:31Z) - Leveraging Local Temporal Information for Multimodal Scene
Classification [9.548744259567837]
Video scene classification models should capture the spatial (pixel-wise) and temporal (frame-wise) characteristics of a video effectively.
Transformer models with self-attention, which are designed to get contextualized representations for individual tokens given a sequence of tokens, are becoming increasingly popular in many computer vision tasks.
We propose a novel self-attention block that leverages both local and global temporal relationships between the video frames to obtain better contextualized representations for the individual frames.
arXiv Detail & Related papers (2021-10-26T19:58:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.