Strumming to the Beat: Audio-Conditioned Contrastive Video Textures
- URL: http://arxiv.org/abs/2104.02687v1
- Date: Tue, 6 Apr 2021 17:24:57 GMT
- Title: Strumming to the Beat: Audio-Conditioned Contrastive Video Textures
- Authors: Medhini Narasimhan, Shiry Ginosar, Andrew Owens, Alexei A. Efros,
Trevor Darrell
- Abstract summary: We introduce a non-parametric approach for infinite video texture synthesis using a representation learned via contrastive learning.
We take inspiration from Video Textures, which showed that plausible new videos could be generated from a single one by stitching its frames together in a novel yet consistent order.
Our model outperforms baselines on human perceptual scores, can handle a diverse range of input videos, and can combine semantic and audio-visual cues in order to synthesize videos that synchronize well with an audio signal.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a non-parametric approach for infinite video texture synthesis
using a representation learned via contrastive learning. We take inspiration
from Video Textures, which showed that plausible new videos could be generated
from a single one by stitching its frames together in a novel yet consistent
order. This classic work, however, was constrained by its use of hand-designed
distance metrics, limiting its use to simple, repetitive videos. We draw on
recent techniques from self-supervised learning to learn this distance metric,
allowing us to compare frames in a manner that scales to more challenging
dynamics, and to condition on other data, such as audio. We learn
representations for video frames and frame-to-frame transition probabilities by
fitting a video-specific model trained using contrastive learning. To
synthesize a texture, we randomly sample frames with high transition
probabilities to generate diverse temporally smooth videos with novel sequences
and transitions. The model naturally extends to an audio-conditioned setting
without requiring any finetuning. Our model outperforms baselines on human
perceptual scores, can handle a diverse range of input videos, and can combine
semantic and audio-visual cues in order to synthesize videos that synchronize
well with an audio signal.
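The abstract leaves the sampling procedure implicit. Below is a minimal, illustrative sketch (not the authors' released implementation) of how per-frame embeddings from a contrastively trained encoder could be turned into frame-to-frame transition probabilities and then sampled to produce a texture. The function names, softmax temperature, and top-k truncation are assumptions made for illustration only.

```python
import numpy as np

def transition_matrix(frame_embeddings, temperature=0.1):
    """Build frame-to-frame transition probabilities from per-frame embeddings.

    frame_embeddings: (N, D) array, e.g. output of a video-specific contrastive
    encoder (hypothetical; the paper's actual encoder and scoring may differ).
    A jump i -> j is scored by how similar frame j is to the frame that actually
    follows i in the source video (i + 1), so the cut reads as a plausible
    continuation.
    """
    z = frame_embeddings / np.linalg.norm(frame_embeddings, axis=1, keepdims=True)
    sim = z[1:] @ z.T                               # sim[i, j] ~ similarity(frame i+1, frame j)
    logits = sim / temperature
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    probs = np.exp(logits)
    return probs / probs.sum(axis=1, keepdims=True) # each row sums to 1

def synthesize(frame_embeddings, length=300, top_k=5, seed=0):
    """Sample a long sequence of frame indices by repeatedly jumping to one of
    the top-k most probable next frames (assumed truncation, for illustration)."""
    rng = np.random.default_rng(seed)
    P = transition_matrix(frame_embeddings)
    idx = [0]
    while len(idx) < length:
        i = min(idx[-1], len(P) - 1)                # last frame has no successor row
        row = P[i]
        cand = np.argsort(row)[-top_k:]             # keep only high-probability transitions
        p = row[cand] / row[cand].sum()
        idx.append(int(rng.choice(cand, p=p)))
    return idx
```

For the audio-conditioned setting mentioned above, one plausible extension of this sketch is to additionally bias the transition scores toward candidate frames whose audio features match the target audio track; the exact conditioning mechanism used in the paper is not reproduced here.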
Related papers
- SmoothVideo: Smooth Video Synthesis with Noise Constraints on Diffusion Models for One-shot Video Tuning (arXiv, 2023-11-29)
One-shot video tuning methods produce videos marred by incoherence and inconsistency.
This paper introduces a simple yet effective noise constraint across video frames.
By applying the loss to existing one-shot video tuning methods, we significantly improve the overall consistency and smoothness of the generated videos.
- CLIPSonic: Text-to-Audio Synthesis with Unlabeled Videos and Pretrained Language-Vision Models (arXiv, 2023-06-16)
We propose to learn the desired text-audio correspondence by leveraging the visual modality as a bridge.
We train a conditional diffusion model to generate the audio track of a video, given a video frame encoded by a pretrained contrastive language-image pretraining model.
- ControlVideo: Training-free Controllable Text-to-Video Generation (arXiv, 2023-05-22)
ControlVideo is a framework to enable natural and efficient text-to-video generation.
It generates both short and long videos within several minutes using a single NVIDIA 2080Ti.
- Text2Video-Zero: Text-to-Image Diffusion Models are Zero-Shot Video Generators (arXiv, 2023-03-23)
Recent text-to-video generation approaches rely on computationally heavy training and require large-scale video datasets.
We propose a new task of zero-shot text-to-video generation using existing text-to-image synthesis methods.
Our method performs comparably or sometimes better than recent approaches, despite not being trained on additional video data.
- Towards Smooth Video Composition (arXiv, 2022-12-14)
Video generation requires consistent and persistent frames with dynamic content over time.
This work investigates modeling the temporal relations for composing videos of arbitrary length, from a few frames to even infinite, using generative adversarial networks (GANs).
We show that the alias-free operation for single image generation, together with adequately pre-learned knowledge, brings a smooth frame transition without compromising the per-frame quality.
- Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning (arXiv, 2022-03-04)
This work presents a multimodal video generation framework that benefits from text and images provided jointly or separately.
We propose a new video token trained with self-learning and an improved mask-prediction algorithm for sampling video tokens.
Our framework can incorporate various visual modalities, such as segmentation masks, drawings, and partially occluded images.
- Sound2Sight: Generating Visual Dynamics from Sound and Context (arXiv, 2020-07-23)
We present Sound2Sight, a deep variational framework trained to learn a per-frame prior conditioned on a joint embedding of audio and past frames.
To improve the quality and coherence of the generated frames, we propose a multimodal discriminator.
Our experiments demonstrate that Sound2Sight significantly outperforms the state of the art in generated video quality.
- Non-Adversarial Video Synthesis with Learned Priors (arXiv, 2020-03-21)
We focus on the problem of generating videos from latent noise vectors, without any reference input frames.
We develop a novel approach that jointly optimizes the input latent space, the weights of a recurrent neural network, and a generator through non-adversarial learning.
Our approach generates superior quality videos compared to the existing state-of-the-art methods.