Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model
Adaptation
- URL: http://arxiv.org/abs/2309.16429v1
- Date: Thu, 28 Sep 2023 13:26:26 GMT
- Title: Diverse and Aligned Audio-to-Video Generation via Text-to-Video Model
Adaptation
- Authors: Guy Yariv, Itai Gat, Sagie Benaim, Lior Wolf, Idan Schwartz, Yossi Adi
- Abstract summary: We consider the task of generating diverse and realistic videos guided by natural audio samples from a wide variety of semantic classes.
We utilize an existing text-conditioned video generation model and a pre-trained audio encoder model.
We validate our method extensively on three datasets demonstrating significant semantic diversity of audio-video samples.
- Score: 89.96013329530484
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the task of generating diverse and realistic videos guided by
natural audio samples from a wide variety of semantic classes. For this task,
the videos are required to be aligned both globally and temporally with the
input audio: globally, the input audio is semantically associated with the
entire output video, and temporally, each segment of the input audio is
associated with a corresponding segment of that video. We utilize an existing
text-conditioned video generation model and a pre-trained audio encoder model.
The proposed method is based on a lightweight adaptor network, which learns to
map the audio-based representation to the input representation expected by the
text-to-video generation model. As such, it also enables video generation
conditioned on text, audio, and, for the first time as far as we can ascertain,
on both text and audio. We validate our method extensively on three datasets
demonstrating significant semantic diversity of audio-video samples and further
propose a novel evaluation metric (AV-Align) to assess the alignment of
generated videos with input audio samples. AV-Align is based on the detection
and comparison of energy peaks in both modalities. In comparison to recent
state-of-the-art approaches, our method generates videos that are better
aligned with the input sound, with respect to both content and the temporal axis.
We also show that videos produced by our method present higher visual quality
and are more diverse.
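
The abstract characterizes the adaptor only at a high level: a lightweight network that maps pre-trained audio-encoder features to the conditioning representation the frozen text-to-video model expects. Below is a minimal sketch of one such adaptor, assuming the text-to-video model is conditioned on a fixed-length sequence of text-encoder embeddings; the class name, dimensions, and layer counts are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch of a "lightweight adaptor": map features from a pre-trained
# audio encoder into a sequence of pseudo text-token embeddings that a frozen
# text-to-video model can consume in place of (or alongside) text embeddings.
# Dimensions, layer counts, and the learned-query design are illustrative only.
import torch
import torch.nn as nn


class AudioToTextAdaptor(nn.Module):
    def __init__(self, audio_dim=768, text_dim=1024, n_tokens=77, n_layers=2, n_heads=8):
        super().__init__()
        self.proj_in = nn.Linear(audio_dim, text_dim)
        layer = nn.TransformerEncoderLayer(d_model=text_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # One learned query per pseudo text token expected by the T2V conditioning path.
        self.queries = nn.Parameter(torch.randn(n_tokens, text_dim) * 0.02)
        self.proj_out = nn.Linear(text_dim, text_dim)

    def forward(self, audio_feats):
        # audio_feats: (batch, audio_frames, audio_dim) from the frozen audio encoder.
        x = self.proj_in(audio_feats)
        q = self.queries.expand(audio_feats.size(0), -1, -1)
        # Let the learned queries attend jointly over themselves and the audio frames.
        mixed = self.encoder(torch.cat([q, x], dim=1))
        # Keep only the query positions as the pseudo text-token sequence.
        return self.proj_out(mixed[:, : self.queries.size(0)])
```

In this reading, only the adaptor is trained while the audio encoder and the text-to-video backbone stay frozen, which is what would keep the approach lightweight.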
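
The abstract describes AV-Align only as detecting and comparing energy peaks in both modalities. One plausible reading, sketched below, detects peaks in the audio onset-strength envelope and in the mean optical-flow magnitude of the video, then checks how many peaks find a counterpart in the other stream within a small time window using an IoU-style ratio. The peak detectors, the matching window, and the aggregation are illustrative assumptions, not the paper's exact definition.

```python
# Illustrative AV-Align-style score (assumptions noted in the lead-in above).
import cv2
import librosa
import numpy as np
from scipy.signal import find_peaks


def audio_peak_times(wav, sr, hop=512):
    """Times (s) of peaks in the audio onset-strength envelope."""
    env = librosa.onset.onset_strength(y=wav, sr=sr, hop_length=hop)
    idx, _ = find_peaks(env, prominence=env.std())
    return idx * hop / sr


def video_peak_times(frames, fps):
    """Times (s) of peaks in mean optical-flow magnitude between consecutive frames."""
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    mags = []
    for frame in frames[1:]:
        cur = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev, cur, None, 0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=-1).mean())
        prev = cur
    mags = np.asarray(mags)
    idx, _ = find_peaks(mags, prominence=mags.std())
    return (idx + 1) / fps


def av_align_score(audio_t, video_t, window=0.1):
    """IoU-style agreement: an audio peak is matched if a video peak lies within +/- window seconds."""
    if len(audio_t) == 0 or len(video_t) == 0:
        return 0.0
    matched = sum(np.any(np.abs(video_t - t) <= window) for t in audio_t)
    return matched / (len(audio_t) + len(video_t) - matched)
```

Under this sketch, a higher score means the motion peaks of the generated video coincide more often with the energy peaks of the conditioning audio.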
Related papers
- Tell What You Hear From What You See -- Video to Audio Generation Through Text [17.95017332858846]
VATT is a multi-modal generative framework that takes a video and an optional text prompt as input, and generates audio and an optional textual description of the audio.
VATT enables controllable video-to-audio generation through text as well as suggesting text prompts for videos through audio captions.
arXiv Detail & Related papers (2024-11-08T16:29:07Z)
- Auto-ACD: A Large-scale Dataset for Audio-Language Representation Learning [50.28566759231076]
We propose an innovative, automatic approach to establish an audio dataset with high-quality captions.
Specifically, we construct a large-scale, high-quality, audio-language dataset, named Auto-ACD, comprising over 1.5M audio-text pairs.
We employ an LLM to paraphrase a congruent caption for each audio clip, guided by the extracted multi-modality clues.
arXiv Detail & Related papers (2023-09-20T17:59:32Z)
- Align, Adapt and Inject: Sound-guided Unified Image Generation [50.34667929051005]
We propose a unified framework 'Align, Adapt, and Inject' (AAI) for sound-guided image generation, editing, and stylization.
Our method adapts the input sound into a sound token, like an ordinary word, which can be used in a plug-and-play manner with existing Text-to-Image (T2I) models.
Our proposed AAI outperforms other text and sound-guided state-of-the-art methods.
arXiv Detail & Related papers (2023-06-20T12:50:49Z)
- CLIPSonic: Text-to-Audio Synthesis with Unlabeled Videos and Pretrained Language-Vision Models [50.42886595228255]
We propose to learn the desired text-audio correspondence by leveraging the visual modality as a bridge.
We train a conditional diffusion model to generate the audio track of a video, given a video frame encoded by a pretrained contrastive language-image pretraining model.
arXiv Detail & Related papers (2023-06-16T05:42:01Z)
- DiffAVA: Personalized Text-to-Audio Generation with Visual Alignment [30.38594416942543]
We propose a novel and personalized text-to-sound generation approach with visual alignment based on latent diffusion models, namely DiffAVA.
Our DiffAVA leverages a multi-head attention transformer to aggregate temporal information from video features, and a dual multi-modal residual network to fuse temporal visual representations with text embeddings.
Experimental results on the AudioCaps dataset demonstrate that the proposed DiffAVA can achieve competitive performance on visual-aligned text-to-audio generation.
arXiv Detail & Related papers (2023-05-22T10:37:27Z)
- Sounding Video Generator: A Unified Framework for Text-guided Sounding Video Generation [24.403772976932487]
Sounding Video Generator (SVG) is a unified framework for generating realistic videos along with audio signals.
A VQGAN transforms visual frames and audio mel-spectrograms into discrete tokens.
A Transformer-based decoder is then used to model associations between text, visual frames, and audio signals.
arXiv Detail & Related papers (2023-03-29T09:07:31Z)
- End-to-End Video-To-Speech Synthesis using Generative Adversarial Networks [54.43697805589634]
We propose a new end-to-end video-to-speech model based on Generative Adversarial Networks (GANs).
Our model consists of an encoder-decoder architecture that receives raw video as input and generates speech.
We show that this model is able to reconstruct speech with remarkable realism for constrained datasets such as GRID.
arXiv Detail & Related papers (2021-04-27T17:12:30Z)
- Audio-based Near-Duplicate Video Retrieval with Audio Similarity Learning [19.730467023817123]
We propose the Audio Similarity Learning (AuSiL) approach that effectively captures temporal patterns of audio similarity between video pairs.
We train our network following a triplet generation process and optimize the triplet loss function.
The proposed approach achieves very competitive results compared to three state-of-the-art methods.
arXiv Detail & Related papers (2020-10-17T08:12:18Z)
- Unsupervised Audiovisual Synthesis via Exemplar Autoencoders [59.13989658692953]
We present an unsupervised approach that converts the input speech of any individual into audiovisual streams of potentially infinitely many output speakers.
We use Exemplar Autoencoders to learn the voice, stylistic prosody, and visual appearance of a specific target speech exemplar.
arXiv Detail & Related papers (2020-01-13T18:56:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.