An Initial Exploration: Learning to Generate Realistic Audio for Silent
Video
- URL: http://arxiv.org/abs/2308.12408v1
- Date: Wed, 23 Aug 2023 20:08:56 GMT
- Title: An Initial Exploration: Learning to Generate Realistic Audio for Silent
Video
- Authors: Matthew Martel, Jackson Wagner
- Abstract summary: We develop a framework that observes video in its natural sequence and generates realistic audio to accompany it.
Notably, we have reason to believe this is achievable due to advancements in realistic audio generation techniques conditioned on other inputs.
We find that the transformer-based architecture yields the most promising results, matching low frequencies to visual patterns effectively.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Generating realistic audio effects for movies and other media is a
challenging task that is accomplished today primarily through physical
techniques known as Foley art. Foley artists create sounds with common objects
(e.g., boxing gloves, broken glass) in time with video as it is playing to
generate captivating audio tracks. In this work, we aim to develop a
deep-learning-based framework that does much the same - observes video in its
natural sequence and generates realistic audio to accompany it. Notably, we
have reason to believe this is achievable due to advancements in realistic
audio generation techniques conditioned on other inputs (e.g., WaveNet
conditioned on text). We explore several different model architectures to
accomplish this task that process both previously-generated audio and video
context. These include a deep-fusion CNN, a dilated WaveNet CNN with visual
context, and transformer-based architectures. We find that the
transformer-based architecture yields the most promising results, matching
low frequencies to visual patterns effectively, but failing to generate more
nuanced waveforms.
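To make the described setup concrete, below is a minimal, hypothetical PyTorch sketch (not the authors' released code) of a transformer that autoregressively predicts quantized audio tokens while cross-attending to per-frame video features. The 256-bin audio quantization, the 512-dimensional frame features, and all layer sizes are assumptions chosen for illustration.

```python
# Hypothetical illustration only -- not the authors' released code.
# A small transformer decoder that predicts the next quantized audio token
# from previously generated audio tokens plus cross-attention to video features.
import torch
import torch.nn as nn


class VideoConditionedAudioTransformer(nn.Module):
    def __init__(self, n_audio_bins=256, video_dim=512, d_model=256,
                 n_heads=4, n_layers=4):
        super().__init__()
        # Audio is assumed to be mu-law quantized into n_audio_bins tokens.
        self.audio_embed = nn.Embedding(n_audio_bins, d_model)
        # Per-frame visual features (e.g., from a pretrained CNN) are projected
        # into the model dimension and used as cross-attention memory.
        self.video_proj = nn.Linear(video_dim, d_model)
        layer = nn.TransformerDecoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_audio_bins)  # next-token logits

    def forward(self, audio_tokens, video_feats):
        # audio_tokens: (B, T_audio) integer tokens of previously generated audio
        # video_feats:  (B, T_video, video_dim) visual context
        x = self.audio_embed(audio_tokens)
        memory = self.video_proj(video_feats)
        T = audio_tokens.size(1)
        # Causal mask so each position only attends to earlier audio tokens.
        causal = torch.triu(torch.full((T, T), float("-inf"),
                                       device=x.device), diagonal=1)
        h = self.decoder(tgt=x, memory=memory, tgt_mask=causal)
        return self.head(h)  # (B, T_audio, n_audio_bins)


if __name__ == "__main__":
    model = VideoConditionedAudioTransformer()
    audio = torch.randint(0, 256, (2, 128))   # dummy audio token history
    video = torch.randn(2, 16, 512)           # dummy per-frame video features
    print(model(audio, video).shape)          # torch.Size([2, 128, 256])
```

At generation time, one would sample from the softmax over these logits and feed the new token back in, in the usual autoregressive fashion.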
Related papers
- Read, Watch and Scream! Sound Generation from Text and Video [23.990569918960315]
We propose a novel video-and-text-to-sound generation method called ReWaS.
Our method estimates the structural information of audio from the video while receiving key content cues from a user prompt.
By separating the generative components of audio, it becomes a more flexible system that allows users to freely adjust the energy, surrounding environment, and primary sound source according to their preferences.
arXiv Detail & Related papers (2024-07-08T01:59:17Z) - Action2Sound: Ambient-Aware Generation of Action Sounds from Egocentric Videos [87.32349247938136]
Existing approaches implicitly assume total correspondence between the video and audio during training.
We propose a novel ambient-aware audio generation model, AV-LDM.
Our approach is the first to focus video-to-audio generation faithfully on the observed visual content.
arXiv Detail & Related papers (2024-06-13T16:10:19Z) - Seeing and Hearing: Open-domain Visual-Audio Generation with Diffusion
Latent Aligners [69.70590867769408]
Video and audio content creation serves as the core technique for the movie industry and professional users.
Existing diffusion-based methods tackle video and audio generation separately, which hinders the transfer of these techniques from academia to industry.
In this work, we aim to fill this gap with a carefully designed optimization-based framework for cross-visual-audio and joint-visual-audio generation.
arXiv Detail & Related papers (2024-02-27T17:57:04Z) - Visual Acoustic Matching [92.91522122739845]
We introduce the visual acoustic matching task, in which an audio clip is transformed to sound like it was recorded in a target environment.
Given an image of the target environment and a waveform for the source audio, the goal is to re-synthesize the audio to match the target room acoustics as suggested by its visible geometry and materials.
arXiv Detail & Related papers (2022-02-14T17:05:22Z) - Geometry-Aware Multi-Task Learning for Binaural Audio Generation from
Video [94.42811508809994]
We propose an audio spatialization method that draws on visual information in videos to convert their monaural (single-channel) audio to binaural audio.
Whereas existing approaches leverage visual features extracted directly from video frames, our approach explicitly disentangles the geometric cues present in the visual stream to guide the learning process.
arXiv Detail & Related papers (2021-11-21T19:26:45Z) - Generating Visually Aligned Sound from Videos [83.89485254543888]
We focus on the task of generating sound from natural videos.
The sound should be both temporally and content-wise aligned with visual signals.
Some sounds generated outside of the camera's view cannot be inferred from the video content.
arXiv Detail & Related papers (2020-07-14T07:51:06Z) - AutoFoley: Artificial Synthesis of Synchronized Sound Tracks for Silent
Videos with Deep Learning [5.33024001730262]
We present AutoFoley, a fully-automated deep learning tool that can be used to synthesize a representative audio track for videos.
AutoFoley can be used in applications where there is no corresponding audio file associated with the video, or where there is a need to identify critical scenarios.
Our experiments show that the synthesized sounds are realistic and accurately synchronized in time with the associated visual inputs.
arXiv Detail & Related papers (2020-02-21T09:08:28Z) - Everybody's Talkin': Let Me Talk as You Want [134.65914135774605]
We present a method to edit target portrait footage by taking a sequence of audio as input to synthesize a photo-realistic video.
It does not assume a person-specific rendering network, yet it is capable of translating arbitrary source audio into arbitrary video output.
arXiv Detail & Related papers (2020-01-15T09:54:23Z)