Audeo: Audio Generation for a Silent Performance Video
- URL: http://arxiv.org/abs/2006.14348v1
- Date: Tue, 23 Jun 2020 00:58:59 GMT
- Authors: Kun Su, Xiulong Liu, Eli Shlizerman
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel system that takes as input video frames of a musician
playing the piano and generates the music for that video. Generating music
from visual cues is a challenging problem, and it is not clear whether it is an
attainable goal at all. Our main aim in this work is to explore the
plausibility of such a transformation and to identify cues and components able
to carry the association of sounds with visual events. To achieve the
transformation we built a full pipeline named 'Audeo' containing three
components. We first translate the video frames of the keyboard and the
musician's hand movements into a raw mechanical musical symbolic representation,
the Piano-Roll (Roll), which represents the keys pressed at each time step.
We then adapt the Roll to be amenable to audio synthesis by including temporal
correlations; this step turns out to be critical for meaningful audio
generation. As a last step, we use MIDI synthesizers to generate realistic
music. Audeo converts video to audio smoothly and clearly with only a few
setup constraints. We evaluate Audeo on 'in the wild' piano performance videos
and find that the generated music is of reasonable audio quality and can be
successfully recognized with high precision by popular music identification
software.
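The pipeline's middle and final stages can be sketched in miniature. The snippet below is a minimal illustration, not the paper's implementation: it assumes a binary Piano-Roll laid out as frames x keys, a 25 fps video, and the lowest key (A0) mapped to MIDI pitch 21, all of which are assumptions for this sketch. The short-note filter stands in for the paper's temporal-correlation step, whose exact form is not specified in the abstract.

```python
def smooth_roll(roll, min_frames=2):
    """Drop key activations shorter than min_frames consecutive frames.

    A stand-in for Audeo's Roll refinement stage; the flicker threshold
    is an illustrative assumption, not the paper's method.
    """
    n_frames = len(roll)
    n_keys = len(roll[0]) if n_frames else 0
    out = [row[:] for row in roll]
    for k in range(n_keys):
        t = 0
        while t < n_frames:
            if out[t][k]:
                start = t
                while t < n_frames and out[t][k]:
                    t += 1
                if t - start < min_frames:  # too short: likely detection flicker
                    for s in range(start, t):
                        out[s][k] = 0
            else:
                t += 1
    return out


def roll_to_notes(roll, fps=25.0, pitch_offset=21):
    """Convert a binary Piano-Roll (frames x keys) into note events
    (midi_pitch, onset_sec, offset_sec), ready for a MIDI synthesizer.

    fps=25 and pitch_offset=21 (key 0 = A0) are illustrative assumptions.
    """
    notes = []
    n_frames = len(roll)
    n_keys = len(roll[0]) if n_frames else 0
    for k in range(n_keys):
        onset = None
        for t in range(n_frames):
            if roll[t][k] and onset is None:
                onset = t  # key goes down
            elif not roll[t][k] and onset is not None:
                notes.append((k + pitch_offset, onset / fps, t / fps))
                onset = None  # key released
        if onset is not None:  # note still sounding at the final frame
            notes.append((k + pitch_offset, onset / fps, n_frames / fps))
    return notes
```

For example, a two-key roll where the second key is active for only one frame yields a single cleaned-up note after smoothing, which a MIDI synthesizer could then render to audio.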
Related papers
- VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling [68.72384258320743]
We propose VidMuse, a framework for generating music aligned with video inputs.
VidMuse produces high-fidelity music that is both acoustically and semantically aligned with the video.
arXiv Detail & Related papers (2024-06-06T17:58:11Z)
- Video2Music: Suitable Music Generation from Videos using an Affective Multimodal Transformer model [32.801213106782335]
We develop a generative music AI framework, Video2Music, that can match a provided video.
In a thorough experiment, we show that our proposed framework can generate music that matches the video content in terms of emotion.
arXiv Detail & Related papers (2023-11-02T03:33:00Z)
- Quantized GAN for Complex Music Generation from Dance Videos [48.196705493763986]
We present Dance2Music-GAN (D2M-GAN), a novel adversarial multi-modal framework that generates musical samples conditioned on dance videos.
Our proposed framework takes dance video frames and human body motion as input, and learns to generate music samples that plausibly accompany the corresponding input.
arXiv Detail & Related papers (2022-04-01T17:53:39Z)
- Strumming to the Beat: Audio-Conditioned Contrastive Video Textures [112.6140796961121]
We introduce a non-parametric approach for infinite video texture synthesis using a representation learned via contrastive learning.
We take inspiration from Video Textures, which showed that plausible new videos could be generated from a single one by stitching its frames together in a novel yet consistent order.
Our model outperforms baselines on human perceptual scores, can handle a diverse range of input videos, and can combine semantic and audio-visual cues in order to synthesize videos that synchronize well with an audio signal.
arXiv Detail & Related papers (2021-04-06T17:24:57Z)
- Let's Play Music: Audio-driven Performance Video Generation [58.77609661515749]
We propose a new task named Audio-driven Performance Video Generation (APVG).
APVG aims to synthesize the video of a person playing a certain instrument guided by a given music audio clip.
arXiv Detail & Related papers (2020-11-05T03:13:46Z)
- Foley Music: Learning to Generate Music from Videos [115.41099127291216]
Foley Music is a system that can synthesize plausible music for a silent video clip about people playing musical instruments.
We first identify two key intermediate representations for a successful video to music generator: body keypoints from videos and MIDI events from audio recordings.
We present a Graph-Transformer framework that can accurately predict MIDI event sequences in accordance with the body movements.
arXiv Detail & Related papers (2020-07-21T17:59:06Z)
- Generating Visually Aligned Sound from Videos [83.89485254543888]
We focus on the task of generating sound from natural videos.
The sound should be both temporally and content-wise aligned with visual signals.
Some sounds produced outside the camera's view cannot be inferred from video content.
arXiv Detail & Related papers (2020-07-14T07:51:06Z)
- AutoFoley: Artificial Synthesis of Synchronized Sound Tracks for Silent Videos with Deep Learning [5.33024001730262]
We present AutoFoley, a fully-automated deep learning tool that can be used to synthesize a representative audio track for videos.
AutoFoley can be used in applications where there is no corresponding audio file associated with the video, or where critical scenarios need to be identified.
Our experiments show that the synthesized sounds are realistically portrayed with accurate temporal synchronization of the associated visual inputs.
arXiv Detail & Related papers (2020-02-21T09:08:28Z)