Video2Music: Suitable Music Generation from Videos using an Affective
Multimodal Transformer model
- URL: http://arxiv.org/abs/2311.00968v2
- Date: Mon, 4 Mar 2024 07:54:31 GMT
- Authors: Jaeyong Kang, Soujanya Poria, Dorien Herremans
- Abstract summary: We develop a generative music AI framework, Video2Music, that can generate music matching a provided video.
In a thorough experiment, we show that our proposed framework can generate music that matches the video content in terms of emotion.
- Score: 32.801213106782335
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Numerous studies in the field of music generation have demonstrated
impressive performance, yet virtually no models are able to directly generate
music to match accompanying videos. In this work, we develop a generative music
AI framework, Video2Music, that can generate music matching a provided video. We first curated a
unique collection of music videos. Then, we analysed the music videos to obtain
semantic, scene offset, motion, and emotion features. These distinct features
are then employed as guiding input to our music generation model. We transcribe
the audio files into MIDI and chords, and extract features such as note density
and loudness. This results in a rich multimodal dataset, called MuVi-Sync, on
which we train a novel Affective Multimodal Transformer (AMT) model to generate
music given a video. This model includes a novel mechanism to enforce affective
similarity between video and music. Finally, post-processing is performed using
a biGRU-based regression model that estimates note density and loudness from
the video features. This ensures a dynamic rendering of the generated chords
with varying rhythm and volume. In a thorough experiment, we show that our
proposed framework can generate music that matches the video content in terms
of emotion. The musical quality, along with the quality of music-video matching,
is confirmed in a user study. The proposed AMT model, along with the new
MuVi-Sync dataset, presents a promising step for the new task of music
generation for videos.
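The dynamic-rendering post-processing described above can be sketched as follows. This is a minimal illustration only: in the paper, note density and loudness are predicted by a biGRU regressor from video features, whereas here they are supplied as plain numbers, and `render_chord` and its parameters are hypothetical names, not the authors' implementation.

```python
# Sketch: render one bar of a generated chord with varying rhythm and
# volume, driven by estimated note density and loudness.

def render_chord(chord_pitches, note_density, loudness, bar_length=4.0):
    """Return (onset_beat, midi_pitch, velocity) events for one bar.

    note_density : notes per beat (controls how often the chord restrikes)
    loudness     : 0.0-1.0, mapped linearly to MIDI velocity 32-112
    """
    n_onsets = max(1, round(note_density * bar_length))
    step = bar_length / n_onsets
    velocity = int(32 + 80 * max(0.0, min(1.0, loudness)))
    events = []
    for i in range(n_onsets):
        onset = i * step
        for pitch in chord_pitches:
            events.append((onset, pitch, velocity))
    return events

# Example: a C-major triad, two strikes per beat, fairly loud
events = render_chord([60, 64, 67], note_density=2.0, loudness=0.8)
```

A higher predicted note density yields more chord restrikes per bar, and higher loudness yields higher MIDI velocities, which is the intuition behind the paper's dynamic rendering step.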
Related papers
- VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling [68.72384258320743]
We propose VidMuse, a framework for generating music aligned with video inputs.
VidMuse produces high-fidelity music that is both acoustically and semantically aligned with the video.
arXiv Detail & Related papers (2024-06-06T17:58:11Z)
- Diff-BGM: A Diffusion Model for Video Background Music Generation [16.94631443719866]
We propose a high-quality music-video dataset with detailed annotation and shot detection to provide multi-modal information about the video and music.
We then present evaluation metrics to assess music quality, including music diversity and alignment between music and video.
We propose the Diff-BGM framework to automatically generate the background music for a given video, which uses different signals to control different aspects of the music during the generation process.
arXiv Detail & Related papers (2024-05-20T09:48:36Z)
- Simple and Controllable Music Generation [94.61958781346176]
MusicGen is a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens.
Unlike prior work, MusicGen comprises a single-stage transformer LM together with efficient token interleaving patterns.
arXiv Detail & Related papers (2023-06-08T15:31:05Z)
- V2Meow: Meowing to the Visual Beat via Video-to-Music Generation [47.076283429992664]
V2Meow is a video-to-music generation system capable of producing high-quality music audio for a diverse range of video input types.
It synthesizes high-fidelity music audio waveforms solely by conditioning on pre-trained general-purpose visual features extracted from video frames.
arXiv Detail & Related papers (2023-05-11T06:26:41Z)
- Video Background Music Generation: Dataset, Method and Evaluation [31.15901120245794]
We introduce a complete recipe including dataset, benchmark model, and evaluation metric for video background music generation.
We present SymMV, a video and symbolic music dataset with various musical annotations.
We also propose a benchmark video background music generation framework named V-MusProd.
arXiv Detail & Related papers (2022-11-21T08:39:48Z)
- Quantized GAN for Complex Music Generation from Dance Videos [48.196705493763986]
We present Dance2Music-GAN (D2M-GAN), a novel adversarial multi-modal framework that generates musical samples conditioned on dance videos.
Our proposed framework takes dance video frames and human body motion as input, and learns to generate music samples that plausibly accompany the corresponding input.
arXiv Detail & Related papers (2022-04-01T17:53:39Z)
- InverseMV: Composing Piano Scores with a Convolutional Video-Music Transformer [2.157478102241537]
We propose a novel attention-based model VMT that automatically generates piano scores from video frames.
Using music generated from models also prevents potential copyright infringements.
We release a new dataset composed of over 7 hours of piano scores with fine alignment between pop music videos and MIDI files.
arXiv Detail & Related papers (2021-12-31T06:39:28Z)
- Lets Play Music: Audio-driven Performance Video Generation [58.77609661515749]
We propose a new task named Audio-driven Performance Video Generation (APVG).
APVG aims to synthesize the video of a person playing a certain instrument guided by a given music audio clip.
arXiv Detail & Related papers (2020-11-05T03:13:46Z)
- Foley Music: Learning to Generate Music from Videos [115.41099127291216]
Foley Music is a system that can synthesize plausible music for a silent video clip about people playing musical instruments.
We first identify two key intermediate representations for a successful video to music generator: body keypoints from videos and MIDI events from audio recordings.
We present a Graph-Transformer framework that can accurately predict MIDI event sequences in accordance with the body movements.
arXiv Detail & Related papers (2020-07-21T17:59:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.