Video2Music: Suitable Music Generation from Videos using an Affective
Multimodal Transformer model
- URL: http://arxiv.org/abs/2311.00968v2
- Date: Mon, 4 Mar 2024 07:54:31 GMT
- Title: Video2Music: Suitable Music Generation from Videos using an Affective
Multimodal Transformer model
- Authors: Jaeyong Kang, Soujanya Poria, Dorien Herremans
- Abstract summary: We develop a generative music AI framework, Video2Music, that generates music to match a provided video.
In a thorough experiment, we show that our proposed framework can generate music that matches the video content in terms of emotion.
- Score: 32.801213106782335
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Numerous studies in the field of music generation have demonstrated
impressive performance, yet virtually no models are able to directly generate
music to match accompanying videos. In this work, we develop a generative music
AI framework, Video2Music, that can generate music to match a provided video. We first curated a
unique collection of music videos. Then, we analysed the music videos to obtain
semantic, scene offset, motion, and emotion features. These distinct features
are then employed as guiding input to our music generation model. We transcribe
the audio files into MIDI and chords, and extract features such as note density
and loudness. This results in a rich multimodal dataset, called MuVi-Sync, on
which we train a novel Affective Multimodal Transformer (AMT) model to generate
music given a video. This model includes a novel mechanism to enforce affective
similarity between video and music. Finally, post-processing is performed based
on a biGRU-based regression model to estimate note density and loudness based
on the video features. This ensures a dynamic rendering of the generated chords
with varying rhythm and volume. In a thorough experiment, we show that our
proposed framework can generate music that matches the video content in terms
of emotion. The musical quality, along with the quality of music-video matching,
is confirmed in a user study. The proposed AMT model, along with the new
MuVi-Sync dataset, presents a promising step for the new task of music
generation for videos.
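To make the described post-processing step more concrete, below is a minimal PyTorch sketch of a biGRU-based regressor of the kind the abstract mentions: it maps a sequence of per-frame video feature vectors to per-step note density and loudness estimates, which can then drive the rhythm and volume of the generated chords. This is an illustrative sketch only, not the authors' implementation; the feature dimension, hidden size, and the choice of a single linear head for both targets are assumptions.

import torch
import torch.nn as nn

class BiGRURegressor(nn.Module):
    """Illustrative (not the authors') biGRU regressor: video features -> note density and loudness."""
    def __init__(self, feat_dim: int = 768, hidden: int = 128):
        super().__init__()
        # Bidirectional GRU over the per-frame video feature sequence.
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True, bidirectional=True)
        # One linear head predicting [note_density, loudness] at every time step (assumed output layout).
        self.head = nn.Linear(2 * hidden, 2)

    def forward(self, video_feats: torch.Tensor) -> torch.Tensor:
        # video_feats: (batch, time, feat_dim)
        out, _ = self.gru(video_feats)   # (batch, time, 2 * hidden)
        return self.head(out)            # (batch, time, 2)

# Hypothetical usage: 8 frames of 768-dimensional video features.
model = BiGRURegressor()
preds = model(torch.randn(1, 8, 768))
print(preds.shape)  # torch.Size([1, 8, 2])

In the full framework, such predicted note density and loudness curves would be used to render the AMT-generated chords with varying rhythm and volume.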
Related papers
- MuVi: Video-to-Music Generation with Semantic Alignment and Rhythmic Synchronization [52.498942604622165]
This paper presents MuVi, a framework to generate music that aligns with video content.
MuVi analyzes video content through a specially designed visual adaptor to extract contextually and temporally relevant features.
We show that MuVi demonstrates superior performance in both audio quality and temporal synchronization.
arXiv Detail & Related papers (2024-10-16T18:44:56Z) - UniMuMo: Unified Text, Music and Motion Generation [57.72514622935806]
We introduce UniMuMo, a unified multimodal model capable of taking arbitrary text, music, and motion data as input conditions to generate outputs across all three modalities.
By converting music, motion, and text into token-based representation, our model bridges these modalities through a unified encoder-decoder transformer architecture.
arXiv Detail & Related papers (2024-10-06T16:04:05Z) - VMAS: Video-to-Music Generation via Semantic Alignment in Web Music Videos [32.741262543860934]
We present a framework for learning to generate background music from video inputs.
We develop a generative video-music Transformer with a novel semantic video-music alignment scheme.
A new temporal video encoder architecture allows us to efficiently process videos consisting of many densely sampled frames.
arXiv Detail & Related papers (2024-09-11T17:56:48Z) - VidMuse: A Simple Video-to-Music Generation Framework with Long-Short-Term Modeling [71.01050359126141]
We propose VidMuse, a framework for generating music aligned with video inputs.
VidMuse produces high-fidelity music that is both acoustically and semantically aligned with the video.
arXiv Detail & Related papers (2024-06-06T17:58:11Z) - Diff-BGM: A Diffusion Model for Video Background Music Generation [16.94631443719866]
We propose a high-quality music-video dataset with detailed annotation and shot detection to provide multi-modal information about the video and music.
We then present evaluation metrics to assess music quality, including music diversity and alignment between music and video.
We propose the Diff-BGM framework to automatically generate the background music for a given video, which uses different signals to control different aspects of the music during the generation process.
arXiv Detail & Related papers (2024-05-20T09:48:36Z) - V2Meow: Meowing to the Visual Beat via Video-to-Music Generation [47.076283429992664]
V2Meow is a video-to-music generation system capable of producing high-quality music audio for a diverse range of video input types.
It synthesizes high-fidelity music audio waveforms solely by conditioning on pre-trained general-purpose visual features extracted from video frames.
arXiv Detail & Related papers (2023-05-11T06:26:41Z) - Video Background Music Generation: Dataset, Method and Evaluation [31.15901120245794]
We introduce a complete recipe including dataset, benchmark model, and evaluation metric for video background music generation.
We present SymMV, a video and symbolic music dataset with various musical annotations.
We also propose a benchmark video background music generation framework named V-MusProd.
arXiv Detail & Related papers (2022-11-21T08:39:48Z) - Let's Play Music: Audio-driven Performance Video Generation [58.77609661515749]
We propose a new task named Audio-driven Performance Video Generation (APVG).
APVG aims to synthesize the video of a person playing a certain instrument guided by a given music audio clip.
arXiv Detail & Related papers (2020-11-05T03:13:46Z) - Foley Music: Learning to Generate Music from Videos [115.41099127291216]
Foley Music is a system that can synthesize plausible music for a silent video clip about people playing musical instruments.
We first identify two key intermediate representations for a successful video to music generator: body keypoints from videos and MIDI events from audio recordings.
We present a Graph-Transformer framework that can accurately predict MIDI event sequences in accordance with the body movements.
arXiv Detail & Related papers (2020-07-21T17:59:06Z)