Generating Separated Singing Vocals Using a Diffusion Model Conditioned on Music Mixtures
- URL: http://arxiv.org/abs/2511.21342v1
- Date: Wed, 26 Nov 2025 12:49:35 GMT
- Title: Generating Separated Singing Vocals Using a Diffusion Model Conditioned on Music Mixtures
- Authors: Genís Plaja-Roglans, Yun-Ning Hung, Xavier Serra, Igor Pereira
- Abstract summary: In this work, we explore singing voice separation from real music recordings using a diffusion model. We present a study of the sampling algorithm, highlighting the effects of the user-configurable parameters.
- Score: 12.393086516044866
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Separating the individual elements in a musical mixture is an essential process for music analysis and practice. While this is generally addressed using neural networks optimized to mask or transform the time-frequency representation of a mixture to extract the target sources, the flexibility and generalization capabilities of generative diffusion models are giving rise to a novel class of solutions for this complicated task. In this work, we explore singing voice separation from real music recordings using a diffusion model which is trained to generate the solo vocals conditioned on the corresponding mixture. Our approach improves upon prior generative systems and achieves competitive objective scores against non-generative baselines when trained with supplementary data. The iterative nature of diffusion sampling enables the user to control the quality-efficiency trade-off, and also refine the output when needed. We present an ablation study of the sampling algorithm, highlighting the effects of the user-configurable parameters.
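The iterative sampling described in the abstract can be illustrated with a minimal, hedged sketch of a DDPM-style reverse process conditioned on the mixture. This is not the authors' implementation: the model interface, noise schedule, and step count below are assumptions made only to show how the number of iterations controls the quality-efficiency trade-off.

```python
# Sketch only: assumes `model(x_t, mixture, t)` predicts the noise added at step t.
import torch

@torch.no_grad()
def sample_vocals(model, mixture, num_steps=50):
    """Iteratively denoise Gaussian noise into solo vocals, conditioned on the mixture."""
    betas = torch.linspace(1e-4, 0.02, num_steps)        # assumed linear noise schedule
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn_like(mixture)                         # start from pure Gaussian noise
    for t in reversed(range(num_steps)):
        eps = model(x, mixture, torch.tensor([t]))        # predicted noise at step t
        # DDPM posterior mean for x_{t-1}
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise           # add sampling noise except at the final step
    return x                                              # estimated separated vocals
```

Fewer steps mean faster separation at some cost in quality; re-running the last few steps from a partially denoised estimate is one way such a sampler can refine its output.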
Related papers
- Efficient and Fast Generative-Based Singing Voice Separation using a Latent Diffusion Model [12.393086516044866]
In this work, we study the potential of diffusion models to advance toward bridging this gap.
We focus on generative singing voice separation relying on corresponding pairs of isolated vocals and mixtures for training.
To align with creative mixtures, we leverage latent diffusion: the system generates samples encoded in a compact latent space, and subsequently decodes these into audio.
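A minimal sketch of the latent-space pipeline this summary describes, assuming hypothetical `encoder`, `decoder`, and `latent_sampler` components rather than the paper's actual modules:

```python
# Sketch only: the three components are stand-ins, not the paper's modules.
import torch

@torch.no_grad()
def separate_in_latent_space(encoder, decoder, latent_sampler, mixture_audio):
    z_mix = encoder(mixture_audio)      # compress the mixture into a compact latent
    z_vocals = latent_sampler(z_mix)    # run the diffusion loop in latent space, conditioned on the mixture latent
    return decoder(z_vocals)            # decode the generated latent back to a waveform
```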
arXiv Detail & Related papers (2025-11-25T16:34:07Z)
- High-Quality Sound Separation Across Diverse Categories via Visually-Guided Generative Modeling [65.02357548201188]
We propose DAVIS, a Diffusion-based Audio-VIsual Separation framework that solves the audio-visual sound source separation task through generative learning.
Our framework operates by synthesizing the desired separated sound spectrograms directly from a noise distribution, conditioned concurrently on the mixed audio input and associated visual information.
arXiv Detail & Related papers (2025-09-26T08:46:00Z)
- Unleashing the Power of Natural Audio Featuring Multiple Sound Sources [54.38251699625379]
Universal sound separation aims to extract clean audio tracks corresponding to distinct events from mixed audio.
We propose ClearSep, a framework that employs a data engine to decompose complex naturally mixed audio into multiple independent tracks.
In experiments, ClearSep achieves state-of-the-art performance across multiple sound separation tasks.
arXiv Detail & Related papers (2025-04-24T17:58:21Z)
- Score-informed Music Source Separation: Improving Synthetic-to-real Generalization in Classical Music [8.468436398420764]
Music source separation is the task of separating a mixture of instruments into constituent tracks.
We propose two ways of using musical scores to aid music source separation: a score-informed model and a score-only model.
The score-informed model improves separation results compared to a baseline approach, but struggles to generalize from synthetic to real data.
arXiv Detail & Related papers (2025-03-10T14:08:31Z)
- Dimension-free Score Matching and Time Bootstrapping for Diffusion Models [19.62665684173391]
Diffusion models generate samples by estimating the score function of the target distribution at various noise levels.
We introduce a martingale-based error decomposition and sharp variance bounds, enabling efficient learning from dependent data.
Building on these insights, we propose Bootstrapped Score Matching (BSM), a variance reduction technique that leverages previously learned scores to improve accuracy at higher noise levels.
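For context, the standard denoising score matching objective that such analyses build on can be sketched as follows; this is the textbook loss, not the paper's BSM variant or its martingale decomposition.

```python
# Sketch of standard denoising score matching: the network is trained to
# approximate the score of the Gaussian-perturbed data distribution.
import torch

def denoising_score_matching_loss(score_net, x0, sigma):
    noise = torch.randn_like(x0)
    x_noisy = x0 + sigma * noise    # perturb clean data at noise level sigma
    target = -noise / sigma         # score of the Gaussian perturbation kernel
    return ((score_net(x_noisy, sigma) - target) ** 2).mean()
```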
arXiv Detail & Related papers (2025-02-14T18:32:22Z)
- Blue noise for diffusion models [50.99852321110366]
We introduce a novel and general class of diffusion models taking correlated noise within and across images into account.
Our framework allows introducing correlation across images within a single mini-batch to improve gradient flow.
We perform both qualitative and quantitative evaluations on a variety of datasets using our method.
arXiv Detail & Related papers (2024-02-07T14:59:25Z)
- Bass Accompaniment Generation via Latent Diffusion [0.0]
We present a controllable system for generating single stems to accompany musical mixes of arbitrary length.
At the core of our method are audio autoencoders that efficiently compress audio waveform samples into invertible latent representations.
Our controllable conditional audio generation framework represents a significant step forward in creating generative AI tools to assist musicians in music production.
arXiv Detail & Related papers (2024-02-02T13:44:47Z)
- SpecDiff-GAN: A Spectrally-Shaped Noise Diffusion GAN for Speech and Music Synthesis [0.0]
We introduce SpecDiff-GAN, a neural vocoder based on HiFi-GAN.
We show the merits of our proposed model for speech and music synthesis on several datasets.
arXiv Detail & Related papers (2024-01-30T09:17:57Z)
- Boosting Fast and High-Quality Speech Synthesis with Linear Diffusion [85.54515118077825]
This paper proposes a linear diffusion model (LinDiff) based on an ordinary differential equation to simultaneously reach fast inference and high sample quality.
To reduce computational complexity, LinDiff employs a patch-based processing approach that partitions the input signal into small patches.
Our model can synthesize speech of a quality comparable to that of autoregressive models with faster synthesis speed.
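The patch-based processing mentioned in this summary can be sketched as follows; the patch size and tensor layout are assumptions, not LinDiff's actual configuration.

```python
# Sketch only: splits a waveform into fixed-size patches that can be processed as tokens.
import torch
import torch.nn.functional as F

def patchify(signal: torch.Tensor, patch_size: int = 256) -> torch.Tensor:
    """Turn a (batch, samples) waveform into (batch, num_patches, patch_size) patches."""
    batch, length = signal.shape
    pad = (-length) % patch_size           # right-pad so the length divides evenly
    signal = F.pad(signal, (0, pad))
    return signal.view(batch, -1, patch_size)
```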
arXiv Detail & Related papers (2023-06-09T07:02:43Z)
- CRASH: Raw Audio Score-based Generative Modeling for Controllable High-resolution Drum Sound Synthesis [0.0]
We propose a novel score-based generative model for unconditional raw audio synthesis.
Our proposed method closes the gap with GAN-based methods on raw audio, while offering more flexible generation capabilities.
arXiv Detail & Related papers (2021-06-14T13:48:03Z)
- DiffSinger: Diffusion Acoustic Model for Singing Voice Synthesis [53.19363127760314]
DiffSinger is a parameterized Markov chain which iteratively converts the noise into mel-spectrogram conditioned on the music score.
The evaluations conducted on the Chinese singing dataset demonstrate that DiffSinger outperforms state-of-the-art SVS work with a notable margin.
arXiv Detail & Related papers (2021-05-06T05:21:42Z)
- Unsupervised Cross-Domain Singing Voice Conversion [105.1021715879586]
We present a wav-to-wav generative model for the task of singing voice conversion from any identity.
Our method combines an acoustic model, trained for automatic speech recognition, with extracted melody features to drive a waveform-based generator.
arXiv Detail & Related papers (2020-08-06T18:29:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.