Unsupervised Source Separation via Bayesian Inference in the Latent
Domain
- URL: http://arxiv.org/abs/2110.05313v1
- Date: Mon, 11 Oct 2021 14:32:55 GMT
- Title: Unsupervised Source Separation via Bayesian Inference in the Latent
Domain
- Authors: Michele Mancusi, Emilian Postolache, Marco Fumero, Andrea Santilli,
Luca Cosmo, Emanuele Rodolà
- Abstract summary: State of the art audio source separation models rely on supervised data-driven approaches.
We propose a simple yet effective unsupervised separation algorithm, which operates directly on a latent representation of time-domain signals.
We validate our approach on the Slakh dataset arXiv:1909.08494, demonstrating results in line with state of the art supervised approaches.
- Score: 4.583433328833251
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State of the art audio source separation models rely on supervised
data-driven approaches, which can be expensive in terms of labeling resources.
On the other hand, approaches for training these models without any direct
supervision are typically highly demanding in terms of memory and time
requirements, and remain impractical at inference time. We aim to
tackle these limitations by proposing a simple yet effective unsupervised
separation algorithm, which operates directly on a latent representation of
time-domain signals. Our algorithm relies on deep Bayesian priors in the form
of pre-trained autoregressive networks to model the probability distributions
of each source. We leverage the low cardinality of the discrete latent space,
trained with a novel loss term imposing a precise arithmetic structure on it,
to perform exact Bayesian inference without relying on an approximation
strategy. We validate our approach on the Slakh dataset arXiv:1909.08494,
demonstrating results in line with state-of-the-art supervised approaches while
requiring fewer resources than other unsupervised methods.
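The abstract's key claim is that a small discrete latent vocabulary makes exact Bayesian inference tractable: with K possible tokens per source, the posterior over a pair of source tokens given a mixture token can be enumerated and normalised over all K^2 combinations. The sketch below illustrates that idea on a toy scale; the additive mod-K mixing rule, the variable names, and the random priors are illustrative assumptions, not the authors' model (whose priors come from pre-trained autoregressive networks).

```python
import numpy as np

# Toy sketch: exact posterior over two discrete source tokens given a
# mixture token, enumerating all K^2 pairs (feasible because K is small).
K = 4  # latent vocabulary size (illustrative)
rng = np.random.default_rng(0)

# Stand-ins for the per-token priors of each source; in the paper these
# would be produced by pre-trained autoregressive networks.
p1 = rng.dirichlet(np.ones(K))
p2 = rng.dirichlet(np.ones(K))

def exact_posterior(m):
    """Posterior over source-token pairs (a, b) given mixture token m,
    under an assumed hard additive structure a + b = m (mod K)."""
    post = np.zeros((K, K))
    for a in range(K):
        for b in range(K):
            if (a + b) % K == m:       # likelihood is 1 iff consistent
                post[a, b] = p1[a] * p2[b]
    return post / post.sum()           # exact normalisation over K^2 terms

post = exact_posterior(m=2)
a_hat, b_hat = np.unravel_index(post.argmax(), post.shape)
```

Because the normalising constant is an explicit finite sum, no variational or sampling approximation is needed; this is the low-cardinality property the abstract leverages.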
Related papers
- Source-Free Domain-Invariant Performance Prediction [68.39031800809553]
We propose a source-free approach centred on uncertainty-based estimation, using a generative model for calibration in the absence of source data.
Our experiments on benchmark object recognition datasets reveal that existing source-based methods fall short with limited source sample availability.
Our approach significantly outperforms the current state-of-the-art source-free and source-based methods, affirming its effectiveness in domain-invariant performance estimation.
arXiv Detail & Related papers (2024-08-05T03:18:58Z)
- Unsupervised Training of Convex Regularizers using Maximum Likelihood Estimation [12.625383613718636]
We propose an unsupervised approach using maximum marginal likelihood estimation to train a convex neural network-based image regularization term directly on noisy measurements.
Experiments demonstrate that the proposed method produces priors that are near competitive when compared to the analogous supervised training method for various image corruption operators.
arXiv Detail & Related papers (2024-04-08T12:27:00Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating
Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
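The deterministic backbone of such ADMM-based schemes is consensus ADMM, where each worker solves a local subproblem on its private data shard and a consensus variable is formed by averaging. The following is a minimal sketch of plain consensus ADMM for distributed least squares, not the paper's sampling scheme; the data, shard count, and penalty parameter are illustrative assumptions.

```python
import numpy as np

# Consensus ADMM for distributed least squares: each worker holds a
# private shard (A_i, y_i) and agrees on a shared parameter z.
rng = np.random.default_rng(1)
d, rho = 3, 1.0
w_true = np.array([1.0, -2.0, 0.5])

shards = []
for _ in range(2):  # two workers, noiseless data for clarity
    A = rng.normal(size=(50, d))
    shards.append((A, A @ w_true))

x = [np.zeros(d) for _ in shards]  # local primal variables
u = [np.zeros(d) for _ in shards]  # scaled dual variables
z = np.zeros(d)                    # consensus variable

for _ in range(100):
    for i, (A, y) in enumerate(shards):
        # local x-update: argmin ||A x - y||^2 + (rho/2)||x - z + u_i||^2
        x[i] = np.linalg.solve(A.T @ A + rho * np.eye(d),
                               A.T @ y + rho * (z - u[i]))
    z = np.mean([xi + ui for xi, ui in zip(x, u)], axis=0)  # averaging step
    for i in range(len(shards)):
        u[i] += x[i] - z  # dual ascent on the consensus constraint
```

A sampling variant replaces the deterministic local updates with stochastic ones, which is the direction the paper above takes.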
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Direct Unsupervised Denoising [60.71146161035649]
Unsupervised denoisers do not directly produce a single prediction, such as the MMSE estimate.
We present an alternative approach that trains a deterministic network alongside the VAE to directly predict a central tendency.
arXiv Detail & Related papers (2023-10-27T13:02:12Z)
- Observation-Guided Diffusion Probabilistic Models [41.749374023639156]
We propose a novel diffusion-based image generation method called the observation-guided diffusion probabilistic model (OGDM).
Our approach reestablishes the training objective by integrating the guidance of the observation process with the Markov chain.
We demonstrate the effectiveness of our training algorithm using diverse inference techniques on strong diffusion model baselines.
arXiv Detail & Related papers (2023-10-06T06:29:06Z)
- Exploiting Temporal Structures of Cyclostationary Signals for
Data-Driven Single-Channel Source Separation [98.95383921866096]
We study the problem of single-channel source separation (SCSS).
We focus on cyclostationary signals, which are particularly suitable in a variety of application domains.
We propose a deep learning approach using a U-Net architecture, which is competitive with the minimum MSE estimator.
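The minimum-MSE estimator that the learned U-Net is benchmarked against has a simple closed form in the classical Gaussian case: for independent zero-mean sources, the best linear estimate of one source from the mixture is a variance-ratio gain (the per-sample Wiener solution). The sketch below illustrates that baseline; the Gaussian/stationary setting and chosen variances are illustrative assumptions, not the cyclostationary model of the paper.

```python
import numpy as np

# Classical MMSE baseline for single-channel separation of two
# independent zero-mean Gaussian sources from their sum.
rng = np.random.default_rng(2)
n = 10_000
var1, var2 = 4.0, 1.0
s1 = rng.normal(scale=np.sqrt(var1), size=n)
s2 = rng.normal(scale=np.sqrt(var2), size=n)
mix = s1 + s2

# E[s1 | mix] reduces to a scalar Wiener gain: var1 / (var1 + var2).
s1_hat = (var1 / (var1 + var2)) * mix

mse_est = np.mean((s1_hat - s1) ** 2)   # ≈ var1*var2/(var1+var2)
mse_naive = np.mean((mix - s1) ** 2)    # using the raw mixture as estimate
```

Cyclostationary signals carry extra periodic structure that this stationary baseline ignores, which is the gap the data-driven approach exploits.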
arXiv Detail & Related papers (2022-08-22T14:04:56Z)
- A Prototype-Oriented Framework for Unsupervised Domain Adaptation [52.25537670028037]
We provide a memory and computation-efficient probabilistic framework to extract class prototypes and align the target features with them.
We demonstrate the general applicability of our method on a wide range of scenarios, including single-source, multi-source, class-imbalance, and source-private domain adaptation.
arXiv Detail & Related papers (2021-10-22T19:23:22Z)
- Bayesian Imaging With Data-Driven Priors Encoded by Neural Networks:
Theory, Methods, and Algorithms [2.266704469122763]
This paper proposes a new methodology for performing Bayesian inference in imaging inverse problems where the prior knowledge is available in the form of training data.
We establish the existence and well-posedness of the associated posterior moments under easily verifiable conditions.
A model accuracy analysis suggests that the Bayesian probabilities reported by the data-driven models are also remarkably accurate under a frequentist definition.
arXiv Detail & Related papers (2021-03-18T11:34:08Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain
Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Unsupervised Audio Source Separation using Generative Priors [43.35195236159189]
We propose a novel approach for audio source separation based on generative priors trained on individual sources.
Our approach simultaneously searches in the source-specific latent spaces to effectively recover the constituent sources.
arXiv Detail & Related papers (2020-05-28T03:57:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.