Large-Sample Properties of Non-Stationary Source Separation for Gaussian Signals
- URL: http://arxiv.org/abs/2209.10176v1
- Date: Wed, 21 Sep 2022 08:13:20 GMT
- Title: Large-Sample Properties of Non-Stationary Source Separation for Gaussian Signals
- Authors: François Bachoc, Christoph Muehlmann, Klaus Nordhausen, Joni Virta
- Abstract summary: We develop large-sample theory for NSS-JD, a popular method of non-stationary source separation.
We show that the unmixing estimator is consistent and converges to a limiting Gaussian distribution at the standard square-root rate.
Simulation experiments are used to verify the theoretical results and to study the impact of block length on the separation.
- Score: 2.2557806157585834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-stationary source separation is a well-established branch of blind source
separation with many different methods. However, large-sample results are not
available for any of these methods. To bridge this gap, we develop large-sample
theory for NSS-JD, a popular method of non-stationary source separation based
on the joint diagonalization of block-wise covariance matrices. We work under
an instantaneous linear mixing model for independent Gaussian non-stationary
source signals together with a very general set of assumptions: besides
boundedness conditions, the only assumptions we make are that the sources
exhibit finite dependency and that their variance functions differ sufficiently
to be asymptotically separable. The consistency of the unmixing estimator and
its convergence to a limiting Gaussian distribution at the standard square root
rate are shown to hold under the previous conditions. Simulation experiments
are used to verify the theoretical results and to study the impact of block
length on the separation.
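As an illustration of the mechanics described in the abstract (a minimal sketch, not the authors' implementation), the NSS-JD idea can be reproduced in the simplest two-block case: after whitening with the full-sample covariance, joint diagonalization of the block-wise covariances reduces to a single symmetric eigendecomposition. The variance profiles and mixing matrix below are arbitrary choices for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent Gaussian sources whose variances change over time;
# this non-stationarity is what makes them separable.
n = 20000
s1 = rng.normal(0.0, np.where(np.arange(n) < n // 2, 1.0, 3.0))
s2 = rng.normal(0.0, np.where(np.arange(n) < n // 2, 2.0, 0.5))
S = np.column_stack([s1, s2])

# Instantaneous linear mixing: X = S A^T, with A the unknown mixing matrix.
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = S @ A.T

# Step 1: whiten with the full-sample covariance.
X = X - X.mean(axis=0)
cov = X.T @ X / n
evals, evecs = np.linalg.eigh(cov)
W = evecs @ np.diag(evals ** -0.5) @ evecs.T  # symmetric whitening matrix
Z = X @ W.T

# Step 2: with two blocks, jointly diagonalizing the whitened block-wise
# covariances reduces to one eigendecomposition of the first block's covariance.
B1 = Z[: n // 2]
C1 = B1.T @ B1 / len(B1)
_, U = np.linalg.eigh(C1)

# Unmixing estimate and recovered sources (up to sign, order, and scale).
W_hat = U.T @ W
S_hat = X @ W_hat.T

# Each recovered source should correlate strongly with exactly one true source.
print(np.round(np.abs(np.corrcoef(S_hat.T, S.T)[:2, 2:]), 2))
```

With more than two blocks, an approximate joint diagonalization routine replaces the single eigendecomposition, but the whiten-then-rotate structure is the same.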
Related papers
- Sourcerer: Sample-based Maximum Entropy Source Distribution Estimation [5.673617376471343]
We propose an approach that targets the maximum entropy distribution, i.e., one that prioritizes retaining as much uncertainty as possible.
Our method is purely sample-based - leveraging the Sliced-Wasserstein distance to measure the discrepancy between the dataset and simulations.
To demonstrate the utility of our approach, we infer source distributions for parameters of the Hodgkin-Huxley model from experimental datasets with thousands of single-neuron measurements.
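The Sliced-Wasserstein distance mentioned in this summary has a simple Monte Carlo estimator: project both samples onto random directions and average the one-dimensional Wasserstein distances, which for sorted equal-size samples reduce to quantile comparisons. This is a generic sketch of that estimator, not the Sourcerer method itself; the sample sizes and projection count are illustrative choices.

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=200, rng=None):
    """Monte Carlo estimate of the sliced Wasserstein-2 distance between
    two equal-size samples x, y of shape (n, d)."""
    rng = np.random.default_rng(rng)
    d = x.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)  # uniform random direction
        # 1D Wasserstein-2 between projections: sort and compare quantiles.
        px, py = np.sort(x @ theta), np.sort(y @ theta)
        total += np.mean((px - py) ** 2)
    return np.sqrt(total / n_proj)

rng = np.random.default_rng(1)
a = rng.normal(0.0, 1.0, size=(2000, 3))
b = rng.normal(0.0, 1.0, size=(2000, 3))  # same distribution as a
c = rng.normal(2.0, 1.0, size=(2000, 3))  # shifted distribution
print(sliced_wasserstein(a, b), sliced_wasserstein(a, c))
```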
arXiv Detail & Related papers (2024-02-12T17:13:02Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Robust Independence Tests with Finite Sample Guarantees for Synchronous Stochastic Linear Systems [0.0]
The paper introduces robust independence tests with non-asymptotically guaranteed significance levels for linear time-invariant systems.
Our method provides bounds for the type I error probabilities that are distribution-free, i.e., the innovations can have arbitrary distributions.
arXiv Detail & Related papers (2023-08-03T21:13:34Z)
- Statistically Optimal Generative Modeling with Maximum Deviation from the Empirical Distribution [2.1146241717926664]
We show that the Wasserstein GAN, constrained to left-invertible push-forward maps, generates distributions that avoid replication and significantly deviate from the empirical distribution.
Our most important contribution provides a finite-sample lower bound on the Wasserstein-1 distance between the generative distribution and the empirical one.
We also establish a finite-sample upper bound on the distance between the generative distribution and the true data-generating one.
arXiv Detail & Related papers (2023-07-31T06:11:57Z)
- Score-based Source Separation with Applications to Digital Communication Signals [72.6570125649502]
We propose a new method for separating superimposed sources using diffusion-based generative models.
Motivated by applications in radio-frequency (RF) systems, we are interested in sources with underlying discrete nature.
Our method can be viewed as a multi-source extension to the recently proposed score distillation sampling scheme.
arXiv Detail & Related papers (2023-06-26T04:12:40Z)
- Data thinning for convolution-closed distributions [2.299914829977005]
We propose data thinning, an approach for splitting an observation into two or more independent parts that sum to the original observation.
We show that data thinning can be used to validate the results of unsupervised learning approaches.
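For convolution-closed families the thinning step in this summary has a closed form. A hedged sketch for the Poisson case (one standard instance, not the paper's full generality): splitting X ~ Poisson(lam) via a Binomial(X, eps) draw yields two independent Poisson parts that sum back to X. The rate, thinning fraction, and sample size below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(2)

# Original observations: X ~ Poisson(lam). Thinning draws
# X1 | X ~ Binomial(X, eps) and sets X2 = X - X1. For the Poisson family,
# X1 ~ Poisson(eps*lam), X2 ~ Poisson((1-eps)*lam), X1 and X2 independent,
# and X1 + X2 reconstructs X exactly.
lam, eps, n = 10.0, 0.5, 200_000
x = rng.poisson(lam, size=n)
x1 = rng.binomial(x, eps)
x2 = x - x1

print(x1.mean(), x2.mean())       # both close to eps*lam = 5
print(np.corrcoef(x1, x2)[0, 1])  # close to 0: the parts are independent
```

One part can then be used to fit an unsupervised model and the other to validate it, without ever reusing the same observation.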
arXiv Detail & Related papers (2023-01-18T02:47:41Z)
- Machine-Learned Exclusion Limits without Binning [0.0]
We extend the Machine-Learned Likelihoods (MLL) method by including Kernel Density Estimators (KDE) to extract one-dimensional signal and background probability density functions.
We apply the method to two cases of interest at the LHC: a search for exotic Higgs bosons, and a $Z'$ boson decaying into lepton pairs.
arXiv Detail & Related papers (2022-11-09T11:04:50Z)
- Targeted Separation and Convergence with Kernel Discrepancies [66.48817218787006]
Kernel-based discrepancy measures are required to (i) separate a target P from other probability measures or (ii) control weak convergence to P.
In this article we derive new sufficient and necessary conditions to ensure (i) and (ii).
For MMDs on separable metric spaces, we characterize those kernels that separate Bochner embeddable measures and introduce simple conditions for separating all measures with unbounded kernels.
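The separation property (i) can be illustrated with the maximum mean discrepancy itself. Below is a generic biased (V-statistic) MMD² estimator with a Gaussian kernel, which is characteristic and therefore separates distinct distributions; this is textbook MMD, not the conditions derived in the paper, and the bandwidth and sample sizes are arbitrary.

```python
import numpy as np

def mmd2(x, y, bandwidth=1.0):
    """Biased (V-statistic) estimate of squared MMD with a Gaussian kernel."""
    def gram(a, b):
        # Pairwise squared distances, then the Gaussian kernel matrix.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bandwidth ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2 * gram(x, y).mean()

rng = np.random.default_rng(3)
p = rng.normal(0, 1, size=(500, 2))
q = rng.normal(0, 1, size=(500, 2))  # same distribution as p
r = rng.normal(1, 1, size=(500, 2))  # shifted distribution

# MMD^2 stays near zero for samples from the same distribution and is
# bounded away from zero for distinct ones (separation).
print(mmd2(p, q), mmd2(p, r))
```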
arXiv Detail & Related papers (2022-09-26T16:41:16Z)
- Robust Estimation for Nonparametric Families via Generative Adversarial Networks [92.64483100338724]
We provide a framework for designing Generative Adversarial Networks (GANs) to solve high dimensional robust statistics problems.
Our work extends these to robust mean estimation, second moment estimation, and robust linear regression.
In terms of techniques, our proposed GAN losses can be viewed as a smoothed and generalized Kolmogorov-Smirnov distance.
arXiv Detail & Related papers (2022-02-02T20:11:33Z)
- A Robust and Flexible EM Algorithm for Mixtures of Elliptical Distributions with Missing Data [71.9573352891936]
This paper tackles the problem of missing data imputation for noisy and non-Gaussian data.
A new EM algorithm is investigated for mixtures of elliptical distributions with the property of handling potential missing data.
Experimental results on synthetic data demonstrate that the proposed algorithm is robust to outliers and can be used with non-Gaussian data.
arXiv Detail & Related papers (2022-01-28T10:01:37Z) - Nonlinear Independent Component Analysis for Continuous-Time Signals [85.59763606620938]
We study the classical problem of recovering a multidimensional source process from observations of mixtures of this process.
We show that this recovery is possible for many popular models of processes (up to order and monotone scaling of their coordinates) if the mixture is given by a sufficiently differentiable, invertible function.
arXiv Detail & Related papers (2021-02-04T20:28:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.