Large-Sample Properties of Non-Stationary Source Separation for Gaussian
Signals
- URL: http://arxiv.org/abs/2209.10176v1
- Date: Wed, 21 Sep 2022 08:13:20 GMT
- Title: Large-Sample Properties of Non-Stationary Source Separation for Gaussian
Signals
- Authors: François Bachoc, Christoph Muehlmann, Klaus Nordhausen, Joni Virta
- Abstract summary: We develop large-sample theory for NSS-JD, a popular method of non-stationary source separation.
We show that the unmixing estimator is consistent and converges to a limiting Gaussian distribution at the standard square-root rate.
Simulation experiments are used to verify the theoretical results and to study the impact of block length on the separation.
- Score: 2.2557806157585834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Non-stationary source separation is a well-established branch of blind source
separation with many different methods. However, large-sample results are not
available for any of these methods. To bridge this gap, we develop large-sample
theory for NSS-JD, a popular method of non-stationary source separation based
on the joint diagonalization of block-wise covariance matrices. We work under
an instantaneous linear mixing model for independent Gaussian non-stationary
source signals together with a very general set of assumptions: besides
boundedness conditions, the only assumptions we make are that the sources
exhibit finite dependency and that their variance functions differ sufficiently
to be asymptotically separable. The consistency of the unmixing estimator and
its convergence to a limiting Gaussian distribution at the standard square root
rate are shown to hold under the previous conditions. Simulation experiments
are used to verify the theoretical results and to study the impact of block
length on the separation.
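To make the NSS-JD procedure concrete, here is a minimal sketch, not the authors' implementation: it assumes two equal-length blocks, in which case joint diagonalization of the whitened block covariances reduces to a single eigendecomposition. The function name `nss_jd_two_blocks` and the toy data are illustrative.
```python
import numpy as np

def nss_jd_two_blocks(x):
    """Estimate an unmixing matrix from two block-wise covariances.

    x: array of shape (T, p); rows are time points, columns are channels.
    With two equal-length blocks, the whitened block covariances sum
    (approximately) to twice the identity, so diagonalizing one of them
    diagonalizes both.
    """
    T, _ = x.shape
    xc = x - x.mean(axis=0)                        # center the series

    # Whiten with the full-sample covariance.
    cov_full = xc.T @ xc / T
    vals, vecs = np.linalg.eigh(cov_full)
    whitener = vecs @ np.diag(vals ** -0.5) @ vecs.T
    z = xc @ whitener                              # whitener is symmetric

    # Covariance of the whitened second block.
    z2 = z[T // 2:]
    cov2 = z2.T @ z2 / len(z2)

    # Its eigenvectors diagonalize both block covariances at once.
    _, u = np.linalg.eigh(cov2)
    unmixing = u.T @ whitener                      # estimated unmixing matrix
    return unmixing, z @ u                         # recovered sources

# Toy example: two Gaussian sources whose variance functions differ,
# mixed by a random matrix (all names and data here are illustrative).
rng = np.random.default_rng(0)
T = 4000
s = np.column_stack([
    rng.normal(scale=np.linspace(0.5, 2.0, T)),    # variance increasing
    rng.normal(scale=np.linspace(2.0, 0.5, T)),    # variance decreasing
])
A = rng.normal(size=(2, 2))
unmix, s_hat = nss_jd_two_blocks(s @ A.T)
```
With more than two blocks, the eigendecomposition step would be replaced by an approximate joint diagonalizer (e.g. Jacobi rotations), which is the general NSS-JD formulation.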
Related papers
- Theory on Score-Mismatched Diffusion Models and Zero-Shot Conditional Samplers [49.97755400231656]
We present the first performance guarantee with explicit dimensional dependencies for general score-mismatched diffusion samplers.
We show that score mismatches result in a distributional bias between the target and sampling distributions, proportional to the accumulated mismatch between the target and training distributions.
This result can be directly applied to zero-shot conditional samplers for any conditional model, irrespective of measurement noise.
arXiv Detail & Related papers (2024-10-17T16:42:12Z) - Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis [56.442307356162864]
We study the theoretical aspects of score-based discrete diffusion models under the Continuous Time Markov Chain (CTMC) framework.
We introduce a discrete-time sampling algorithm in the general state space $[S]^d$ that utilizes score estimators at predefined time points.
Our convergence analysis employs a Girsanov-based method and establishes key properties of the discrete score function.
arXiv Detail & Related papers (2024-10-03T09:07:13Z) - On the Identifiability of Sparse ICA without Assuming Non-Gaussianity [20.333908367541895]
We develop an identifiability theory that relies on second-order statistics without imposing further preconditions on the distribution of sources.
We propose two estimation methods based on second-order statistics and sparsity constraint.
arXiv Detail & Related papers (2024-08-19T18:51:42Z) - Sourcerer: Sample-based Maximum Entropy Source Distribution Estimation [5.673617376471343]
We propose an approach which targets the maximum entropy distribution, i.e., prioritizes retaining as much uncertainty as possible.
Our method is purely sample-based, leveraging the Sliced-Wasserstein distance to measure the discrepancy between the dataset and simulations.
To demonstrate the utility of our approach, we infer source distributions for parameters of the Hodgkin-Huxley model from experimental datasets with thousands of single-neuron measurements.
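As a pointer to what the Sliced-Wasserstein distance computes, here is a generic sample-based sketch (not the paper's code, and the names are illustrative): for equal sample sizes, the 1D Wasserstein-1 distance along each random projection is just the mean absolute difference of the sorted projections.
```python
import numpy as np

def sliced_wasserstein_1(x, y, n_proj=200, seed=0):
    """Monte Carlo estimate of the sliced Wasserstein-1 distance between
    two samples x and y of equal size n and shape (n, d): average the 1D
    W1 distances over random projection directions."""
    rng = np.random.default_rng(seed)
    dirs = rng.normal(size=(n_proj, x.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # unit directions
    px = np.sort(x @ dirs.T, axis=0)                     # shape (n, n_proj)
    py = np.sort(y @ dirs.T, axis=0)
    return np.abs(px - py).mean()

# Example: distance between two 3-dimensional Gaussian samples.
rng = np.random.default_rng(1)
a = rng.normal(size=(500, 3))
b = rng.normal(loc=0.5, size=(500, 3))
print(sliced_wasserstein_1(a, b))
```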
arXiv Detail & Related papers (2024-02-12T17:13:02Z) - Distributed Markov Chain Monte Carlo Sampling based on the Alternating
Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - Statistically Optimal Generative Modeling with Maximum Deviation from the Empirical Distribution [2.1146241717926664]
We show that the Wasserstein GAN, constrained to left-invertible push-forward maps, generates distributions that avoid replication and significantly deviate from the empirical distribution.
Our most important contribution provides a finite-sample lower bound on the Wasserstein-1 distance between the generative distribution and the empirical one.
We also establish a finite-sample upper bound on the distance between the generative distribution and the true data-generating one.
arXiv Detail & Related papers (2023-07-31T06:11:57Z) - Score-based Source Separation with Applications to Digital Communication
Signals [72.6570125649502]
We propose a new method for separating superimposed sources using diffusion-based generative models.
Motivated by applications in radio-frequency (RF) systems, we are interested in sources with underlying discrete nature.
Our method can be viewed as a multi-source extension to the recently proposed score distillation sampling scheme.
arXiv Detail & Related papers (2023-06-26T04:12:40Z) - Data thinning for convolution-closed distributions [2.299914829977005]
We propose data thinning, an approach for splitting an observation into two or more independent parts that sum to the original observation.
We show that data thinning can be used to validate the results of unsupervised learning approaches.
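For the Poisson distribution, one of the convolution-closed families covered, data thinning takes a particularly simple form; the sketch below (function name illustrative) splits each count into two independent Poisson parts that sum to the original observation.
```python
import numpy as np

def poisson_thin(x, eps=0.5, seed=0):
    """Data thinning for the Poisson case: given counts X ~ Poisson(lam),
    draw X1 | X ~ Binomial(X, eps) and set X2 = X - X1.  Then X1 and X2
    are independent, X1 ~ Poisson(eps * lam), X2 ~ Poisson((1 - eps) * lam),
    and X1 + X2 equals the original observation."""
    rng = np.random.default_rng(seed)
    x1 = rng.binomial(x, eps)
    return x1, x - x1

# Example: split one Poisson sample into independent train/validation parts.
rng = np.random.default_rng(2)
counts = rng.poisson(lam=10.0, size=1000)
train, val = poisson_thin(counts)
```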
arXiv Detail & Related papers (2023-01-18T02:47:41Z) - Machine-Learned Exclusion Limits without Binning [0.0]
We extend the Machine-Learned Likelihoods (MLL) method by including Kernel Density Estimators (KDE) to extract one-dimensional signal and background probability density functions.
We apply the method to two cases of interest at the LHC: a search for exotic Higgs bosons, and a $Z'$ boson decaying into lepton pairs.
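A minimal illustration of the KDE ingredient, assuming hypothetical one-dimensional discriminant scores (this is not the MLL pipeline itself): `scipy.stats.gaussian_kde` turns each sample into a smooth, unbinned pdf from which a per-event likelihood ratio can be formed.
```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical 1D discriminant scores for signal and background events.
rng = np.random.default_rng(3)
sig_scores = rng.normal(loc=1.0, scale=0.5, size=2000)
bkg_scores = rng.normal(loc=0.0, scale=0.8, size=2000)

# Fit a Gaussian KDE to each sample: a smooth, unbinned pdf estimate.
pdf_sig = gaussian_kde(sig_scores)
pdf_bkg = gaussian_kde(bkg_scores)

# Per-event signal-to-background likelihood ratio built from the KDE pdfs.
x = np.linspace(-2.0, 3.0, 5)
ratio = pdf_sig(x) / pdf_bkg(x)
print(ratio)
```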
arXiv Detail & Related papers (2022-11-09T11:04:50Z) - Targeted Separation and Convergence with Kernel Discrepancies [61.973643031360254]
Kernel-based discrepancy measures are required to (i) separate a target P from other probability measures or (ii) control weak convergence to P.
In this article we derive new sufficient and necessary conditions to ensure (i) and (ii).
For MMDs on separable metric spaces, we characterize those kernels that separate Bochner embeddable measures and introduce simple conditions for separating all measures with unbounded kernels.
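For reference, the separation property (i) is what makes the maximum mean discrepancy (MMD) usable as a two-sample statistic; below is a standard unbiased estimator of squared MMD with a Gaussian kernel, as a generic sketch rather than anything from the paper.
```python
import numpy as np

def mmd_squared(x, y, bandwidth=1.0):
    """Unbiased estimator of the squared MMD between samples x (n, d)
    and y (m, d) under a Gaussian kernel.  With a characteristic kernel,
    MMD is zero iff the two distributions agree (the separation property)."""
    def gram(a, b):
        sq = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-sq / (2.0 * bandwidth ** 2))
    kxx, kyy, kxy = gram(x, x), gram(y, y), gram(x, y)
    n, m = len(x), len(y)
    term_x = (kxx.sum() - np.trace(kxx)) / (n * (n - 1))  # drop diagonal
    term_y = (kyy.sum() - np.trace(kyy)) / (m * (m - 1))
    return term_x + term_y - 2.0 * kxy.mean()

# Example: MMD^2 between two Gaussian samples with shifted means.
rng = np.random.default_rng(4)
print(mmd_squared(rng.normal(size=(200, 2)), rng.normal(0.5, 1.0, (200, 2))))
```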
arXiv Detail & Related papers (2022-09-26T16:41:16Z) - Nonlinear Independent Component Analysis for Continuous-Time Signals [85.59763606620938]
We study the classical problem of recovering a multidimensional source process from observations of mixtures of this process.
We show that this recovery is possible for many popular models of processes (up to order and monotone scaling of their coordinates) if the mixture is given by a sufficiently differentiable, invertible function.
arXiv Detail & Related papers (2021-02-04T20:28:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.