A Robustness Analysis of Blind Source Separation
- URL: http://arxiv.org/abs/2303.10104v1
- Date: Fri, 17 Mar 2023 16:30:51 GMT
- Title: A Robustness Analysis of Blind Source Separation
- Authors: Alexander Schell
- Abstract summary: Blind source separation (BSS) aims to recover an unobserved signal from its mixture $X=f(S)$ under the condition that the transformation $f$ is invertible but unknown.
We present a general framework for analysing such violations and quantifying their impact on the blind recovery of $S$ from $X$.
We show that a generic BSS-solution in response to general deviations from its defining structural assumptions can be profitably analysed in the form of explicit continuity guarantees.
- Score: 91.3755431537592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Blind source separation (BSS) aims to recover an unobserved signal $S$ from
its mixture $X=f(S)$ under the condition that the effecting transformation $f$
is invertible but unknown. As this is a basic problem with many practical
applications, a fundamental issue is to understand how the solutions to this
problem behave when their supporting statistical prior assumptions are
violated. In the classical context of linear mixtures, we present a general
framework for analysing such violations and quantifying their impact on the
blind recovery of $S$ from $X$. Modelling $S$ as a multidimensional stochastic
process, we introduce an informative topology on the space of possible causes
underlying a mixture $X$, and show that the behaviour of a generic BSS-solution
in response to general deviations from its defining structural assumptions can
be profitably analysed in the form of explicit continuity guarantees with
respect to this topology. This allows for a flexible and convenient
quantification of general model uncertainty scenarios and amounts to the first
comprehensive robustness framework for BSS. Our approach is entirely
constructive, and we demonstrate its utility with novel theoretical guarantees
for a number of statistical applications.
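The linear-mixture setting of the abstract, $X = AS$ with $A$ invertible but unknown, can be illustrated with a small whitening-plus-FastICA experiment. The sources, mixing matrix, and tanh nonlinearity below are illustrative assumptions for a minimal sketch, not the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sources S: a square-ish wave and heavy-tailed noise (independent,
# non-Gaussian), standardised to zero mean and unit variance.
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sign(np.sin(3 * t)),
               rng.laplace(size=t.size)])
S = (S - S.mean(axis=1, keepdims=True)) / S.std(axis=1, keepdims=True)
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # unknown invertible mixing matrix
X = A @ S                               # observed mixture X = f(S)

# Step 1: whiten the mixture so its covariance becomes the identity.
X = X - X.mean(axis=1, keepdims=True)
cov = X @ X.T / X.shape[1]
d, E = np.linalg.eigh(cov)
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Step 2: FastICA fixed-point iterations with a tanh nonlinearity,
# extracting one component after another by deflation.
W = np.zeros((2, 2))
for i in range(2):
    w = rng.normal(size=2)
    w /= np.linalg.norm(w)
    for _ in range(200):
        g = np.tanh(Z.T @ w)
        g_prime = 1.0 - g ** 2
        w_new = Z @ g / Z.shape[1] - g_prime.mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)  # decorrelate from earlier rows
        w_new /= np.linalg.norm(w_new)
        converged = abs(abs(w_new @ w) - 1.0) < 1e-8
        w = w_new
        if converged:
            break
    W[i] = w

S_hat = W @ Z  # recovered sources, up to permutation and sign
```

Blind recovery is only identifiable up to permutation and sign, so success is checked by matching each true source to its best-correlated recovered component.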
Related papers
- A Stochastic Dynamical Theory of LLM Self-Adversariality: Modeling Severity Drift as a Critical Process [0.0]
This paper introduces a continuous-time dynamical framework for understanding how large language models (LLMs) may self-amplify latent biases or toxicity through their own chain-of-thought reasoning.
The model posits an instantaneous "severity" variable $x(t) \in [0,1]$ evolving under a stochastic differential equation (SDE) with a drift term $\mu(x)$ and diffusion $\sigma(x)$.
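A severity process of this kind can be simulated with an Euler-Maruyama discretisation. The particular drift and diffusion below are illustrative choices (not taken from the paper), picked so that the diffusion vanishes at both boundaries and the state stays in $[0,1]$.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative drift: amplifies mid-range severity (logistic-style growth).
def mu(x, lam=0.8):
    return lam * x * (1.0 - x)

# Illustrative diffusion: vanishes at x = 0 and x = 1.
def sigma(x, s=0.3):
    return s * np.sqrt(x * (1.0 - x))

def simulate(x0=0.05, T=10.0, n=1000):
    """Euler-Maruyama discretisation of dx = mu(x) dt + sigma(x) dW."""
    dt = T / n
    x = np.empty(n + 1)
    x[0] = x0
    for k in range(n):
        dW = rng.normal(scale=np.sqrt(dt))
        x[k + 1] = x[k] + mu(x[k]) * dt + sigma(x[k]) * dW
        x[k + 1] = min(max(x[k + 1], 0.0), 1.0)  # clip state into [0, 1]
    return x

path = simulate()
```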
arXiv Detail & Related papers (2025-01-28T08:08:25Z) - Benign Overfitting in Out-of-Distribution Generalization of Linear Models [19.203753135860016]
We take an initial step towards understanding benign overfitting in the Out-of-Distribution (OOD) regime.
We provide non-asymptotic guarantees proving that benign overfitting occurs in standard ridge regression.
We also present theoretical results for a more general family of target covariance matrices.
arXiv Detail & Related papers (2024-12-19T02:47:39Z) - ProDepth: Boosting Self-Supervised Multi-Frame Monocular Depth with Probabilistic Fusion [17.448021191744285]
Multi-frame monocular depth estimation relies on the geometric consistency between successive frames under the assumption of a static scene.
The presence of moving objects in dynamic scenes introduces inevitable inconsistencies, causing misaligned multi-frame feature matching and misleading self-supervision during training.
We propose a novel framework called ProDepth, which effectively addresses the mismatch problem caused by dynamic objects using a probabilistic approach.
arXiv Detail & Related papers (2024-07-12T14:37:49Z) - Equivalence of the Empirical Risk Minimization to Regularization on the Family of f-Divergences [45.935798913942904]
The solution to empirical risk minimization with $f$-divergence regularization (ERM-$f$DR) is presented.
Examples of the solution for particular choices of the function $f$ are presented.
arXiv Detail & Related papers (2024-02-01T11:12:00Z) - A Relational Intervention Approach for Unsupervised Dynamics
Generalization in Model-Based Reinforcement Learning [113.75991721607174]
We introduce an interventional prediction module to estimate the probability that two estimated $\hat{z}_i, \hat{z}_j$ belong to the same environment.
We empirically show that $\hat{Z}$ estimated by our method contains less redundant information than that of previous methods.
arXiv Detail & Related papers (2022-06-09T15:01:36Z) - On the Pitfalls of Heteroscedastic Uncertainty Estimation with
Probabilistic Neural Networks [23.502721524477444]
We present a synthetic example illustrating how this approach can lead to very poor but stable estimates.
We identify the culprit to be the log-likelihood loss, along with certain conditions that exacerbate the issue.
We present an alternative formulation, termed $\beta$-NLL, in which each data point's contribution to the loss is weighted by the $\beta$-exponentiated variance estimate.
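The $\beta$-NLL weighting described above can be written down directly: each point's Gaussian negative log-likelihood is scaled by its predicted variance raised to the power $\beta$. This is a minimal numpy sketch of that loss, with the function name and signature being my own.

```python
import numpy as np

def beta_nll(y, mean, var, beta=0.5, eps=1e-6):
    """Gaussian NLL per data point, reweighted by var**beta.

    With beta = 0 this reduces to the standard mean NLL; with beta = 1
    the variance weighting cancels the 1/var factor in the squared-error
    term. In actual training the weight is treated as a constant
    (gradient-stopped); here we only compute the loss value.
    """
    var = np.maximum(var, eps)                       # numerical floor
    nll = 0.5 * (np.log(var) + (y - mean) ** 2 / var)
    weight = var ** beta                             # beta-exponentiated variance
    return np.mean(weight * nll)
```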
arXiv Detail & Related papers (2022-03-17T08:46:17Z) - Benign Underfitting of Stochastic Gradient Descent [72.38051710389732]
We study to what extent stochastic gradient descent (SGD) may be understood as a "conventional" learning rule that achieves generalization performance by obtaining a good fit to the training data.
We analyze the closely related with-replacement SGD, for which an analogous phenomenon does not occur, and prove that its population risk does in fact converge at the optimal rate.
arXiv Detail & Related papers (2022-02-27T13:25:01Z) - CC-Cert: A Probabilistic Approach to Certify General Robustness of
Neural Networks [58.29502185344086]
In safety-critical machine learning applications, it is crucial to defend models against adversarial attacks.
It is important to provide provable guarantees for deep learning models against semantically meaningful input transformations.
We propose a new universal probabilistic certification approach based on Chernoff-Cramer bounds.
arXiv Detail & Related papers (2021-09-22T12:46:04Z) - A general sample complexity analysis of vanilla policy gradient [101.16957584135767]
Policy gradient (PG) is one of the most popular methods for solving reinforcement learning (RL) problems.
We provide a general sample complexity analysis of the "vanilla" PG method.
arXiv Detail & Related papers (2021-07-23T19:38:17Z) - Posterior Differential Regularization with f-divergence for Improving
Model Robustness [95.05725916287376]
We focus on methods that regularize the model posterior difference between clean and noisy inputs.
We generalize the posterior differential regularization to the family of $f$-divergences.
Our experiments show that regularizing the posterior differential with $f$-divergence results in improved model robustness.
arXiv Detail & Related papers (2020-10-23T19:58:01Z)
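The posterior-differential idea in the last entry, regularizing the divergence between a model's posteriors on clean and noisy inputs, can be sketched for the KL instance of the $f$-divergence family (the $f$-divergence with $f(t) = t \log t$). The function names and the symmetrisation are my own illustrative choices.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions."""
    p = np.clip(p, eps, 1.0)
    q = np.clip(q, eps, 1.0)
    return float(np.sum(p * np.log(p / q)))

def posterior_differential_penalty(p_clean, p_noisy):
    """Symmetrised KL between posteriors on clean and perturbed inputs.

    Added to the task loss, this term pushes the model to give similar
    predictive distributions for an input and its noisy counterpart.
    """
    return 0.5 * (kl(p_clean, p_noisy) + kl(p_noisy, p_clean))
```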
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.