Bayes-optimal limits in structured PCA, and how to reach them
- URL: http://arxiv.org/abs/2210.01237v2
- Date: Fri, 2 Jun 2023 13:08:45 GMT
- Title: Bayes-optimal limits in structured PCA, and how to reach them
- Authors: Jean Barbier, Francesco Camilli, Marco Mondelli and Manuel Saenz
- Abstract summary: We study the paradigmatic matrix model of principal components analysis (PCA), where a rank-one matrix is corrupted by additive noise.
We provide the first characterization of the Bayes-optimal limits of inference in this model.
We propose a novel approximate message passing algorithm (AMP), inspired by the theory of Adaptive Thouless-Anderson-Palmer equations.
- Score: 21.3083877172595
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How do statistical dependencies in measurement noise influence
high-dimensional inference? To answer this, we study the paradigmatic spiked
matrix model of principal components analysis (PCA), where a rank-one matrix is
corrupted by additive noise. We go beyond the usual independence assumption on
the noise entries, by drawing the noise from a low-order polynomial orthogonal
matrix ensemble. The resulting noise correlations make the setting relevant for
applications but analytically challenging. We provide the first
characterization of the Bayes-optimal limits of inference in this model. If the
spike is rotation-invariant, we show that standard spectral PCA is optimal.
However, for more general priors, both PCA and the existing approximate message
passing algorithm (AMP) fall short of achieving the information-theoretic
limits, which we compute using the replica method from statistical mechanics.
We thus propose a novel AMP, inspired by the theory of Adaptive
Thouless-Anderson-Palmer equations, which saturates the theoretical limit. This
AMP comes with a rigorous state evolution analysis tracking its performance.
Although we focus on specific noise distributions, our methodology can be
generalized to a wide class of trace matrix ensembles at the cost of more
involved expressions. Finally, despite the seemingly strong assumption of
rotation-invariant noise, our theory empirically predicts algorithmic
performance on real data, pointing at remarkable universality properties.
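For concreteness, the spiked matrix model and the baseline spectral-PCA estimator can be sketched as follows. This toy version substitutes i.i.d. Gaussian (Wigner) noise and a Rademacher spike for the paper's structured polynomial orthogonal ensemble; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 2000, 2.0                               # dimension and signal-to-noise ratio
x = rng.choice([-1.0, 1.0], size=n)              # Rademacher spike, ||x||^2 = n
W = rng.standard_normal((n, n))
W = (W + W.T) / np.sqrt(2 * n)                   # symmetric Gaussian noise, Var 1/n
Y = (lam / n) * np.outer(x, x) + W               # observed spiked matrix

eigvals, eigvecs = np.linalg.eigh(Y)
v = eigvecs[:, -1]                               # top eigenvector (spectral PCA)
overlap = abs(v @ x) / np.sqrt(n)                # normalized overlap in [0, 1]
print(f"squared overlap with the spike: {overlap**2:.3f}")
```

Above the spectral threshold (here lam > 1), the top eigenvector correlates with the spike; for i.i.d. noise the asymptotic squared overlap is 1 - 1/lam^2.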
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
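The core primitive of such gradient-free methods is a finite-difference surrogate for the gradient built from function values only. A minimal sketch (a hypothetical two-point estimator, not the paper's exact scheme):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order estimate of grad f(x) along a random direction."""
    rng = rng or np.random.default_rng()
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# Usage: minimize a smooth convex quadratic with zeroth-order SGD
f = lambda x: 0.5 * np.sum(x**2)
x = np.ones(10)
rng = np.random.default_rng(1)
for _ in range(2000):
    x = x - 0.05 * zo_gradient(f, x, rng=rng)    # plain SGD on the surrogate
print(f"f(x) after optimization: {f(x):.5f}")    # decreases toward 0
```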
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Information limits and Thouless-Anderson-Palmer equations for spiked matrix models with structured noise [19.496063739638924]
We consider a problem of Bayesian inference for a structured spiked matrix model.
We show how to predict the statistical limits using an efficient algorithm inspired by the theory of adaptive Thouless-Anderson-Palmer equations.
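For intuition, a bare-bones Bayes-AMP loop for the simpler i.i.d.-noise spiked model looks as follows; the adaptive-TAP approach summarized above modifies the Onsager correction to cope with structured noise. The informative initialization and all constants are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 2000, 2.0
x = rng.choice([-1.0, 1.0], size=n)              # Rademacher spike
W = rng.standard_normal((n, n))
W = (W + W.T) / np.sqrt(2 * n)                   # i.i.d. Wigner noise, Var 1/n
Y = (lam / n) * np.outer(x, x) + W

v = 0.5 * x + rng.standard_normal(n)             # informative init (illustration only)
u_old, b = np.zeros(n), 0.0
for _ in range(30):
    u = np.tanh(lam * v)                         # Bayes denoiser for a +/-1 prior
    b = lam * np.mean(1 - u**2)                  # Onsager coefficient
    v = Y @ u - b * u_old                        # power step with memory correction
    u_old = u
overlap = abs(u @ x) / (np.linalg.norm(u) * np.sqrt(n))
print(f"AMP overlap: {overlap:.3f}")
```

The Onsager term subtracts the part of the previous iterate fed back through the matrix, which is what keeps the effective noise Gaussian and makes the scalar state-evolution analysis exact.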
arXiv Detail & Related papers (2024-05-31T16:38:35Z) - Support Recovery in Sparse PCA with Non-Random Missing Data [27.3669650952144]
We analyze a practical algorithm for sparse PCA on incomplete and noisy data under a general non-random sampling scheme.
We provide theoretical justification that under certain conditions, we can recover the support of the sparse leading eigenvector with high probability.
We show that our algorithm outperforms several other sparse PCA approaches especially when the observed entries have good structural properties.
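A hedged sketch of the general recipe (not the paper's specific algorithm): estimate the covariance from incomplete data with inverse-propensity weights, then threshold the top eigenvector to read off the support.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, k = 50, 500, 5
v = np.zeros(d)
v[:k] = 1 / np.sqrt(k)                           # k-sparse leading eigenvector
scores = 3.0 * rng.standard_normal((n, 1))
X = scores @ v[None, :] + rng.standard_normal((n, d))
mask = rng.random((n, d)) < 0.8                  # entries observed with prob. 0.8
Xobs = np.where(mask, X, 0.0)

# Inverse-propensity-weighted covariance estimate from the observed entries
p = mask.mean(axis=0)                            # per-coordinate observation rates
S = (Xobs.T @ Xobs) / n / np.outer(p, p)
S[np.arange(d), np.arange(d)] *= p               # diagonal needs 1/p, not 1/p^2
u = np.linalg.eigh(S)[1][:, -1]                  # top eigenvector
support = np.flatnonzero(np.abs(u) > 0.5 / np.sqrt(k))
print(sorted(map(int, support)))
```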
arXiv Detail & Related papers (2023-02-03T04:20:25Z) - Optimizing the Noise in Self-Supervised Learning: from Importance Sampling to Noise-Contrastive Estimation [80.07065346699005]
It is widely assumed that the optimal noise distribution should be made equal to the data distribution, as in Generative Adversarial Networks (GANs).
We turn to Noise-Contrastive Estimation which grounds this self-supervised task as an estimation problem of an energy-based model of the data.
We soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to choosing the noise distribution equal to the data's.
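A self-contained toy of the NCE estimation problem: fit the mean of a 1-D Gaussian by logistic discrimination of data against noise samples. Here the noise is deliberately N(0, 1) rather than the data distribution; the setup and grid search are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=5000)           # data: N(2, 1), mean unknown
noise = rng.normal(0.0, 1.0, size=5000)          # noise distribution: N(0, 1)

def nce_objective(theta):
    # Log density ratio log[N(theta,1)/N(0,1)](x) = theta*x - theta^2/2
    h_data = theta * data - theta**2 / 2
    h_noise = theta * noise - theta**2 / 2
    # Logistic log-likelihood of classifying data vs. noise samples
    return -(np.log1p(np.exp(-h_data)).sum() + np.log1p(np.exp(h_noise)).sum())

grid = np.linspace(0.0, 4.0, 401)
theta_hat = grid[np.argmax([nce_objective(t) for t in grid])]
print(f"NCE estimate of the mean: {theta_hat:.2f}")
```

Because the model density is normalized here, no extra normalization parameter is needed; in the energy-based setting it is learned jointly.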
arXiv Detail & Related papers (2023-01-23T19:57:58Z) - Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - Robust Matrix Completion with Heavy-tailed Noise [0.5837881923712392]
This paper studies low-rank matrix completion in the presence of heavy-tailed, possibly asymmetric noise.
We adopt an adaptive Huber loss to accommodate heavy-tailed noise, which is robust against large and possibly asymmetric errors.
We prove that under merely a second moment condition on the error, the Euclidean error falls geometrically fast until achieving a minimax-optimal statistical estimation error.
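The robustness mechanism here is the Huber loss, whose influence function clips large residuals. A minimal location-estimation illustration of that mechanism (not matrix completion itself; all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = 5.0
x = theta_true + rng.standard_t(df=1, size=2000)  # Cauchy-tailed noise

def huber_estimate(x, delta=1.0, iters=100):
    theta = np.median(x)                          # robust initialization
    for _ in range(iters):
        psi = np.clip(x - theta, -delta, delta)   # Huber influence: clipped residual
        theta += psi.mean()                       # gradient step on the Huber loss
    return theta

print(huber_estimate(x))                          # close to 5 despite heavy tails
```

The clipping bounds each sample's contribution to the estimating equation, which is why a mere second-moment (or weaker) condition on the errors suffices.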
arXiv Detail & Related papers (2022-06-09T04:48:48Z) - The Optimal Noise in Noise-Contrastive Learning Is Not What You Think [80.07065346699005]
We show that deviating from this assumption can actually lead to better statistical estimators.
In particular, the optimal noise distribution is different from the data's and even from a different family.
arXiv Detail & Related papers (2022-03-02T13:59:20Z) - Optimizing Information-theoretical Generalization Bounds via Anisotropic Noise in SGLD [73.55632827932101]
We optimize the information-theoretical generalization bound by manipulating the noise structure in SGLD.
We prove that with constraint to guarantee low empirical risk, the optimal noise covariance is the square root of the expected gradient covariance.
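A sketch of an SGLD-style update with an injected noise covariance C (here the isotropic baseline C = I; the paper's result concerns choosing this covariance optimally). The sampling target and step size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[2.0, 0.3], [0.3, 1.0]])           # precision of the target Gaussian
C_sqrt = np.eye(2)                               # isotropic injected noise (baseline)

def sgld_step(w, grad, eta, C_sqrt, rng):
    # w <- w - eta * grad + sqrt(2 * eta) * C^{1/2} xi,  with xi ~ N(0, I)
    return w - eta * grad + np.sqrt(2 * eta) * (C_sqrt @ rng.standard_normal(w.shape))

w, samples = np.zeros(2), []
for _ in range(20000):
    w = sgld_step(w, A @ w, 1e-2, C_sqrt, rng)   # gradient of 0.5 * w^T A w is A w
    samples.append(w.copy())
cov = np.cov(np.array(samples[2000:]).T)         # discard burn-in, estimate covariance
print(cov)                                       # approaches inv(A) for small eta
```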
arXiv Detail & Related papers (2021-10-26T15:02:27Z) - Adaptive Multi-View ICA: Estimation of noise levels for optimal inference [65.94843987207445]
Adaptive Multi-View ICA (AVICA) is a noisy ICA model where each view is a linear mixture of shared independent sources with additive noise on the sources.
On synthetic data, AVICA yields better source estimates than other group ICA methods thanks to its explicit MMSE estimator.
On real magnetoencephalography (MEG) data, we provide evidence that the decomposition is less sensitive to sampling noise and that the noise variance estimates are biologically plausible.
arXiv Detail & Related papers (2021-02-22T13:10:12Z) - Beyond variance reduction: Understanding the true impact of baselines on policy optimization [24.09670734037029]
We show that learning dynamics are governed by the curvature of the loss function and the noise of the gradient estimates.
We present theoretical results showing that, at least for bandit problems, curvature and noise are not sufficient to explain the learning dynamics.
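For context, the textbook variance-reduction effect of a baseline, which the result above argues is not the whole story, can be seen in a two-armed bandit with the REINFORCE score-function estimator (toy setup, all values illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
theta = 0.0                                      # logit of P(arm 1)
rewards = np.array([1.0, 3.0])                   # deterministic rewards of the two arms

def grad_samples(baseline, n=100_000):
    p1 = 1.0 / (1.0 + np.exp(-theta))            # probability of pulling arm 1
    a = (rng.random(n) < p1).astype(int)         # sampled actions
    score = a - p1                               # d/dtheta log pi(a)
    return (rewards[a] - baseline) * score       # REINFORCE gradient samples

g_plain = grad_samples(baseline=0.0)
g_base = grad_samples(baseline=rewards.mean())   # baseline = average reward
print(g_plain.mean(), g_plain.var(), g_base.var())
```

Both estimators are unbiased for the same gradient, but subtracting the baseline shrinks the variance dramatically (here, with symmetric action probabilities, all the way to zero).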
arXiv Detail & Related papers (2020-08-31T17:52:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.