Approximate Message Passing with Spectral Initialization for Generalized
Linear Models
- URL: http://arxiv.org/abs/2010.03460v2
- Date: Wed, 17 Feb 2021 09:43:24 GMT
- Title: Approximate Message Passing with Spectral Initialization for Generalized
Linear Models
- Authors: Marco Mondelli and Ramji Venkataramanan
- Abstract summary: We focus on estimators based on approximate message passing (AMP).
We propose an AMP algorithm initialized with a spectral estimator.
We also provide numerical results that demonstrate the validity of the proposed approach.
- Score: 35.618694363241744
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of estimating a signal from measurements obtained via
a generalized linear model. We focus on estimators based on approximate message
passing (AMP), a family of iterative algorithms with many appealing features:
the performance of AMP in the high-dimensional limit can be succinctly
characterized under suitable model assumptions; AMP can also be tailored to the
empirical distribution of the signal entries, and for a wide class of
estimation problems, AMP is conjectured to be optimal among all polynomial-time
algorithms.
However, a major issue of AMP is that in many models (such as phase
retrieval), it requires an initialization correlated with the ground-truth
signal and independent from the measurement matrix. Assuming that such an
initialization is available is typically not realistic. In this paper, we solve
this problem by proposing an AMP algorithm initialized with a spectral
estimator. With such an initialization, the standard AMP analysis fails since
the spectral estimator depends in a complicated way on the design matrix. Our
main contribution is a rigorous characterization of the performance of AMP with
spectral initialization in the high-dimensional limit. The key technical idea
is to define and analyze a two-phase artificial AMP algorithm that first
produces the spectral estimator, and then closely approximates the iterates of
the true AMP. We also provide numerical results that demonstrate the validity
of the proposed approach.
Related papers
- A non-asymptotic distributional theory of approximate message passing
for sparse and robust regression [20.830017611900832]
This paper develops non-asymptotic distributional characterizations for approximate message passing (AMP).
AMP is a family of iterative algorithms that prove effective as both fast estimators and powerful theoretical machinery.
arXiv Detail & Related papers (2024-01-08T14:34:35Z)
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify, in particular, a statistical-to-computational gap: known algorithms require a signal-to-noise ratio larger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- Approximate message passing from random initialization with applications to $\mathbb{Z}_{2}$ synchronization [25.511475484340156]
This paper is concerned with the problem of reconstructing an unknown rank-one matrix with prior structural information from observations.
We provide the first non-asymptotic characterization of Approximate Message Passing (AMP) in this model.
arXiv Detail & Related papers (2023-02-07T18:58:08Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- A Non-Asymptotic Framework for Approximate Message Passing in Spiked Models [24.786030482013437]
Approximate message passing (AMP) emerges as an effective iterative paradigm for solving high-dimensional statistical problems.
Prior AMP theory fell short of predicting the AMP dynamics when the number of iterations surpasses $o\big(\frac{\log n}{\log\log n}\big)$.
This paper develops a non-asymptotic framework for understanding AMP in spiked matrix estimation.
arXiv Detail & Related papers (2022-08-05T17:59:06Z)
- Estimation in Rotationally Invariant Generalized Linear Models via Approximate Message Passing [21.871513580418604]
We propose a novel family of approximate message passing (AMP) algorithms for signal estimation.
We rigorously characterize their performance in the high-dimensional limit via a state evolution recursion.
arXiv Detail & Related papers (2021-12-08T15:20:04Z)
- Towards Sample-Optimal Compressive Phase Retrieval with Sparse and Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
arXiv Detail & Related papers (2021-06-29T12:49:54Z)
- Instance-Optimal Compressed Sensing via Posterior Sampling [101.43899352984774]
We show, for Gaussian measurements and any prior distribution on the signal, that the posterior sampling estimator achieves near-optimal recovery guarantees.
We implement the posterior sampling estimator for deep generative priors using Langevin dynamics, and empirically find that it produces accurate estimates with more diversity than MAP.
arXiv Detail & Related papers (2021-06-21T22:51:56Z)
- PCA Initialization for Approximate Message Passing in Rotationally Invariant Models [29.039655256171088]
Principal Component Analysis provides a natural estimator, and sharp results on its performance have been obtained in the high-dimensional regime.
Recently, an Approximate Message Passing (AMP) algorithm has been proposed as an alternative estimator with the potential to improve the accuracy of PCA.
In this work, we combine the two methods, initialize AMP with PCA, and propose a rigorous characterization of the performance of this estimator.
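The PCA-plus-AMP recipe can be illustrated with a toy sketch in the $\mathbb{Z}_{2}$-synchronization-style spiked Wigner model: a Rademacher signal observed as $Y = \frac{\lambda}{n} x x^T + W$, initialized with the leading eigenvector of $Y$ and refined by AMP with a tanh denoiser and Onsager correction. The SNR, the fixed denoiser parameter, and the iteration count are illustrative assumptions; the papers above tune the denoiser via state evolution and handle the rotationally invariant (non-Wigner) case, which needs a more involved memory correction.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam = 1000, 2.0                       # dimension and SNR (illustrative choices)

x = rng.choice([-1.0, 1.0], size=n)      # Rademacher (Z2) signal

# Spiked Wigner observation: Y = (lam/n) x x^T + W, W symmetric, W_ij ~ N(0, 1/n)
G = rng.normal(size=(n, n)) / np.sqrt(n)
W = (G + G.T) / np.sqrt(2.0)
Y = (lam / n) * np.outer(x, x) + W

# --- PCA initialization: leading eigenvector of Y, rescaled to the signal norm ---
_, evecs = np.linalg.eigh(Y)             # eigenvalues in ascending order
xt = np.sqrt(n) * evecs[:, -1]

# --- AMP refinement with a tanh denoiser and Onsager correction ---
def f(u):
    return np.tanh(lam * u)              # illustrative denoiser (not SE-tuned)

ft_prev = np.zeros(n)
for _ in range(10):
    ft = f(xt)
    onsager = lam * np.mean(1.0 - ft ** 2)   # (1/n) * sum_i f'(xt_i)
    xt, ft_prev = Y @ ft - onsager * ft_prev, ft

x_hat = np.sign(xt)                      # sign estimate, up to a global flip
overlap = abs(np.mean(x_hat * x))
print(f"overlap after AMP: {overlap:.3f}")
```

Because the PCA initializer depends on $Y$, the iterates are no longer independent of the data in the way the classical AMP proof requires; numerically, though, the refinement visibly improves on the spectral overlap at this SNR.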
arXiv Detail & Related papers (2021-06-04T09:13:51Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.