Approximate message passing from random initialization with applications
to $\mathbb{Z}_{2}$ synchronization
- URL: http://arxiv.org/abs/2302.03682v1
- Date: Tue, 7 Feb 2023 18:58:08 GMT
- Title: Approximate message passing from random initialization with applications
to $\mathbb{Z}_{2}$ synchronization
- Authors: Gen Li, Wei Fan, Yuting Wei
- Abstract summary: This paper is concerned with the problem of reconstructing an unknown rank-one matrix with prior structural information from observations.
We provide the first non-asymptotic characterization of Approximate Message Passing (AMP) in this model.
- Score: 25.511475484340156
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper is concerned with the problem of reconstructing an unknown
rank-one matrix with prior structural information from noisy observations.
While computing the Bayes-optimal estimator seems intractable in general due to
its nonconvex nature, Approximate Message Passing (AMP) emerges as an efficient
first-order method to approximate the Bayes-optimal estimator. However, the
theoretical underpinnings of AMP remain largely unavailable when it starts from
random initialization, a scheme of critical practical utility. Focusing on a
prototypical model called $\mathbb{Z}_{2}$ synchronization, we characterize the
finite-sample dynamics of AMP from random initialization, uncovering its rapid
global convergence. Our theory provides the first non-asymptotic
characterization of AMP in this model without requiring either an informative
initialization (e.g., spectral initialization) or sample splitting.
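To make the iteration concrete, here is a minimal NumPy sketch of Bayes-AMP for $\mathbb{Z}_{2}$ synchronization started from a purely random vector. The scaling $Y = (\lambda/n)\, x x^{\top} + W/\sqrt{n}$, the $\tanh$ denoiser, and the empirical Onsager coefficient follow the standard Bayes-AMP recipe for this model; the parameter values are illustrative, and this is a sketch rather than the paper's exact pseudocode.

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, n_iter = 2000, 1.5, 30    # dimension, SNR (lam > 1 allows recovery), iterations

# Synthetic instance: Y = (lam/n) x x^T + W / sqrt(n), with W drawn from the GOE.
x = rng.choice([-1.0, 1.0], size=n)
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2.0)
Y = (lam / n) * np.outer(x, x) + W / np.sqrt(n)

# AMP from random initialization -- no spectral warm start, no sample splitting.
v = rng.standard_normal(n) / np.sqrt(n)
m_prev = np.zeros(n)
for t in range(n_iter):
    m = np.tanh(lam * v)              # Bayes posterior-mean denoiser for a +/-1 prior
    b = lam * np.mean(1.0 - m ** 2)   # empirical Onsager correction coefficient
    v, m_prev = Y @ m - b * m_prev, m # Onsager-corrected power-type update

overlap = abs(np.dot(np.sign(v), x)) / n   # recovery holds up to a global sign flip
print(f"overlap with ground truth: {overlap:.3f}")
```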
Related papers
- Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP).
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
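As a rough illustration of what "unrolling" AMP means here: each network layer mimics one AMP iteration for compressed sensing, with the denoiser as the trainable component. The sketch below uses a soft threshold with a per-layer learnable parameter as that denoiser; the architecture and training procedure are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def soft_threshold(u, theta):
    """Soft-thresholding denoiser; theta is the learnable per-layer parameter."""
    return np.sign(u) * np.maximum(np.abs(u) - theta, 0.0)

def unrolled_amp(y, A, thetas):
    """Forward pass of an unrolled-AMP network: one 'layer' per AMP iteration.
    thetas holds the per-layer thresholds that training would adjust."""
    m, n = A.shape
    xhat, z = np.zeros(n), y.copy()
    for theta in thetas:
        r = xhat + A.T @ z                             # pseudo-data: signal + ~Gaussian noise
        xhat = soft_threshold(r, theta)                # trainable denoising step
        onsager = (n / m) * np.mean(np.abs(xhat) > 0)  # <eta'> for the soft threshold
        z = y - A @ xhat + onsager * z                 # residual with Onsager memory term
    return xhat
```

Training would tune the thresholds by backpropagating a reconstruction loss through the unrolled layers; the paper's result says that under a product prior these learned denoisers converge to the Bayes-AMP ones.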
arXiv Detail & Related papers (2024-09-19T17:56:16Z) - A non-asymptotic distributional theory of approximate message passing
for sparse and robust regression [20.830017611900832]
This paper develops non-asymptotic distributional characterizations for approximate message passing (AMP).
AMP is a family of iterative algorithms that prove effective as both fast estimators and powerful theoretical machinery.
arXiv Detail & Related papers (2024-01-08T14:34:35Z) - Restoration-Degradation Beyond Linear Diffusions: A Non-Asymptotic
Analysis For DDIM-Type Samplers [90.45898746733397]
We develop a framework for non-asymptotic analysis of deterministic samplers used for diffusion generative modeling.
We show that one step along the probability flow ODE can be expressed as two steps: 1) a restoration step that runs gradient ascent on the conditional log-likelihood at some infinitesimally previous time, and 2) a degradation step that runs the forward process using noise pointing back towards the current iterate.
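Rendered symbolically, and only as a schematic in generic notation (the step size $\eta_{t}$ and the exact form of the forward step are assumptions here, not the paper's statement):
$$\hat{x} = x_{t} + \eta_{t}\, \nabla_{x} \log p_{t^{-}}(x_{t}) \ \ \text{(restoration)}, \qquad x_{t-\Delta} = \mathrm{Fwd}(\hat{x}; \xi_{t}) \ \ \text{(degradation)},$$
where $p_{t^{-}}$ is the law of the forward process at an infinitesimally earlier time and the driving noise $\xi_{t}$ is chosen deterministically to point back towards the current iterate.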
arXiv Detail & Related papers (2023-03-06T18:59:19Z) - Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap where known algorithms require a signal-to-noise ratio bigger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z) - A Non-Asymptotic Framework for Approximate Message Passing in Spiked
Models [24.786030482013437]
Approximate message passing (AMP) emerges as an effective iterative paradigm for solving high-dimensional statistical problems.
Prior AMP theory fell short of predicting the AMP dynamics when the number of iterations surpasses $o\big(\frac{\log n}{\log\log n}\big)$.
This paper develops a non-asymptotic framework for understanding AMP in spiked matrix estimation.
arXiv Detail & Related papers (2022-08-05T17:59:06Z) - Towards Sample-Optimal Compressive Phase Retrieval with Sparse and
Generative Priors [59.33977545294148]
We show that $O(k \log L)$ samples suffice to guarantee that the signal is close to any vector that minimizes an amplitude-based empirical loss function.
We adapt this result to sparse phase retrieval, and show that $O(s \log n)$ samples are sufficient for a similar guarantee when the underlying signal is $s$-sparse and $n$-dimensional.
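For reference, an amplitude-based empirical loss of the kind referred to here typically takes the following standard form (assumed for illustration rather than quoted from the paper):
$$\min_{x \in \mathbb{R}^{n}} \; \frac{1}{m} \sum_{i=1}^{m} \big( |\langle a_{i}, x \rangle| - y_{i} \big)^{2}, \qquad y_{i} = |\langle a_{i}, x^{\star} \rangle|,$$
where the $a_{i}$ are the sensing vectors and $x^{\star}$ is the underlying signal.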
arXiv Detail & Related papers (2021-06-29T12:49:54Z) - PCA Initialization for Approximate Message Passing in Rotationally
Invariant Models [29.039655256171088]
Principal Component Analysis provides a natural estimator, and sharp results on its performance have been obtained in the high-dimensional regime.
Recently, an Approximate Message Passing (AMP) algorithm has been proposed as an alternative estimator with the potential to improve the accuracy of PCA.
In this work, we combine the two methods, initialize AMP with PCA, and propose a rigorous characterization of the performance of this estimator.
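A minimal sketch of the combination, written for the rank-one Wigner model for simplicity (the paper's rotationally invariant setting requires a modified, non-scalar Onsager correction not shown here): initialize AMP at the leading eigenvector instead of a random vector.

```python
import numpy as np

def pca_init_amp(Y, lam, n_iter=20):
    """AMP warm-started at the PCA estimate (leading eigenvector of Y), sketched
    for the rank-one model Y = (lam/n) x x^T + W / sqrt(n) with Wigner noise."""
    n = Y.shape[0]
    _, eigvecs = np.linalg.eigh(Y)
    v = eigvecs[:, -1] * np.sqrt(n)     # spectral initialization, rescaled to O(1) entries
    m_prev = np.zeros(n)
    for _ in range(n_iter):
        m = np.tanh(lam * v)            # posterior-mean denoiser for a +/-1 prior
        b = lam * np.mean(1.0 - m ** 2) # Onsager correction coefficient
        v, m_prev = Y @ m - b * m_prev, m
    return np.sign(v)                   # estimate of x up to a global sign
```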
arXiv Detail & Related papers (2021-06-04T09:13:51Z) - On Stochastic Moving-Average Estimators for Non-Convex Optimization [105.22760323075008]
In this paper, we demonstrate the power of a widely used stochastic estimator based on moving averages (SEMA) for a range of non-convex optimization problems.
Across these problems, the resulting analysis matches or improves on state-of-the-art results.
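The moving-average estimator in question maintains an exponential average of stochastic gradients and uses it as the update direction. A minimal sketch of the generic update (parameter values and the toy objective are illustrative assumptions):

```python
import numpy as np

def sema_sgd(grad_fn, x0, beta=0.9, lr=0.01, n_steps=1000, rng=None):
    """SGD driven by a stochastic moving-average estimator (SEMA) of the gradient:
    d_t = beta * d_{t-1} + (1 - beta) * g_t, with g_t an unbiased stochastic
    gradient. The averaging damps the variance of the update direction."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    d = np.zeros_like(x)
    for _ in range(n_steps):
        g = grad_fn(x, rng)              # unbiased stochastic gradient oracle
        d = beta * d + (1.0 - beta) * g  # moving-average gradient estimate
        x -= lr * d
    return x

# Example: minimize f(x) = ||x||^2 from noisy gradients 2x + noise.
x_hat = sema_sgd(lambda x, rng: 2.0 * x + 0.1 * rng.standard_normal(x.shape), np.ones(5))
```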
arXiv Detail & Related papers (2021-04-30T08:50:24Z) - Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
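The core plug-in component is a Gaussian-mixture posterior-mean denoiser. Below is a sketch of that denoiser for a known mixture; in L-GM-AMP the mixture parameters are learned, so the weights, means, and variances here stand in for learned quantities.

```python
import numpy as np

def gm_denoiser(r, sigma2, weights, means, variances):
    """MMSE denoiser for r = x + N(0, sigma2), where x is drawn from the
    Gaussian mixture sum_k weights[k] * N(means[k], variances[k])."""
    r = np.asarray(r, dtype=float)[:, None]
    w, mu, tau2 = map(np.asarray, (weights, means, variances))
    s2 = tau2 + sigma2                                    # evidence variance per component
    log_resp = np.log(w) - 0.5 * np.log(2 * np.pi * s2) - 0.5 * (r - mu) ** 2 / s2
    resp = np.exp(log_resp - log_resp.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)               # posterior component weights
    cond_mean = (tau2 * r + sigma2 * mu) / s2             # per-component posterior mean
    return (resp * cond_mean).sum(axis=1)
```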
arXiv Detail & Related papers (2020-11-18T16:40:45Z) - Approximate Message Passing with Spectral Initialization for Generalized
Linear Models [35.618694363241744]
We focus on estimators based on approximate message passing (AMP).
We propose an AMP algorithm with a spectral estimator.
We also provide numerical results that demonstrate the validity of the proposed approach.
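For context, a spectral estimator for a generalized linear model with measurements $y_{i} = q(\langle a_{i}, x \rangle)$ typically takes the leading eigenvector of a preprocessed data matrix. A minimal sketch, where the preprocessing function T is a free design choice shown here as a generic placeholder:

```python
import numpy as np

def spectral_estimator(A, y, T=lambda y: y):
    """Spectral initialization for a generalized linear model: return the top
    eigenvector of D = (1/m) * sum_i T(y_i) a_i a_i^T, with a_i the rows of A.
    The preprocessing T(y) shapes the spectrum; identity is a placeholder."""
    m, n = A.shape
    D = (A.T * T(y)) @ A / m        # (1/m) * sum_i T(y_i) a_i a_i^T
    _, eigvecs = np.linalg.eigh(D)
    return eigvecs[:, -1]           # unit-norm direction estimate of x
```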
arXiv Detail & Related papers (2020-10-07T14:52:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.