Memory Approximate Message Passing
- URL: http://arxiv.org/abs/2106.02237v1
- Date: Fri, 4 Jun 2021 03:37:14 GMT
- Title: Memory Approximate Message Passing
- Authors: Lei Liu, Shunqi Huang, Brian M. Kurkoski
- Abstract summary: This paper proposes a memory AMP (MAMP), in which a long-memory matched filter is used for interference suppression.
A state evolution is derived to characterize the performance of MAMP.
For all right-unitarily-invariant matrices, the optimized MAMP converges to OAMP/VAMP, and thus is Bayes-optimal if it has a unique fixed point.
- Score: 9.116196799517262
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Approximate message passing (AMP) is a low-cost iterative
parameter-estimation technique for certain high-dimensional linear systems with
non-Gaussian distributions. However, AMP applies only to independent and
identically distributed (IID) transform matrices and may become unreliable for
other matrix ensembles, especially ill-conditioned ones. To handle this
difficulty, orthogonal/vector AMP (OAMP/VAMP) was proposed for general
right-unitarily-invariant matrices. However, the Bayes-optimal OAMP/VAMP
requires a high-complexity linear minimum mean square error (LMMSE) estimator.
To overcome the drawbacks of both AMP and OAMP/VAMP, this paper proposes a
memory AMP (MAMP), in which a long-memory matched filter is used for
interference suppression. The complexity of MAMP is comparable to that of AMP.
The asymptotic
Gaussianity of estimation errors in MAMP is guaranteed by the orthogonality
principle. A state evolution is derived to asymptotically characterize the
performance of MAMP. Based on the state evolution, the relaxation parameters
and damping vector in MAMP are optimized. For all right-unitarily-invariant
matrices, the optimized MAMP converges to OAMP/VAMP, and thus is Bayes-optimal
if it has a unique fixed point. Finally, simulations are provided to verify the
validity and accuracy of the theoretical results.
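To make the structure concrete, below is a minimal, illustrative Python sketch of a long-memory matched-filter iteration of the kind MAMP builds on: the linear step applies the matched filter A^T to a damped combination of recent residuals, and the nonlinear step is a separable denoiser. This is not the paper's algorithm: the actual MAMP orthogonalizes its local estimators and optimizes the relaxation parameters and damping vector via state evolution, whereas the function names and the values of theta, lam, and memory here are illustrative heuristics.

```python
import numpy as np

def soft_threshold(v, tau):
    """Separable soft-thresholding denoiser for a sparse prior."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def long_memory_mf_sketch(A, y, n_iter=50, theta=0.7, lam=0.05, memory=3):
    """Toy long-memory matched-filter iteration (NOT the paper's optimized
    MAMP). Linear step: matched filter A^T applied to a damped convex
    combination of the last `memory` residuals. Nonlinear step: separable
    denoiser. MAMP instead optimizes its relaxation parameters and damping
    vector via state evolution; theta/lam/memory here are fixed heuristics."""
    mu = 1.0 / np.linalg.norm(A, 2) ** 2          # safe step size for this toy
    x = np.zeros(A.shape[1])
    residuals = []
    for _ in range(n_iter):
        residuals.append(y - A @ x)
        recent = residuals[-memory:][::-1]        # newest residual first
        w = np.array([theta ** k for k in range(len(recent))])
        w /= w.sum()                              # damped convex combination
        r = sum(wk * rk for wk, rk in zip(w, recent))
        x = soft_threshold(x + mu * (A.T @ r), mu * lam)
    return x

# Toy sparse-recovery instance (IID Gaussian matrix, for illustration only).
rng = np.random.default_rng(0)
m, n = 150, 300
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n) * (rng.random(n) < 0.1)
y = A @ x_true + 0.01 * rng.standard_normal(m)
x_hat = long_memory_mf_sketch(A, y)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

The point of the long-memory structure is that the per-iteration cost stays at one matrix-vector product with A and one with A^T, which is what keeps MAMP's complexity comparable to AMP's.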
Related papers
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
In particular, we identify a statistical-to-computational gap, where known algorithms require a signal-to-noise ratio larger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- Optimization of Annealed Importance Sampling Hyperparameters [77.34726150561087]
Annealed Importance Sampling (AIS) is a popular algorithm used to estimate the intractable marginal likelihood of deep generative models.
We present a parametric AIS process with flexible intermediate distributions and optimize the bridging distributions to use fewer sampling steps.
We assess the performance of our optimized AIS for marginal likelihood estimation of deep generative models and compare it to other estimators.
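For context, here is a minimal, generic AIS estimator of log Z with a fixed linear temperature schedule and geometric bridging distributions exp(log_prior + beta*log_lik); the paper's contribution is precisely to parameterize and optimize these bridging distributions, which this sketch does not attempt. The name ais_log_z and all settings are illustrative, and the sketch assumes a standard-normal prior so the initial samples are exact.

```python
import numpy as np

def ais_log_z(log_prior, log_lik, n_steps=100, n_chains=500, step=0.5, seed=0):
    """Plain AIS estimate of log Z for p(x) proportional to
    exp(log_prior(x) + log_lik(x)), using geometric bridges on a fixed
    linear beta schedule and one random-walk Metropolis step per
    temperature. Assumes log_prior is the standard normal log-density
    so the initial samples are exact."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(0.0, 1.0, n_steps + 1)
    x = rng.standard_normal(n_chains)             # exact draws from the prior
    log_w = np.zeros(n_chains)
    for b_prev, b in zip(betas[:-1], betas[1:]):
        log_w += (b - b_prev) * log_lik(x)        # importance-weight increment
        prop = x + step * rng.standard_normal(n_chains)
        log_acc = (log_prior(prop) + b * log_lik(prop)
                   - log_prior(x) - b * log_lik(x))
        x = np.where(np.log(rng.random(n_chains)) < log_acc, prop, x)
    return np.logaddexp.reduce(log_w) - np.log(n_chains)  # log-mean-exp

# Conjugate Gaussian sanity check: prior N(0, 1), likelihood N(y=1 | x, 1),
# so the exact answer is log N(1; 0, 2).
log_prior = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)
log_lik = lambda x: -0.5 * (1.0 - x)**2 - 0.5 * np.log(2 * np.pi)
print(ais_log_z(log_prior, log_lik), "vs exact", -0.5 * np.log(4 * np.pi) - 0.25)
```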
arXiv Detail & Related papers (2022-09-27T07:58:25Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- Sufficient Statistic Memory Approximate Message Passing [5.708490209087275]
A key feature of the AMP-type algorithms is that their dynamics can be correctly described by state evolution.
This paper proposes a memory AMP (MAMP) under a sufficient statistic condition, named sufficient statistic MAMP (SS-MAMP).
arXiv Detail & Related papers (2022-06-23T13:06:00Z)
- Sufficient-Statistic Memory AMP [12.579567275436343]
A key feature of the AMP-type algorithms is that their dynamics can be correctly described by state evolution.
This paper proposes a sufficient-statistic memory AMP (SS-MAMP) algorithm framework.
arXiv Detail & Related papers (2021-12-31T07:25:18Z)
- Estimation in Rotationally Invariant Generalized Linear Models via Approximate Message Passing [21.871513580418604]
We propose a novel family of approximate message passing (AMP) algorithms for signal estimation.
We rigorously characterize their performance in the high-dimensional limit via a state evolution recursion.
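As a concrete example of what a state evolution recursion looks like, here is a Monte Carlo sketch for the classical IID-matrix AMP with a soft-threshold denoiser (not the rotationally invariant recursion of this paper); the prior and the parameters (Bernoulli-Gaussian signal with sparsity eps, sampling ratio delta, threshold theta) are illustrative.

```python
import numpy as np

def state_evolution(sigma2=1e-4, delta=0.5, eps=0.1, theta=1.5,
                    n_iter=15, n_mc=200_000, seed=0):
    """Monte Carlo state evolution for classical IID-matrix AMP with a
    soft-threshold denoiser eta and a Bernoulli-Gaussian signal X:
        tau_{t+1}^2 = sigma^2 + (1/delta) * E[(eta(X + tau_t Z; theta tau_t) - X)^2]
    Tracks the effective noise tau_t^2 and the predicted MSE per iteration."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal(n_mc) * (rng.random(n_mc) < eps)   # sparse signal
    Z = rng.standard_normal(n_mc)                              # effective noise
    tau2 = sigma2 + np.mean(X ** 2) / delta                    # iteration 0
    for t in range(n_iter):
        v = X + np.sqrt(tau2) * Z
        eta = np.sign(v) * np.maximum(np.abs(v) - theta * np.sqrt(tau2), 0.0)
        mse = np.mean((eta - X) ** 2)
        tau2 = sigma2 + mse / delta
        print(f"iter {t:2d}: predicted MSE = {mse:.3e}")

state_evolution()
```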
arXiv Detail & Related papers (2021-12-08T15:20:04Z)
- Fast Batch Nuclear-norm Maximization and Minimization for Robust Domain Adaptation [154.2195491708548]
We study prediction discriminability and diversity by analyzing the structure of the classification output matrix of a randomly selected data batch.
We propose Batch Nuclear-norm Maximization and Minimization, which applies the nuclear norm to the target output matrix to enhance the target prediction ability.
Experiments show that our method could boost the adaptation accuracy and robustness under three typical domain adaptation scenarios.
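The quantity at the center of this method is easy to state: the nuclear norm (sum of singular values) of a batch's prediction matrix, which is large exactly when predictions are both confident and diverse. A small numpy illustration (the function name is ours):

```python
import numpy as np

def batch_nuclear_norm(probs):
    """Nuclear norm (sum of singular values) of a (batch_size x num_classes)
    prediction matrix; maximizing it encourages confident, diverse outputs."""
    return np.linalg.svd(probs, compute_uv=False).sum()

confident_diverse = np.eye(4)                       # 4 samples, 4 distinct classes
collapsed = np.tile([1.0, 0.0, 0.0, 0.0], (4, 1))   # all samples -> class 0
print(batch_nuclear_norm(confident_diverse))        # 4.0 (high: diverse + confident)
print(batch_nuclear_norm(collapsed))                # 2.0 (low: confident but collapsed)
```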
arXiv Detail & Related papers (2021-07-13T15:08:32Z)
- Memory Approximate Message Passing [9.116196799517262]
Approximate message passing (AMP) is a low-cost iterative parameter-estimation technique.
This paper proposes a low-complexity memory AMP (MAMP) for unitarily-invariant matrices.
arXiv Detail & Related papers (2020-12-20T07:42:15Z)
- Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z)
- Approximate Message Passing with Spectral Initialization for Generalized Linear Models [35.618694363241744]
We focus on estimators based on approximate message passing (AMP).
We propose an AMP algorithm with a spectral estimator.
We also provide numerical results that demonstrate the validity of the proposed approach.
arXiv Detail & Related papers (2020-10-07T14:52:35Z)
- Modal Regression based Structured Low-rank Matrix Recovery for Multi-view Learning [70.57193072829288]
Low-rank Multi-view Subspace Learning (LMvSL) has shown great potential in cross-view classification in recent years.
Existing LMvSL-based methods cannot handle view discrepancy and discriminancy simultaneously.
We propose Structured Low-rank Matrix Recovery (SLMR), a unique method of effectively removing view discrepancy and improving discriminancy.
arXiv Detail & Related papers (2020-03-22T03:57:38Z)