Estimation in Rotationally Invariant Generalized Linear Models via
Approximate Message Passing
- URL: http://arxiv.org/abs/2112.04330v1
- Date: Wed, 8 Dec 2021 15:20:04 GMT
- Title: Estimation in Rotationally Invariant Generalized Linear Models via
Approximate Message Passing
- Authors: Ramji Venkataramanan, Kevin Kögler, and Marco Mondelli
- Abstract summary: We propose a novel family of approximate message passing (AMP) algorithms for signal estimation.
We rigorously characterize their performance in the high-dimensional limit via a state evolution recursion.
- Score: 21.871513580418604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of signal estimation in generalized linear models
defined via rotationally invariant design matrices. Since these matrices can
have an arbitrary spectral distribution, this model is well suited to capture
complex correlation structures which often arise in applications. We propose a
novel family of approximate message passing (AMP) algorithms for signal
estimation, and rigorously characterize their performance in the
high-dimensional limit via a state evolution recursion. Assuming knowledge of
the design matrix spectrum, our rotationally invariant AMP has complexity of
the same order as the existing AMP for Gaussian matrices; it also recovers the
existing AMP as a special case. Numerical results showcase a performance close
to Vector AMP (which is conjectured to be Bayes-optimal in some settings), but
obtained with a much lower complexity, as the proposed algorithm does not
require a computationally expensive singular value decomposition.
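The paper's algorithm is not reproduced here, but since it recovers the existing AMP for Gaussian matrices as a special case, a minimal NumPy sketch of that special case may help fix ideas: AMP for the linear model y = A x + w with a soft-thresholding denoiser. All names and parameter choices below are illustrative, not the authors'. The scalar tau tracked inside the loop is the quantity whose behavior the state evolution recursion predicts, via tau_{t+1}^2 = sigma^2 + (1/delta) E[(eta(X + tau_t Z) - X)^2].

    import numpy as np

    def soft_threshold(v, t):
        # Elementwise soft-thresholding denoiser eta(v; t)
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def amp_linear(A, y, theta=0.5, n_iter=20, x_init=None):
        # AMP for y = A x + w with i.i.d. Gaussian design (entries ~ N(0, 1/n))
        n, d = A.shape
        delta = n / d                                 # sampling ratio
        x = np.zeros(d) if x_init is None else x_init.copy()
        z = y - A @ x                                 # corrected residual
        for _ in range(n_iter):
            tau = np.linalg.norm(z) / np.sqrt(n)      # effective noise level
            r = x + A.T @ z                           # pseudo-data ~ x + tau * N(0, I)
            x = soft_threshold(r, theta * tau)
            # Onsager correction: (1/delta) * empirical divergence of the denoiser
            b = np.mean(np.abs(r) > theta * tau) / delta
            z = y - A @ x + b * z
        return x

    rng = np.random.default_rng(0)
    n, d = 500, 1000
    x0 = rng.binomial(1, 0.1, size=d) * rng.normal(size=d)   # sparse signal
    A = rng.normal(size=(n, d)) / np.sqrt(n)                 # i.i.d. Gaussian design
    y = A @ x0 + 0.01 * rng.normal(size=n)
    print("MSE:", np.mean((amp_linear(A, y) - x0) ** 2))

The Onsager term b * z is what distinguishes AMP from plain iterative thresholding: it keeps the pseudo-data r approximately Gaussian around the signal, which is the property the state evolution analysis formalizes.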
Related papers
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model for solving multiple compressive sensing tasks jointly.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the explicit computation of the large covariance matrices that dominate its cost.
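As a hedged illustration of the covariance-free idea (the general strategy, not this paper's exact algorithm), the posterior mean of a Bayesian linear regression can be computed without ever forming the d x d covariance, touching the model only through matrix-vector products; the function and argument names below are hypothetical.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def posterior_mean(Phi, y, alpha, sigma2):
        # Posterior mean m = C (Phi^T y / sigma2), where
        # C^{-1} = Phi^T Phi / sigma2 + diag(alpha). The covariance C is
        # never formed; CG only needs matvecs with C^{-1}.
        n, d = Phi.shape
        matvec = lambda v: Phi.T @ (Phi @ v) / sigma2 + alpha * v
        A = LinearOperator((d, d), matvec=matvec, dtype=float)
        m, info = cg(A, Phi.T @ y / sigma2)
        assert info == 0, "CG did not converge"
        return m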
arXiv Detail & Related papers (2023-09-30T15:57:14Z)
- Spectral Estimators for Structured Generalized Linear Models via Approximate Message Passing [28.91482208876914]
We consider the problem of parameter estimation in a high-dimensional generalized linear model.
Despite the wide use of spectral estimators, a rigorous performance characterization, as well as a principled way to preprocess the data, is available only for unstructured designs.
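For context, a minimal sketch of a spectral estimator in the unstructured setting that this paper generalizes: the estimate is the top eigenvector of a preprocessed sample covariance. The preprocessing T below is an arbitrary illustrative choice; its principled design is precisely what such papers study.

    import numpy as np

    def spectral_estimate(A, y, T=lambda y: np.minimum(y, 5.0)):
        # Top eigenvector of D = (1/n) sum_i T(y_i) a_i a_i^T estimates
        # the signal direction (here, e.g., phase retrieval y_i = (a_i^T x)^2).
        n, d = A.shape
        D = (A * T(y)[:, None]).T @ A / n
        w, V = np.linalg.eigh(D)
        return V[:, -1]            # unit-norm estimate, up to sign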
arXiv Detail & Related papers (2023-08-28T11:49:23Z)
- Approximate Message Passing for the Matrix Tensor Product Model [8.206394018475708]
We propose and analyze an approximate message passing (AMP) algorithm for the matrix tensor product model.
Building upon a convergence theorem for non-separable functions, we prove a state evolution characterization for this algorithm.
We leverage this state evolution result to provide necessary and sufficient conditions for recovery of the signal of interest.
arXiv Detail & Related papers (2023-06-27T16:03:56Z)
- Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message passing (AMP) algorithm for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap, where known algorithms require a signal-to-noise ratio larger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z)
- Approximate Message Passing for Multi-Layer Estimation in Rotationally Invariant Models [15.605031496980775]
We present a new class of approximate message passing (AMP) algorithms and give a state evolution recursion that characterizes their performance.
Our results show that the reduced complexity of the proposed algorithms comes at little to no cost in performance.
arXiv Detail & Related papers (2022-12-03T08:10:35Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
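As a much-simplified sketch of Riemannian optimization over positive definite matrices (plain Gaussian likelihood, no penalty or low-rank structure, so not the paper's actual objective), gradient descent under the affine-invariant metric keeps the iterate positive definite by construction:

    import numpy as np
    from scipy.linalg import expm, sqrtm

    def gaussian_precision_riemannian(S, eta=0.2, n_iter=100):
        # Minimize f(Theta) = tr(S Theta) - log det Theta over SPD matrices.
        # Geodesic step under the affine-invariant metric:
        #   Theta <- R expm(-eta (R S R - I)) R,  with R = Theta^{1/2}.
        # The fixed point R S R = I is the MLE Theta = S^{-1}.
        Theta = np.eye(S.shape[0])
        for _ in range(n_iter):
            R = np.real(sqrtm(Theta))
            Theta = R @ expm(-eta * (R @ S @ R - np.eye(S.shape[0]))) @ R
        return Theta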
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- High-Dimensional Sparse Bayesian Learning without Covariance Matrices [66.60078365202867]
We introduce a new inference scheme that avoids explicit construction of the covariance matrix.
Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm.
In several simulations, our method scales better than existing approaches in both computation time and memory.
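A minimal sketch of this kind of covariance-free computation, assuming the diagonal estimation result in question is a Hutchinson-style probe estimator (a guess consistent with the summary, not a quote of the paper): the diagonal of a matrix inverse is estimated from Rademacher probes, with each linear solve done by conjugate gradients, so the matrix is only touched through matrix-vector products.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, cg

    def diag_of_inverse(matvec, d, n_probes=64, rng=None):
        # For Rademacher probes v, E[v * (A^{-1} v)] = diag(A^{-1}).
        rng = rng or np.random.default_rng(0)
        A = LinearOperator((d, d), matvec=matvec, dtype=float)
        acc = np.zeros(d)
        for _ in range(n_probes):
            v = rng.choice([-1.0, 1.0], size=d)
            x, info = cg(A, v)     # x = A^{-1} v, matrix-free solve
            acc += v * x
        return acc / n_probes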
arXiv Detail & Related papers (2022-02-25T16:35:26Z)
- Bilinear Classes: A Structural Framework for Provable Generalization in RL [119.42509700822484]
Bilinear Classes is a new structural framework which permits generalization in reinforcement learning.
The framework incorporates nearly all existing models in which a polynomial sample complexity is achievable.
Our main result provides an RL algorithm with polynomial sample complexity for Bilinear Classes.
arXiv Detail & Related papers (2021-03-19T16:34:20Z)
- Memory Approximate Message Passing [9.116196799517262]
Approximate message passing (AMP) is a low-cost iterative parameter-estimation technique.
This paper proposes a low-complexity memory AMP (MAMP) for unitarily-invariant matrices.
arXiv Detail & Related papers (2020-12-20T07:42:15Z)
- Approximate Message Passing with Spectral Initialization for Generalized Linear Models [35.618694363241744]
We focus on estimators based on approximate message passing (AMP).
We propose an AMP algorithm initialized with a spectral estimator.
We also provide numerical results that demonstrate the validity of the proposed approach.
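Schematically, and reusing the hypothetical amp_linear and spectral_estimate helpers sketched earlier in this list, the initialization pattern is just a warm start (for a nonlinear GLM the iteration would be a generalized AMP variant rather than the linear-model one):

    x_spec = spectral_estimate(A, y)          # spectral direction, up to sign/scale
    x_hat = amp_linear(A, y, x_init=x_spec)   # AMP refined from the warm start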
arXiv Detail & Related papers (2020-10-07T14:52:35Z)
- Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon those of random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction.
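A minimal NumPy sketch of iterative Hessian sketching with an SRHT, under the assumption that the row count is a power of two (the dense hadamard call is for clarity; a fast Walsh-Hadamard transform would replace it at scale). The update x <- x + (A^T S^T S A)^{-1} A^T (b - A x), with a fresh sketch S per iteration, is the standard iterative-sketching scheme; function names are illustrative.

    import numpy as np
    from scipy.linalg import hadamard

    def srht(A, m, rng):
        # Subsampled randomized Hadamard transform of the rows of A.
        n = A.shape[0]                       # assumed to be a power of two
        D = rng.choice([-1.0, 1.0], size=n)
        HDA = hadamard(n).astype(float) @ (D[:, None] * A) / np.sqrt(n)
        rows = rng.choice(n, size=m, replace=False)
        return np.sqrt(n / m) * HDA[rows]    # scaled so E[S^T S] = I

    def iterative_sketch_lstsq(A, b, m, n_iter=10, rng=None):
        # Iterative Hessian sketch for min_x ||A x - b||^2.
        rng = rng or np.random.default_rng(0)
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            SA = srht(A, m, rng)             # fresh sketch each iteration
            x = x + np.linalg.solve(SA.T @ SA, A.T @ (b - A @ x))
        return x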
arXiv Detail & Related papers (2020-02-03T16:17:50Z)