Approximate Message Passing for Multi-Layer Estimation in Rotationally
Invariant Models
- URL: http://arxiv.org/abs/2212.01572v1
- Date: Sat, 3 Dec 2022 08:10:35 GMT
- Title: Approximate Message Passing for Multi-Layer Estimation in Rotationally
Invariant Models
- Authors: Yizhou Xu, TianQi Hou, ShanSuo Liang and Marco Mondelli
- Abstract summary: We present a new class of approximate message passing (AMP) algorithms and give a state evolution recursion that precisely characterizes their performance.
Our results show that their complexity gain over multi-layer VAMP comes at little to no cost in the performance of the algorithm.
- Score: 15.605031496980775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of reconstructing the signal and the hidden variables
from observations coming from a multi-layer network with rotationally invariant
weight matrices. The multi-layer structure models inference from deep
generative priors, and the rotational invariance imposed on the weights
generalizes the i.i.d. Gaussian assumption by allowing for a complex
correlation structure, which is typical in applications. In this work, we
present a new class of approximate message passing (AMP) algorithms and give a
state evolution recursion which precisely characterizes their performance in
the large system limit. In contrast with the existing multi-layer VAMP
(ML-VAMP) approach, our proposed AMP -- dubbed multi-layer rotationally
invariant generalized AMP (ML-RI-GAMP) -- provides a natural generalization
beyond Gaussian designs, in the sense that it recovers the existing Gaussian
AMP as a special case. Furthermore, ML-RI-GAMP exhibits a significantly lower
complexity than ML-VAMP, as the computationally intensive singular value
decomposition is replaced by an estimation of the moments of the design
matrices. Finally, our numerical results show that this complexity gain comes
at little to no cost in the performance of the algorithm.
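The stated complexity gain comes from replacing the singular value decomposition of each weight matrix with estimates of the moments of its spectrum. As a minimal sketch of one way such moments can be obtained cheaply, the snippet below uses a Hutchinson-style stochastic trace estimator; the function name, the number of probes, and the normalization are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spectral_moments(A, num_moments, num_probes=10, rng=None):
    """Estimate tr((A A^T)^k) / n for k = 1..num_moments with a
    Hutchinson-style stochastic trace estimator.

    This avoids the O(n^3) SVD of A: each moment needs only
    matrix-vector products with A and A.T.
    """
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    moments = np.zeros(num_moments)
    for _ in range(num_probes):
        z = rng.standard_normal(n)     # random probe vector
        v = z
        for k in range(num_moments):
            v = A @ (A.T @ v)          # v <- (A A^T)^{k+1} z
            moments[k] += z @ v        # accumulates z^T (A A^T)^{k+1} z
    return moments / (num_probes * n)  # average over probes, normalize by n

# Sanity check against the exact moments computed from the singular values.
rng = np.random.default_rng(0)
A = rng.standard_normal((500, 400)) / np.sqrt(400)
est = spectral_moments(A, num_moments=4, rng=1)
s = np.linalg.svd(A, compute_uv=False)
exact = [(s ** (2 * k)).sum() / 500 for k in range(1, 5)]
print(np.round(est, 3), np.round(exact, 3))
```

In ML-RI-GAMP such moments enter the coefficients of the correction terms of the iteration; the point of the sketch is only that they are far cheaper to obtain than a full SVD.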
Related papers
- An Efficient Algorithm for Clustered Multi-Task Compressive Sensing [60.70532293880842]
Clustered multi-task compressive sensing is a hierarchical model that solves multiple compressive sensing tasks.
The existing inference algorithm for this model is computationally expensive and does not scale well in high dimensions.
We propose a new algorithm that substantially accelerates model inference by avoiding the need to explicitly compute the covariance matrices that dominate its cost.
arXiv Detail & Related papers (2023-09-30T15:57:14Z) - Approximate Message Passing for the Matrix Tensor Product Model [8.206394018475708]
- Approximate Message Passing for the Matrix Tensor Product Model [8.206394018475708]
We propose and analyze an approximate message passing (AMP) algorithm for the matrix tensor product model.
Building on a convergence theorem for non-separable functions, we prove a state evolution result for this class of algorithms.
We leverage this state evolution result to provide necessary and sufficient conditions for recovery of the signal of interest.
arXiv Detail & Related papers (2023-06-27T16:03:56Z) - Variational Laplace Autoencoders [53.08170674326728]
- Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - Generalized Fast Multichannel Nonnegative Matrix Factorization Based on
Gaussian Scale Mixtures for Blind Source Separation [3.141085922386211]
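The Laplace idea behind VLAEs is to replace a fully-factorized Gaussian posterior with a Gaussian centered at the posterior mode, with covariance given by the inverse Hessian of the negative log-posterior at that mode. Below is a one-dimensional toy version of that construction; it is textbook material with hypothetical names, not the VLAE training procedure.

```python
import numpy as np

def laplace_approximation(neg_log_post, z0, n_steps=200, lr=0.1, h=1e-4):
    """Fit a Gaussian N(mode, 1/hessian) to exp(-neg_log_post(z)) in 1-D.

    1. Find the mode by gradient descent (gradient via central differences).
    2. Set the variance to the inverse second derivative at the mode.
    """
    z = z0
    for _ in range(n_steps):
        grad = (neg_log_post(z + h) - neg_log_post(z - h)) / (2 * h)
        z -= lr * grad
    hess = (neg_log_post(z + h) - 2 * neg_log_post(z) + neg_log_post(z - h)) / h ** 2
    return z, 1.0 / hess  # mode and variance of the Gaussian approximation

# Toy posterior: prior N(0,1) times a Gaussian likelihood for one observation.
x_obs = 2.0
f = lambda z: 0.5 * z ** 2 + 0.5 * (x_obs - 0.8 * z) ** 2  # negative log-posterior
mode, var = laplace_approximation(f, z0=0.0)
print(round(mode, 3), round(var, 3))  # analytic answer: mode ~ 0.976, var ~ 0.610
```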
- Generalized Fast Multichannel Nonnegative Matrix Factorization Based on Gaussian Scale Mixtures for Blind Source Separation [3.141085922386211]
This paper describes heavy-tailed extensions of a versatile blind source separation method called FastMNMF.
We develop an expectation-maximization algorithm that works even when the probability density function of the impulse variables has no analytical expression.
arXiv Detail & Related papers (2022-05-11T08:09:39Z) - Sampling Approximately Low-Rank Ising Models: MCMC meets Variational
Methods [35.24886589614034]
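A Gaussian scale mixture expresses a heavy-tailed variable as a Gaussian whose variance is itself random, and EM then needs only posterior moments of the latent scale rather than a closed-form density. The sketch below runs the classic EM update for a Student-t scale parameter as the simplest instance of this idea; it illustrates the mechanism only and is not the multichannel FastMNMF algorithm.

```python
import numpy as np

def student_t_em(x, nu, n_iter=50):
    """EM for the scale sigma^2 of a zero-mean Student-t, viewed as a
    Gaussian scale mixture: x | lam ~ N(0, sigma^2 / lam), lam ~ Gamma(nu/2, nu/2).

    E-step: w_i = E[lam_i | x_i] = (nu + 1) / (nu + x_i^2 / sigma^2)
    M-step: sigma^2 = mean(w_i * x_i^2)
    Samples in the tails get small weights w_i, so outliers are downweighted.
    """
    sigma2 = np.var(x)
    for _ in range(n_iter):
        w = (nu + 1.0) / (nu + x ** 2 / sigma2)  # E-step: posterior mean of lam
        sigma2 = np.mean(w * x ** 2)             # M-step
    return sigma2

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=10_000) * 2.0      # true scale sigma = 2
print(round(np.sqrt(student_t_em(x, nu=3)), 3))  # close to 2, unlike np.std(x)
```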
- Sampling Approximately Low-Rank Ising Models: MCMC meets Variational Methods [35.24886589614034]
We consider Ising models on the hypercube with a general interaction matrix $J$.
Our general result implies the first polynomial-time sampling algorithms for low-rank Ising models.
arXiv Detail & Related papers (2022-02-17T21:43:50Z) - Improving the Sample-Complexity of Deep Classification Networks with
Invariant Integration [77.99182201815763]
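The MCMC half of the title refers to samplers such as Glauber dynamics, which resample one spin at a time from its exact conditional distribution. A minimal sketch on a rank-1 (Curie-Weiss-like) coupling is given below; the paper's contribution lies in combining such chains with variational methods for low-rank $J$, which this toy does not attempt.

```python
import numpy as np

def glauber_dynamics(J, h, n_sweeps=100, rng=None):
    """Sample from the Ising model p(s) proportional to exp(s^T J s / 2 + h^T s),
    s in {-1, +1}^n, by single-site Glauber updates (J assumed symmetric).

    Each update resamples spin i from its exact conditional given the rest:
    P(s_i = +1 | s_{-i}) = sigmoid(2 * (sum_{j != i} J_ij s_j + h_i)).
    """
    rng = np.random.default_rng(rng)
    n = len(h)
    s = rng.choice([-1.0, 1.0], size=n)
    for _ in range(n_sweeps):
        for i in rng.permutation(n):
            field = J[i] @ s - J[i, i] * s[i] + h[i]  # local field, excluding self-coupling
            p_plus = 1.0 / (1.0 + np.exp(-2.0 * field))
            s[i] = 1.0 if rng.random() < p_plus else -1.0
    return s

# A small rank-1 example: at this coupling strength, spins tend to align.
n = 50
J = np.full((n, n), 1.5 / n)
s = glauber_dynamics(J, h=np.zeros(n), rng=0)
print(int(np.sum(s)))  # magnetization; typically far from 0 here
```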
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods that allows application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z) - Estimation in Rotationally Invariant Generalized Linear Models via
Approximate Message Passing [21.871513580418604]
We propose a novel family of approximate message passing (AMP) algorithms for signal estimation.
We rigorously characterize their performance in the high-dimensional limit via a state evolution recursion.
arXiv Detail & Related papers (2021-12-08T15:20:04Z) - Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantees.
arXiv Detail & Related papers (2021-03-05T04:42:32Z) - Reinforcement Learning for Adaptive Mesh Refinement [63.7867809197671]
We propose a novel formulation of adaptive mesh refinement (AMR) as a Markov decision process and apply deep reinforcement learning to train refinement policies directly from simulation.
The model sizes of these policy architectures are independent of the mesh size and hence scale to arbitrarily large and complex simulations.
arXiv Detail & Related papers (2021-03-01T22:55:48Z) - Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding
Design for Multiuser MIMO Systems [59.804810122136345]
We propose a deep-unfolding framework in which a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN is then built on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.