Approximate Message Passing for the Matrix Tensor Product Model
- URL: http://arxiv.org/abs/2306.15580v1
- Date: Tue, 27 Jun 2023 16:03:56 GMT
- Title: Approximate Message Passing for the Matrix Tensor Product Model
- Authors: Riccardo Rossetti, Galen Reeves
- Abstract summary: We propose and analyze an approximate message passing (AMP) algorithm for the matrix tensor product model.
Building upon an AMP convergence theorem for non-separable functions, we prove a state evolution for this algorithm.
We leverage this state evolution result to provide necessary and sufficient conditions for recovery of the signal of interest.
- Score: 8.206394018475708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose and analyze an approximate message passing (AMP) algorithm for the
matrix tensor product model, which is a generalization of the standard spiked
matrix models that allows for multiple types of pairwise observations over a
collection of latent variables. A key innovation for this algorithm is a method
for optimally weighting and combining multiple estimates in each iteration.
Building upon an AMP convergence theorem for non-separable functions, we prove
a state evolution for non-separable functions that provides an asymptotically
exact description of its performance in the high-dimensional limit. We leverage
this state evolution result to provide necessary and sufficient conditions for
recovery of the signal of interest. Such conditions depend on the singular
values of a linear operator derived from an appropriate generalization of a
signal-to-noise ratio for our model. Our results recover as special cases a
number of recently proposed methods for contextual models (e.g., covariate
assisted clustering) as well as inhomogeneous noise models.
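In the simplest special case (a rank-one spiked Wigner matrix with a Rademacher prior), an AMP iteration of the kind the abstract describes reduces to a matrix-vector recursion with an Onsager correction. The sketch below is illustrative only; it is not the paper's matrix tensor product algorithm, and the `tanh` denoiser is a simple stand-in for the Bayes-optimal choice, with the SNR `lam` and the warm start being assumptions made for the toy example.

```python
import numpy as np

def amp_spiked_wigner(A, lam, x0, iters=25):
    """Toy AMP for the rank-one spiked Wigner model
    A = (lam/n) * x x^T + W / sqrt(n), with a Rademacher prior on x.
    The denoiser f(s) = tanh(lam * s) stands in for the Bayes-optimal
    choice; b is the Onsager correction, the empirical mean of f'."""
    n = A.shape[0]
    s = x0.astype(float).copy()
    f_prev = np.zeros(n)
    for _ in range(iters):
        f = np.tanh(lam * s)
        b = lam * np.mean(1.0 - f ** 2)   # (1/n) * sum_i f'(s_i)
        s, f_prev = A @ f - b * f_prev, f
    return np.tanh(lam * s)

# Synthetic instance: planted Rademacher spike at SNR lam = 3.
rng = np.random.default_rng(0)
n, lam = 400, 3.0
x = rng.choice([-1.0, 1.0], size=n)
W = rng.normal(size=(n, n))
W = (W + W.T) / np.sqrt(2.0)             # symmetric Gaussian noise
A = (lam / n) * np.outer(x, x) + W / np.sqrt(n)
x0 = x + 2.0 * rng.normal(size=n)        # warm start (mimics a spectral init)
xhat = amp_spiked_wigner(A, lam, x0)
overlap = abs(xhat @ x) / (np.linalg.norm(xhat) * np.sqrt(n))
```

At this SNR the iterate aligns strongly with the planted spike; the state evolution mentioned in the abstract predicts exactly this overlap in the large-$n$ limit.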
Related papers
- Optimal thresholds and algorithms for a model of multi-modal learning in high dimensions [15.000720880773548]
The paper derives the approximate message passing (AMP) algorithm for this model and characterizes its performance in the high-dimensional limit.
The linearization of AMP is compared numerically to the widely used partial least squares (PLS) and canonical correlation analysis (CCA) methods.
arXiv Detail & Related papers (2024-07-03T21:48:23Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - A Random Matrix Approach to Low-Multilinear-Rank Tensor Approximation [24.558241146742205]
We characterize the large-dimensional spectral behavior of the unfoldings of the data tensor and exhibit relevant signal-to-noise ratios governing the detectability of the principal directions of the signal.
These results accurately predict the reconstruction performance of the truncated multilinear SVD (MLSVD) in the non-trivial regime.
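The truncated MLSVD referenced in this entry can be sketched in plain NumPy: each factor matrix collects the top left singular vectors of one unfolding of the data tensor, and the core is the tensor projected onto those subspaces. This is a minimal illustrative implementation (the dimensions, ranks, and noise level are assumptions for the example), not the analysis from the paper.

```python
import numpy as np

def unfold(T, mode):
    # Mode-k unfolding: bring axis `mode` to the front, flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_multiply(T, M, mode):
    # Multiply tensor T by matrix M along axis `mode`.
    Tm = np.tensordot(M, np.moveaxis(T, mode, 0), axes=1)
    return np.moveaxis(Tm, 0, mode)

def truncated_mlsvd(T, ranks):
    """Truncated multilinear SVD: the top-r_k left singular vectors of
    each unfolding give the factors; the core is T projected onto them."""
    factors = []
    for k, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, k), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for k, U in enumerate(factors):
        core = mode_multiply(core, U.T, k)
    return core, factors

def reconstruct(core, factors):
    T = core
    for k, U in enumerate(factors):
        T = mode_multiply(T, U, k)
    return T

# Noisy tensor with multilinear rank (2, 2, 2) signal plus small noise.
rng = np.random.default_rng(1)
dims, ranks = (10, 12, 14), (2, 2, 2)
Qs = [np.linalg.qr(rng.normal(size=(d, r)))[0] for d, r in zip(dims, ranks)]
G = rng.normal(size=ranks)
signal = reconstruct(G, Qs)
X = signal + 1e-3 * rng.normal(size=dims)
core, factors = truncated_mlsvd(X, ranks)
rel_err = (np.linalg.norm(reconstruct(core, factors) - signal)
           / np.linalg.norm(signal))
```

At this (easy) noise level the relative reconstruction error is small; the paper's interest is precisely the harder regime where the signal-to-noise ratios of the unfoldings approach the detectability threshold.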
arXiv Detail & Related papers (2024-02-05T16:38:30Z) - Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC is capable of performing efficiently, entirely on-the-fly, both parameter estimation and particle proposal adaptation.
arXiv Detail & Related papers (2023-12-19T21:45:38Z) - Variational Laplace Autoencoders [53.08170674326728]
Variational autoencoders employ an amortized inference model to approximate the posterior of latent variables.
We present a novel approach that addresses the limited posterior expressiveness of the fully-factorized Gaussian assumption.
We also present a general framework named Variational Laplace Autoencoders (VLAEs) for training deep generative models.
arXiv Detail & Related papers (2022-11-30T18:59:27Z) - Generative Principal Component Analysis [47.03792476688768]
We study the problem of principal component analysis with generative modeling assumptions.
The key assumption is that the underlying signal lies near the range of an $L$-Lipschitz continuous generative model with bounded $k$-dimensional inputs.
We propose a quadratic estimator, and show that it enjoys a statistical rate of order $\sqrt{\frac{k \log L}{m}}$, where $m$ is the number of samples.
arXiv Detail & Related papers (2022-03-18T01:48:16Z) - Estimation in Rotationally Invariant Generalized Linear Models via
Approximate Message Passing [21.871513580418604]
We propose a novel family of approximate message passing (AMP) algorithms for signal estimation.
We rigorously characterize their performance in the high-dimensional limit via a state evolution recursion.
arXiv Detail & Related papers (2021-12-08T15:20:04Z) - Goal-directed Generation of Discrete Structures with Conditional
Generative Models [85.51463588099556]
We introduce a novel approach to directly optimize a reinforcement learning objective, maximizing an expected reward.
We test our methodology on two tasks: generating molecules with user-defined properties and identifying short Python expressions which evaluate to a given target value.
arXiv Detail & Related papers (2020-10-05T20:03:13Z) - Slice Sampling for General Completely Random Measures [74.24975039689893]
We present a novel Markov chain Monte Carlo algorithm for posterior inference that adaptively sets the truncation level using auxiliary slice variables.
The efficacy of the proposed algorithm is evaluated on several popular nonparametric models.
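Setting aside the completely-random-measure machinery, the univariate slice sampling primitive underlying such samplers follows Neal's stepping-out and shrinkage scheme. A hedged one-dimensional sketch (not the paper's adaptive-truncation algorithm; the standard normal target and width `w` are assumptions for the demo):

```python
import numpy as np

def slice_sample(logpdf, x0, n_samples, w=1.0, rng=None):
    """1-D slice sampler (Neal, 2003) with stepping-out and shrinkage.
    `logpdf` is an unnormalized log density; `w` is the initial width."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0
    out = np.empty(n_samples)
    for i in range(n_samples):
        # Auxiliary slice variable: uniform height under the density at x.
        log_y = logpdf(x) - rng.standard_exponential()
        # Step out until [L, R] brackets the slice {t : logpdf(t) > log_y}.
        L = x - w * rng.uniform()
        R = L + w
        while logpdf(L) > log_y:
            L -= w
        while logpdf(R) > log_y:
            R += w
        # Shrink the interval until a point inside the slice is drawn.
        while True:
            x_new = rng.uniform(L, R)
            if logpdf(x_new) > log_y:
                x = x_new
                break
            if x_new < x:
                L = x_new
            else:
                R = x_new
        out[i] = x
    return out

# Sanity check on a standard normal target.
samples = slice_sample(lambda t: -0.5 * t * t, 0.0, 5000,
                       rng=np.random.default_rng(2))
```

The auxiliary height plays the same role as the slice variables in the paper: conditioning on it turns an intractable draw into a uniform draw over a set that the algorithm brackets adaptively.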
arXiv Detail & Related papers (2020-06-24T17:53:53Z) - A Support Detection and Root Finding Approach for Learning
High-dimensional Generalized Linear Models [10.103666349083165]
We develop a support detection and root finding procedure to learn high-dimensional sparse generalized linear models.
We conduct simulations and real data analysis to illustrate the advantages of our proposed method over several existing methods.
arXiv Detail & Related papers (2020-01-16T14:35:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the generated list (including all information) and is not responsible for any consequences of its use.