Nonlinear Random Matrices and Applications to the Sum of Squares
Hierarchy
- URL: http://arxiv.org/abs/2302.04462v1
- Date: Thu, 9 Feb 2023 06:52:03 GMT
- Title: Nonlinear Random Matrices and Applications to the Sum of Squares
Hierarchy
- Authors: Goutham Rajendran
- Abstract summary: We develop new tools in the theory of nonlinear random matrices.
We apply them to study the performance of the Sum of Squares hierarchy on average-case problems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop new tools in the theory of nonlinear random matrices and apply
them to study the performance of the Sum of Squares (SoS) hierarchy on
average-case problems.
The SoS hierarchy is a powerful optimization technique that has achieved
tremendous success for various problems in combinatorial optimization, robust
statistics and machine learning. It's a family of convex relaxations that lets
us smoothly trade off running time for approximation guarantees. In recent
works, it's been shown to be extremely useful for recovering structure in high
dimensional noisy data. It also remains our best approach towards refuting the
notorious Unique Games Conjecture.
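As a concrete illustration of the lowest level of this family, the following minimal sketch sets up the degree-2 SoS relaxation (a semidefinite program) of maximizing $x^\top W x$ over the hypercube, the optimization version of the Sherrington-Kirkpatrick Hamiltonian studied below. The cvxpy usage, dimensions, and solver defaults are illustrative assumptions, not the paper's code.

```python
# Minimal sketch (assumed setup): degree-2 SoS relaxation of
# max_{x in {-1,+1}^n} x^T W x for a GOE coupling matrix W.
# At degree 2 the pseudo-moment matrix X stands in for E~[x x^T].
import numpy as np
import cvxpy as cp

n = 30
rng = np.random.default_rng(0)
G = rng.standard_normal((n, n))
W = (G + G.T) / np.sqrt(2 * n)           # GOE normalization: lambda_max ~ 2

X = cp.Variable((n, n), symmetric=True)  # degree-2 pseudo-moments
constraints = [X >> 0,                   # positivity of the pseudo-expectation
               cp.diag(X) == 1]          # x_i^2 = 1 on the hypercube
prob = cp.Problem(cp.Maximize(cp.trace(W @ X)), constraints)
prob.solve()

print("degree-2 SoS value per spin:", prob.value / n)
# For GOE couplings this relaxation certifies roughly 2n, while the true
# optimum is roughly 1.526n (twice the Parisi constant); the lower bounds
# in this paper show that even subexponential-degree SoS cannot close
# this gap.
```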
In this work, we analyze the performance of the SoS hierarchy on fundamental
problems stemming from statistics, theoretical computer science and statistical
physics. In particular, we show subexponential-time SoS lower bounds for the
problems of the Sherrington-Kirkpatrick Hamiltonian, Planted Slightly Denser
Subgraph, Tensor Principal Components Analysis and Sparse Principal Components
Analysis. These SoS lower bounds involve analyzing large random matrices,
wherein lie our main contributions. These results offer strong evidence for the
truth of and insight into the low-degree likelihood ratio hypothesis, an
important conjecture that predicts the power of bounded-time algorithms for
hypothesis testing.
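For reference, the low-degree likelihood ratio hypothesis is commonly stated as follows; this is the standard formulation from the low-degree literature, sketched in the usual notation rather than quoted from the paper.

```latex
% Standard formulation of the low-degree likelihood ratio hypothesis
% (usual notation from the low-degree literature, not this paper's text).
% Q_n is the null distribution, P_n the planted one.
\[
  L_n = \frac{d\mathbb{P}_n}{d\mathbb{Q}_n}, \qquad
  L_n^{\le D} = \text{projection of } L_n \text{ onto degree-}{\le}D
  \text{ polynomials in } L^2(\mathbb{Q}_n).
\]
\[
  \text{If } \big\|L_n^{\le D}\big\|_{L^2(\mathbb{Q}_n)} = O(1)
  \text{ as } n \to \infty,
  \text{ the hypothesis predicts that distinguishing } \mathbb{P}_n
  \text{ from } \mathbb{Q}_n \text{ requires time } e^{\tilde{\Omega}(D)}.
\]
```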
We also develop general-purpose tools for analyzing the behavior of random
matrices which are functions of independent random variables. Towards this, we
build on and generalize the matrix variant of the Efron-Stein inequalities. In
particular, our general theorem on matrix concentration recovers various
results that have appeared in the literature. We expect these random matrix
theory ideas to have other significant applications.
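To convey the flavor of Efron-Stein-style concentration, here is a toy Monte-Carlo check of the scalar Efron-Stein inequality applied to the largest eigenvalue of a nonlinear random matrix. The model and parameters are assumptions; the matrix-valued inequalities generalized in the paper are strictly stronger than this scalar special case.

```python
# Monte-Carlo sanity check of the scalar Efron-Stein inequality
#   Var f(X) <= (1/2) * sum_i E[(f(X) - f(X^(i)))^2],
# where X^(i) resamples the i-th independent coordinate. Here f is the
# largest eigenvalue of a nonlinear random matrix (Gram matrix of
# entrywise-squared Gaussians); an assumed toy setup, not the paper's.
import numpy as np

rng = np.random.default_rng(1)
n, d, trials = 8, 5, 300

def f(x):
    # x: flat vector of n*d independent Gaussians -> lambda_max of a Gram matrix
    Z = x.reshape(n, d) ** 2              # entrywise nonlinearity
    return np.linalg.eigvalsh(Z @ Z.T)[-1]

vals, es_terms = [], []
for _ in range(trials):
    x = rng.standard_normal(n * d)
    fx = f(x)
    vals.append(fx)
    total = 0.0
    for i in range(n * d):                # resample one coordinate at a time
        y = x.copy()
        y[i] = rng.standard_normal()
        total += (fx - f(y)) ** 2
    es_terms.append(total / 2)

print("Var f(X)         :", np.var(vals))
print("Efron-Stein bound:", np.mean(es_terms))  # should dominate the variance
```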
Related papers
- Statistics of the Random Matrix Spectral Form Factor [0.0]
We identify the form factor statistics to next leading order in a $D^{-1}$ expansion.
Our findings fully agree with numerics.
They are presented in a pedagogical way, highlighting new pathways in the study of statistical signatures at next leading order.
arXiv Detail & Related papers (2025-03-27T11:34:12Z)
- Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets. (A toy numerical version of this truncation appears in the first sketch after this list.)
arXiv Detail & Related papers (2024-05-23T12:26:25Z)
- Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z)
- Concentration of polynomial random matrices via Efron-Stein inequalities [0.3451964963586458]
In many applications, we need to analyze random matrices whose entries are polynomials in underlying independent random variables.
We present a general framework to obtain such bounds, based on the matrix Efron-Stein inequality developed by Paulin-Mackey-Tropp.
We derive bounds for "sparse graph matrices", which were obtained only recently by Jones et al. [FOCS 2021] using a nontrivial application of the trace power method.
arXiv Detail & Related papers (2022-09-06T17:12:30Z)
- When Random Tensors meet Random Matrices [50.568841545067144]
This paper studies asymmetric order-$d$ spiked tensor models with Gaussian noise.
We show that the analysis of the considered model boils down to the analysis of an equivalent spiked symmetric block-wise random matrix.
arXiv Detail & Related papers (2021-12-23T04:05:01Z)
- Test Set Sizing Via Random Matrix Theory [91.3755431537592]
This paper uses techniques from Random Matrix Theory to find the ideal training-testing data split for a simple linear regression.
It defines "ideal" as satisfying the integrity metric, i.e., the empirical model error equals the actual measurement noise.
This paper is the first to solve for the training and test size for any model in a way that is truly optimal.
arXiv Detail & Related papers (2021-12-11T13:18:33Z)
- Statistical limits of dictionary learning: random matrix theory and the spectral replica method [28.54289139061295]
We consider increasingly complex models of matrix denoising and dictionary learning in the Bayes-optimal setting.
We introduce a novel combination of the replica method from statistical mechanics and random matrix theory, coined the spectral replica method.
arXiv Detail & Related papers (2021-09-14T12:02:32Z)
- Optimization and Sampling Under Continuous Symmetry: Examples and Lie Theory [26.555110725656963]
We present examples involving Kostant's theorem, Lie groups, Lie algebras, and the Harish-Chandra--Itzykson--Zuber integral formulas.
We then present an introduction to Lie theory, an indispensable mathematical toolkit for capturing continuous symmetries.
arXiv Detail & Related papers (2021-09-02T16:44:44Z)
- Robust Compressed Sensing using Generative Models [98.64228459705859]
In this paper, we propose an algorithm inspired by the Median-of-Means (MOM).
Our algorithm guarantees recovery for heavy-tailed data, even in the presence of outliers.
arXiv Detail & Related papers (2020-06-16T19:07:41Z)
- Robust Matrix Completion with Mixed Data Types [0.0]
We consider the problem of recovering a structured low-rank matrix from partially observed entries of mixed data types.
Most approaches assume that there is only one underlying distribution, with the low-rank constraint regularized by the matrix Schatten norm.
We propose a computationally feasible statistical approach with strong recovery guarantees, along with a parallelizable algorithmic framework, that recovers a low-rank matrix from partially observed entries of mixed data types in one step.
arXiv Detail & Related papers (2020-05-25T21:35:10Z)
- Optimal Iterative Sketching with the Subsampled Randomized Hadamard Transform [64.90148466525754]
We study the performance of iterative sketching for least-squares problems.
We show that the convergence rates for Haar and randomized Hadamard matrices are identical, and asymptotically improve upon random projections.
These techniques may be applied to other algorithms that employ randomized dimension reduction. (A toy iterative-sketching loop with an SRHT appears in the second sketch after this list.)
arXiv Detail & Related papers (2020-02-03T16:17:50Z)
- Bridging Convex and Nonconvex Optimization in Robust PCA: Noise, Outliers, and Missing Data [20.31071912733189]
This paper delivers improved theoretical guarantees for the convex programming approach in low-rank matrix estimation.
We show that a principled convex program achieves near-optimal statistical accuracy, in terms of both the Euclidean loss and the $\ell_\infty$ loss.
arXiv Detail & Related papers (2020-01-15T18:54:29Z)
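First sketch, referenced from the entry on entrywise error bounds for kernel matrices: a toy numpy computation of the entrywise error of a rank-$r$ truncated eigendecomposition of an RBF kernel matrix. The data model, bandwidth, and rank are assumptions; the cited paper's bounds are analytical, not empirical.

```python
# First sketch (assumed toy model): entrywise error of a rank-r truncated
# eigendecomposition of an RBF kernel matrix.
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 3))                  # 200 points in R^3
sq = np.sum(X**2, axis=1)
D2 = sq[:, None] + sq[None, :] - 2 * X @ X.T       # pairwise squared distances
K = np.exp(-D2 / 2.0)                              # RBF kernel, bandwidth 1

w, V = np.linalg.eigh(K)                           # eigenvalues in ascending order
r = 20
K_r = (V[:, -r:] * w[-r:]) @ V[:, -r:].T           # keep the top r eigenpairs

print("max entrywise error |K - K_r|:", np.max(np.abs(K - K_r)))
print("spectral norm error (eigenvalue r+1):", w[-(r + 1)])
# The entrywise error can be much smaller than the spectral norm suggests;
# the cited paper quantifies this via eigenvector delocalisation.
```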
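Second sketch, referenced from the entry on iterative sketching: a toy iterative Hessian sketch for least squares using a subsampled randomized Hadamard transform (SRHT). Dimensions, step count, and the explicit Hadamard matrix are simplifying assumptions; fast implementations apply a Walsh-Hadamard transform in $O(n \log n)$ time instead.

```python
# Second sketch (assumed toy setup): iterative Hessian sketching for
# least squares with a subsampled randomized Hadamard transform (SRHT).
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(3)
n, d, m = 1024, 20, 128                    # n must be a power of two here

A = rng.standard_normal((n, d))
x_true = rng.standard_normal(d)
b = A @ x_true + 0.1 * rng.standard_normal(n)

H = hadamard(n) / np.sqrt(n)               # orthonormal Hadamard matrix
signs = rng.choice([-1.0, 1.0], size=n)    # random diagonal sign flips
rows = rng.choice(n, size=m, replace=False)
SA = np.sqrt(n / m) * (H @ (signs[:, None] * A))[rows]   # SRHT-sketched A

x = np.zeros(d)
for _ in range(5):                         # iterative Hessian sketch updates
    grad = A.T @ (b - A @ x)               # residual correlation A^T(b - Ax)
    x = x + np.linalg.solve(SA.T @ SA, grad)  # Newton step, sketched Hessian

x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
print("distance to exact least-squares solution:", np.linalg.norm(x - x_ls))
```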