Nonasymptotic Guarantees for Spiked Matrix Recovery with Generative Priors
- URL: http://arxiv.org/abs/2006.07953v2
- Date: Mon, 9 Nov 2020 05:08:16 GMT
- Title: Nonasymptotic Guarantees for Spiked Matrix Recovery with Generative Priors
- Authors: Jorio Cocola, Paul Hand, Vladislav Voroninski
- Abstract summary: We study an alternative prior where the low-rank component is in the range of a trained generative network.
We establish a favorable global optimization landscape for a nonlinear least squares objective.
This result suggests that generative priors have no computational-to-statistical gap for structured rank-one matrix recovery.
- Score: 12.798687476698063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many problems in statistics and machine learning require the reconstruction
of a rank-one signal matrix from noisy data. Enforcing additional prior
information on the rank-one component is often key to guaranteeing good
recovery performance. One such prior on the low-rank component is sparsity,
giving rise to the sparse principal component analysis problem. Unfortunately,
there is strong evidence that this problem suffers from a
computational-to-statistical gap, which may be fundamental. In this work, we
study an alternative prior where the low-rank component is in the range of a
trained generative network. We provide a non-asymptotic analysis with optimal
sample complexity, up to logarithmic factors, for rank-one matrix recovery
under an expansive-Gaussian network prior. Specifically, we establish a
favorable global optimization landscape for a nonlinear least squares
objective, provided the number of samples is on the order of the dimensionality
of the input to the generative model. This result suggests that generative
priors have no computational-to-statistical gap for structured rank-one matrix
recovery in the finite data, nonasymptotic regime. We present this analysis in
the case of both the Wishart and Wigner spiked matrix models.
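As an illustration of the setting, here is a minimal sketch of the spiked Wigner model with an expansive Gaussian ReLU network prior and a nonlinear least squares objective over the latent code. All dimensions, the noise level, and the two-layer architecture are illustrative choices, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: latent k, hidden n1, output n (expansive: k < n1 < n)
k, n1, n = 5, 50, 200

# Two-layer expansive ReLU network with i.i.d. Gaussian weights
W1 = rng.normal(size=(n1, k)) / np.sqrt(n1)
W2 = rng.normal(size=(n, n1)) / np.sqrt(n)

def G(z):
    """Generative prior: G(z) = relu(W2 @ relu(W1 @ z))."""
    return np.maximum(W2 @ np.maximum(W1 @ z, 0), 0)

# Spiked Wigner observation: Y = x* x*^T + sigma * H, with x* = G(z*)
z_star = rng.normal(size=k)
x_star = G(z_star)
sigma = 0.1
H = rng.normal(size=(n, n))
H = (H + H.T) / np.sqrt(2 * n)  # symmetric (Wigner-type) noise

Y = np.outer(x_star, x_star) + sigma * H

def objective(z):
    """Nonlinear least squares loss, minimized over the latent code z."""
    return np.linalg.norm(Y - np.outer(G(z), G(z)), "fro") ** 2
```

The landscape analysis in the paper concerns this type of objective; the sketch only sets it up and performs no optimization.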
Related papers
- Exact Recovery Guarantees for Parameterized Non-linear System Identification Problem under Adversarial Attacks [16.705631360131886]
We study the system identification problem for parameterized non-linear systems using basis functions under adversarial attacks.
Motivated by the LASSO-type estimators, we analyze the exact recovery property of a non-smooth estimator.
This is the first study on the sample complexity analysis of a non-smooth estimator for the non-linear system identification problem.
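For context on LASSO-type estimators, a generic sparse-recovery sketch solved by iterative soft thresholding (ISTA); the design matrix, noise level, and solver here are illustrative assumptions, not the paper's non-smooth estimator.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative sparse-recovery setup: y = A @ theta_star + noise
n, d, s = 80, 30, 3
A = rng.normal(size=(n, d)) / np.sqrt(n)
theta_star = np.zeros(d)
theta_star[:s] = [1.0, -2.0, 1.5]
y = A @ theta_star + 0.01 * rng.normal(size=n)

def ista(A, y, lam=0.01, steps=500):
    """Iterative soft thresholding for the LASSO objective
    0.5 * ||y - A @ theta||^2 + lam * ||theta||_1."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the smooth part's gradient
    theta = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ theta - y)          # gradient step
        z = theta - g / L
        theta = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # prox step
    return theta

theta_hat = ista(A, y)
```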
arXiv Detail & Related papers (2024-08-30T22:12:57Z)
- Leave-One-Out Analysis for Nonconvex Robust Matrix Completion with General Thresholding Functions [7.697455644733554]
We study the problem of robust matrix completion (RMC).
We consider a simple yet efficient algorithm for the analysis.
To the best of our knowledge, this is the first leave-one-out analysis for this problem.
arXiv Detail & Related papers (2024-07-28T09:47:36Z)
- Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may for example characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly-minimal entry-wise error.
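A minimal sketch of the spectral approach the summary describes: truncate the SVD of the noisy observation matrix to the target rank. Dimensions and noise level are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative low-rank matrix (e.g., expected arm rewards in a low-rank bandit)
m, n, r = 60, 80, 2
U = rng.normal(size=(m, r))
V = rng.normal(size=(n, r))
M = U @ V.T                      # ground-truth rank-r matrix

noise = 0.05 * rng.normal(size=(m, n))
Y = M + noise                    # noisy entry-wise observations

# Spectral estimate: best rank-r approximation of Y via truncated SVD
Us, s, Vts = np.linalg.svd(Y, full_matrices=False)
M_hat = Us[:, :r] * s[:r] @ Vts[:r, :]
```

Since M has rank r and M_hat is the best rank-r approximation of Y, the triangle inequality bounds the estimation error by twice the noise energy; entry-wise guarantees like those in the paper require a finer analysis.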
arXiv Detail & Related papers (2023-10-10T17:06:41Z)
- The Decimation Scheme for Symmetric Matrix Factorization [0.0]
Matrix factorization is an inference problem that has acquired importance due to its vast range of applications.
We study this extensive rank problem, extending the alternative 'decimation' procedure that we recently introduced.
We introduce a simple algorithm based on a ground state search that implements decimation and performs matrix factorization.
arXiv Detail & Related papers (2023-07-31T10:53:45Z)
- Deep Unrolling for Nonconvex Robust Principal Component Analysis [75.32013242448151]
We design algorithms for Robust Principal Component Analysis (RPCA).
It consists in decomposing a matrix into the sum of a low-rank matrix and a sparse matrix.
arXiv Detail & Related papers (2023-07-12T03:48:26Z)
- Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
By "weakly supervised" we mean that labels are known for some training points, unknown for others, and uncertain for the rest due to random noise or other causes such as a lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z)
- Sparse PCA via $l_{2,p}$-Norm Regularization for Unsupervised Feature Selection [138.97647716793333]
We propose a simple and efficient unsupervised feature selection method, by combining reconstruction error with $l_{2,p}$-norm regularization.
We present an efficient optimization algorithm to solve the proposed unsupervised model, and analyse the convergence and computational complexity of the algorithm theoretically.
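A sketch of an $l_{2,p}$-style row-sparsity penalty, paired with a toy reconstruction-error objective. The exact objective, data, and parameter values below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def l2p_norm(W, p):
    """Row-wise l_{2,p} penalty: sum_i ||W[i, :]||_2 ** p.
    For p < 1 this drives entire rows to zero (feature selection)."""
    return np.sum(np.linalg.norm(W, axis=1) ** p)

# Toy data: n samples, d features
X = rng.normal(size=(100, 20))

def objective(W, lam=0.1, p=0.5):
    """Reconstruction error plus l_{2,p} regularization (illustrative form)."""
    return np.linalg.norm(X - X @ W @ W.T, "fro") ** 2 + lam * l2p_norm(W, p)
```

Rows of W that are zeroed by the penalty correspond to features dropped from the reconstruction.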
arXiv Detail & Related papers (2020-12-29T04:08:38Z)
- Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the l1 norm of the sparse matrix (to promote sparsity).
In this paper, motivated by some recent works on low-
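The two proximal operators underlying such convex formulations can be sketched as follows: singular value thresholding for the nuclear norm term and entry-wise soft thresholding for the l1 term. This is a generic sketch of those standard operators, not the paper's continuation method.

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * nuclear norm.
    Shrinks each singular value of X by tau, clipping at zero."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0)) @ Vt

def soft(X, tau):
    """Entry-wise soft thresholding: proximal operator of tau * l1 norm."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0)
```

Alternating these two operators on the low-rank and sparse components is the core step of many proximal algorithms for robust matrix completion.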
arXiv Detail & Related papers (2020-08-18T04:46:22Z)
- A Scalable, Adaptive and Sound Nonconvex Regularizer for Low-rank Matrix Completion [60.52730146391456]
We propose a new scalable nonconvex low-rank regularizer, called the "nuclear Frobenius norm" regularizer, which is adaptive and sound.
It bypasses the computation of singular values and allows fast optimization.
It obtains state-of-the-art recovery performance while being the fastest among existing matrix learning methods.
arXiv Detail & Related papers (2020-08-14T18:47:58Z)
- Robust Matrix Completion with Mixed Data Types [0.0]
We consider the problem of recovering a structured low rank matrix with partially observed entries with mixed data types.
Most approaches assume that there is only one underlying distribution and the low rank constraint is regularized by the matrix Schatten Norm.
We propose a computationally feasible statistical approach with strong recovery guarantees along with an algorithmic framework suited for parallelization to recover a low rank matrix with partially observed entries for mixed data types in one step.
arXiv Detail & Related papers (2020-05-25T21:35:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.