Provable Low Rank Plus Sparse Matrix Separation Via Nonconvex
Regularizers
- URL: http://arxiv.org/abs/2109.12713v1
- Date: Sun, 26 Sep 2021 22:09:42 GMT
- Title: Provable Low Rank Plus Sparse Matrix Separation Via Nonconvex
Regularizers
- Authors: April Sagan, John E. Mitchell
- Abstract summary: This paper considers a large class of problems where we seek to recover a low rank matrix and/or sparse vector from some set of measurements.
While methods based on convex relaxations suffer from estimator bias, and other nonconvex methods require the rank or sparsity to be known a priori, we use nonconvex regularizers.
We present a novel analysis of the alternating proximal gradient descent algorithm applied to such problems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper considers a large class of problems where we seek to recover a low
rank matrix and/or sparse vector from some set of measurements. While methods
based on convex relaxations suffer from a (possibly large) estimator bias, and
other nonconvex methods require the rank or sparsity to be known a priori, we
use nonconvex regularizers to minimize the rank and $l_0$ norm without the
estimator bias from the convex relaxation. We present a novel analysis of the
alternating proximal gradient descent algorithm applied to such problems, and
bound the error between the iterates and the ground truth sparse and low rank
matrices. The algorithm and error bound can be applied to sparse optimization,
matrix completion, and robust principal component analysis as special cases of
our results.
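As a concrete illustration of the algorithm class analyzed here, the sketch below applies alternating proximal gradient descent to a fully observed robust-PCA-style instance M = L + S. It is a minimal illustration rather than the authors' implementation: the entrywise hard-thresholding and singular-value-thresholding steps stand in for the proximal operators of nonconvex $l_0$ and rank surrogates, and the step size, thresholds, and iteration count are illustrative, untuned choices.

```python
import numpy as np

def hard_threshold(X, tau):
    """Proximal step for an l0-style (nonconvex) penalty: zero out entries below tau."""
    out = X.copy()
    out[np.abs(out) < tau] = 0.0
    return out

def singular_value_threshold(X, tau):
    """Proximal step for a nonconvex rank surrogate: drop singular values below tau."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s[s < tau] = 0.0
    return (U * s) @ Vt

def alternating_prox_grad(M, tau_L=5.0, tau_S=3.0, step=0.5, iters=300):
    """Alternating proximal gradient descent for M ~ L + S (illustrative choices only)."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        grad_L = L + S - M                                   # gradient of 0.5*||L+S-M||_F^2 in L
        L = singular_value_threshold(L - step * grad_L, tau_L)
        grad_S = L + S - M                                   # gradient in S at the updated L
        S = hard_threshold(S - step * grad_S, tau_S)
    return L, S

# Tiny synthetic test: a rank-2 matrix plus sparse outliers.
rng = np.random.default_rng(0)
L_true = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 50))
S_true = np.where(rng.random((50, 50)) < 0.05, 10.0, 0.0)
L_hat, S_hat = alternating_prox_grad(L_true + S_true)
print("relative error in L:", np.linalg.norm(L_hat - L_true) / np.linalg.norm(L_true))
```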
Related papers
- Leave-One-Out Analysis for Nonconvex Robust Matrix Completion with General Thresholding Functions [7.697455644733554]
We study the problem of robust matrix completion (RMC).
We analyze a simple yet efficient algorithm for this problem.
To the best of our knowledge, this is the first leave-one-out analysis for nonconvex robust matrix completion with general thresholding functions.
arXiv Detail & Related papers (2024-07-28T09:47:36Z) - Entrywise error bounds for low-rank approximations of kernel matrices [55.524284152242096]
We derive entrywise error bounds for low-rank approximations of kernel matrices obtained using the truncated eigen-decomposition.
A key technical innovation is a delocalisation result for the eigenvectors of the kernel matrix corresponding to small eigenvalues.
We validate our theory with an empirical study of a collection of synthetic and real-world datasets.
arXiv Detail & Related papers (2024-05-23T12:26:25Z) - Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that the methods under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z) - The Inductive Bias of Flatness Regularization for Deep Matrix
Factorization [58.851514333119255]
This work takes the first step toward understanding the inductive bias of the minimum trace of the Hessian solutions in deep linear networks.
We show that for all depths greater than one, under the standard Restricted Isometry Property (RIP) on the measurements, minimizing the trace of the Hessian is approximately equivalent to minimizing the Schatten 1-norm of the corresponding end-to-end matrix parameters.
arXiv Detail & Related papers (2023-06-22T23:14:57Z) - A Validation Approach to Over-parameterized Matrix and Image Recovery [29.29430943998287]
We consider the problem of recovering a low-rank matrix from several random linear measurements.
We show that the proposed validation approach can also be efficiently used for image recovery with the deep image prior, which over-parameterizes an image with a deep network.
arXiv Detail & Related papers (2022-09-21T22:01:23Z) - On Stochastic Moving-Average Estimators for Non-Convex Optimization [105.22760323075008]
In this paper, we demonstrate the power of a widely used stochastic estimator based on moving average (SEMA) for solving a range of non-convex optimization problems.
We also present state-of-the-art results for these problems.
arXiv Detail & Related papers (2021-04-30T08:50:24Z) - Low-Rank Matrix Recovery with Scaled Subgradient Methods: Fast and
Robust Convergence Without the Condition Number [34.0533596121548]
Many problems in data science can be treated as estimating a low-rank matrix from highly incomplete, sometimes even corrupted, observations.
One popular approach is to resort to matrix factorization, where the low-rank matrix factors are optimized via first-order methods over a smooth loss.
arXiv Detail & Related papers (2020-10-26T06:21:14Z) - Robust Low-rank Matrix Completion via an Alternating Manifold Proximal
Gradient Continuation Method [47.80060761046752]
Robust low-rank matrix completion (RMC) has been studied extensively for computer vision, signal processing and machine learning applications.
This problem aims to decompose a partially observed matrix into the superposition of a low-rank matrix and a sparse matrix, where the sparse matrix captures the grossly corrupted entries of the matrix.
A widely used approach to tackle RMC is to consider a convex formulation, which minimizes the nuclear norm of the low-rank matrix (to promote low-rankness) and the $l_1$ norm of the sparse matrix (to promote sparsity); this formulation is written out after this list.
In this paper, motivated by some recent works on low-
arXiv Detail & Related papers (2020-08-18T04:46:22Z) - Accelerating Ill-Conditioned Low-Rank Matrix Estimation via Scaled
Gradient Descent [34.0533596121548]
Low-rank matrix estimation is a canonical problem that finds numerous applications in signal processing, machine learning and imaging science.
We show that ScaledGD converges linearly at a rate independent of the condition number of the low-rank matrix.
Our analysis is also applicable to general loss functions that are restricted strongly convex and smooth over low-rank matrices.
arXiv Detail & Related papers (2020-05-18T17:17:16Z) - Optimal Randomized First-Order Methods for Least-Squares Problems [56.05635751529922]
This class of algorithms encompasses several randomized methods among the fastest solvers for least-squares problems.
We focus on two classical embeddings, namely, Gaussian projections and subsampled Hadamard transforms.
Our resulting algorithm yields the best complexity known for solving least-squares problems with no condition number dependence.
arXiv Detail & Related papers (2020-02-21T17:45:32Z)
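The "Optimal Randomized First-Order Methods for Least-Squares Problems" entry above refers to randomized embeddings such as Gaussian projections. The snippet below is a minimal sketch-and-solve illustration of what a Gaussian embedding does to a least-squares problem; it is not that paper's algorithm (which builds first-order solvers on top of such embeddings), and the sketch size and the use of numpy's lstsq are illustrative assumptions.

```python
import numpy as np

def gaussian_sketch_solve(A, b, sketch_size, seed=0):
    """Sketch-and-solve: compress an overdetermined least-squares problem with a
    Gaussian embedding, then solve the much smaller problem. Illustrative only."""
    m, _ = A.shape
    S = np.random.default_rng(seed).standard_normal((sketch_size, m)) / np.sqrt(sketch_size)
    x_sketch, *_ = np.linalg.lstsq(S @ A, S @ b, rcond=None)
    return x_sketch

# Usage: a tall system with many more rows than columns.
rng = np.random.default_rng(1)
A = rng.standard_normal((10000, 50))
b = A @ rng.standard_normal(50) + 0.01 * rng.standard_normal(10000)
x_exact, *_ = np.linalg.lstsq(A, b, rcond=None)
x_approx = gaussian_sketch_solve(A, b, sketch_size=500)
print("relative error:", np.linalg.norm(x_approx - x_exact) / np.linalg.norm(x_exact))
```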
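For context on the "Robust Low-rank Matrix Completion via an Alternating Manifold Proximal Gradient Continuation Method" entry above, the convex formulation it describes is typically written as follows. The notation below, with observed index set $\Omega$, sampling operator $P_\Omega$, and trade-off parameter $\lambda$, is the standard form assumed here rather than a quotation from that paper:

$$ \min_{L,\,S} \; \|L\|_* + \lambda \|S\|_1 \quad \text{s.t.} \quad P_\Omega(L + S) = P_\Omega(M), $$

where $\|L\|_*$ is the nuclear norm (sum of singular values, promoting low rank) and $\|S\|_1$ is the entrywise $l_1$ norm (promoting sparsity). This is the kind of convex relaxation whose estimator bias the main paper avoids by replacing the nuclear and $l_1$ norms with nonconvex surrogates for the rank and $l_0$ norm.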
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.