Sharp Restricted Isometry Property Bounds for Low-rank Matrix Recovery
Problems with Corrupted Measurements
- URL: http://arxiv.org/abs/2105.08232v4
- Date: Tue, 25 Jul 2023 07:23:18 GMT
- Title: Sharp Restricted Isometry Property Bounds for Low-rank Matrix Recovery
Problems with Corrupted Measurements
- Authors: Ziye Ma, Yingjie Bi, Javad Lavaei, Somayeh Sojoudi
- Abstract summary: We study a general low-rank matrix recovery problem with linear measurements corrupted by some noise.
We present a local guarantee for problems with an arbitrary RIP constant, which states that any local minimizer is either close to the ground truth or far away from it.
- Score: 18.16299386229588
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study a general low-rank matrix recovery problem with
linear measurements corrupted by some noise. The objective is to understand
under what conditions on the restricted isometry property (RIP) of the problem,
local search methods can find the ground truth with a small error. By analyzing
the landscape of the non-convex problem, we first propose a global guarantee on
the maximum distance between an arbitrary local minimizer and the ground truth
under the assumption that the RIP constant is smaller than $1/2$. We show that
this distance shrinks to zero as the intensity of the noise reduces. Our new
guarantee is sharp in terms of the RIP constant and is much stronger than the
existing results. We then present a local guarantee for problems with an
arbitrary RIP constant, which states that any local minimizer is either
considerably close to the ground truth or far away from it. Next, we prove the
strict saddle property, which guarantees the global convergence of the
perturbed gradient descent method in polynomial time. The developed results
demonstrate how the noise intensity and the RIP constant of the problem affect
the landscape of the problem.
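For concreteness, a measurement operator $\mathcal{A}$ satisfies the RIP of rank $2r$ with constant $\delta$ if $(1-\delta)\|M\|_F^2 \le \|\mathcal{A}(M)\|_2^2 \le (1+\delta)\|M\|_F^2$ for all matrices $M$ of rank at most $2r$. The sketch below illustrates the setting, not the paper's exact algorithm: noisy measurements of a rank-$r$ ground truth, a factorized (Burer-Monteiro) least-squares objective, and perturbed gradient descent that injects a small random perturbation whenever the gradient is tiny, which is how the strict saddle property is exploited to escape approximate saddles. All sizes, step sizes, and tolerances are illustrative assumptions.

```python
# A minimal sketch of the problem setting: noisy low-rank matrix sensing in
# the Burer-Monteiro factorization, solved by perturbed gradient descent.
# All constants (sizes, step size, tolerances) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, r, m = 20, 2, 400       # matrix size, rank, number of measurements
sigma = 0.01               # noise intensity

# Ground truth M* = Z Z^T and Gaussian sensing matrices A_k; Gaussian
# ensembles satisfy the RIP with high probability when m is large enough.
Z = rng.standard_normal((n, r))
M_star = Z @ Z.T
A = rng.standard_normal((m, n, n)) / np.sqrt(m)
b = np.einsum('kij,ij->k', A, M_star) + sigma * rng.standard_normal(m)

def loss_grad(X):
    """Value and gradient of f(X) = 1/2 * sum_k (<A_k, X X^T> - b_k)^2."""
    res = np.einsum('kij,ij->k', A, X @ X.T) - b    # residuals
    G = np.einsum('k,kij->ij', res, A)              # sum_k res_k * A_k
    return 0.5 * res @ res, (G + G.T) @ X           # chain rule through X X^T

# Perturbed gradient descent: ordinary gradient steps, plus a small random
# perturbation whenever the gradient is tiny, so the iterates can leave
# approximate saddle points (this is where the strict saddle property enters).
X = rng.standard_normal((n, r))                     # random initialization
eta, g_tol, radius = 2e-3, 1e-4, 1e-2
for _ in range(5000):
    _, g = loss_grad(X)
    if np.linalg.norm(g) < g_tol:
        X = X + radius * rng.standard_normal((n, r))
    else:
        X = X - eta * g

print(np.linalg.norm(X @ X.T - M_star, 'fro'))      # recovery error
```

On such a synthetic instance the printed Frobenius error should be small and scale with $\sigma$, consistent with the global guarantee that the distance to the ground truth shrinks to zero as the noise intensity decreases (for RIP constant below $1/2$).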
Related papers
- Stochastic Optimization with Constraints: A Non-asymptotic Instance-Dependent Analysis [2.1756081703276]
We analyze the behavior of a natural variance-reduced proximal gradient (VRPG) algorithm for convex optimization under convex constraints.
Our main result is a non-asymptotic guarantee for the VRPG algorithm.
We show that our guarantee captures the complexity of the loss function, the variability of the noise, and the geometry of the constraint set.
arXiv Detail & Related papers (2024-03-24T14:45:11Z) - Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that the methods under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z) - Fast and Accurate Estimation of Low-Rank Matrices from Noisy
Measurements via Preconditioned Non-Convex Gradient Descent [11.74020933567308]
Non-convex gradient descent is a common approach for estimating a low-rank $n \times n$ ground truth matrix from noisy measurements.
In this paper, we describe how preconditioning should be done for noisy measurements to accelerate local convergence to minimax optimality.
arXiv Detail & Related papers (2023-05-26T19:32:07Z) - Revisiting Rotation Averaging: Uncertainties and Robust Losses [51.64986160468128]
We argue that the main problem of current methods is the minimized cost function that is only weakly connected with the input data via the estimated epipolar geometries.
We propose to better model the underlying noise distributions by directly propagating the uncertainty from the point correspondences into the rotation averaging.
arXiv Detail & Related papers (2023-03-09T11:51:20Z) - Noisy Low-rank Matrix Optimization: Geometry of Local Minima and
Convergence Rate [14.191310794366075]
This paper is concerned with low-rank matrix optimization, which has found a wide range of applications in machine learning.
We develop a framework that can deal with random corruptions to general objective functions, where the noise model is arbitrary.
We characterize the geometry of the spurious local minima of the problem in a local region around the ground truth in the case when the RIP constant is greater than $1/3$.
arXiv Detail & Related papers (2022-03-08T07:44:47Z) - Global Convergence of Sub-gradient Method for Robust Matrix Recovery:
Small Initialization, Noisy Measurements, and Over-parameterization [4.7464518249313805]
The sub-gradient method (SubGM) is used to recover a low-rank matrix from a limited number of measurements.
We show that SubGM converges to the true solution, even under arbitrarily large and arbitrarily dense noise values; a minimal sketch of this approach appears after this list.
arXiv Detail & Related papers (2022-02-17T17:50:04Z) - Minimax Optimization: The Case of Convex-Submodular [50.03984152441271]
Minimax problems extend beyond the continuous domain to mixed continuous-discrete domains or even fully discrete domains.
We introduce the class of convex-submodular minimax problems, where the objective is convex with respect to the continuous variable and submodular with respect to the discrete variable.
Our proposed algorithms are iterative and combine tools from both discrete and continuous optimization.
arXiv Detail & Related papers (2021-11-01T21:06:35Z) - Risk Minimization from Adaptively Collected Data: Guarantees for
Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class.
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
arXiv Detail & Related papers (2021-06-03T09:50:13Z) - Implicit Regularization of Sub-Gradient Method in Robust Matrix
Recovery: Don't be Afraid of Outliers [6.320141734801679]
We show that a simple sub-gradient method converges to the true low-rank solution efficiently.
We also build on a new notion of the restricted isometry property, called sign-RIP, to prove the robustness of the method.
arXiv Detail & Related papers (2021-02-05T02:52:00Z) - Computational Barriers to Estimation from Low-Degree Polynomials [81.67886161671379]
We study the power of low-degree polynomials for the task of detecting the presence of hidden structures.
For a large class of "signal plus noise" problems, we give a user-friendly lower bound for the best possible mean squared error achievable by any low-degree polynomial.
As applications, we give a tight characterization of the low-degree minimum mean squared error for the planted submatrix and planted dense subgraph problems.
arXiv Detail & Related papers (2020-08-05T17:52:10Z) - Log-Likelihood Ratio Minimizing Flows: Towards Robust and Quantifiable
Neural Distribution Alignment [52.02794488304448]
We propose a new distribution alignment method based on a log-likelihood ratio statistic and normalizing flows.
We experimentally verify that minimizing the resulting objective results in domain alignment that preserves the local structure of input domains.
arXiv Detail & Related papers (2020-03-26T22:10:04Z)
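As referenced in the entry on "Global Convergence of Sub-gradient Method for Robust Matrix Recovery" above, the following is a minimal sketch of SubGM under an $\ell_1$ measurement loss. The corruption model, the small-initialization scale, and the geometrically decaying step size are illustrative assumptions, not the exact schedule analyzed in the cited paper.

```python
# A minimal SubGM sketch for robust matrix recovery with an l1 loss.
# Corruption level, initialization scale, and step-size schedule are
# illustrative assumptions, not those analyzed in the cited paper.
import numpy as np

rng = np.random.default_rng(1)
n, r, m = 20, 2, 400
Z = rng.standard_normal((n, r))
M_star = Z @ Z.T
A = rng.standard_normal((m, n, n)) / np.sqrt(m)
b = np.einsum('kij,ij->k', A, M_star)

# Grossly corrupt a constant fraction of the measurements.
mask = rng.random(m) < 0.3
b[mask] += 10.0 * rng.standard_normal(mask.sum())

X = 1e-3 * rng.standard_normal((n, r))              # small initialization
step = 0.01
for _ in range(3000):
    res = np.einsum('kij,ij->k', A, X @ X.T) - b
    G = np.einsum('k,kij->ij', np.sign(res), A)     # sub-gradient of l1 loss
    X = X - step * (G + G.T) @ X
    step *= 0.999                                   # geometric decay

print(np.linalg.norm(X @ X.T - M_star, 'fro'))      # recovery error
```

Even with a constant fraction of measurements grossly corrupted, the sign pattern of the residuals still correlates with the ground-truth direction on average, which is the intuition that notions such as sign-RIP (see the entry above) make precise.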
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.