Implicit Regularization of Sub-Gradient Method in Robust Matrix
Recovery: Don't be Afraid of Outliers
- URL: http://arxiv.org/abs/2102.02969v1
- Date: Fri, 5 Feb 2021 02:52:00 GMT
- Title: Implicit Regularization of Sub-Gradient Method in Robust Matrix
Recovery: Don't be Afraid of Outliers
- Authors: Jianhao Ma and Salar Fattahi
- Abstract summary: We show that a simple sub-gradient method converges to the true low-rank solution efficiently.
We also build upon a new notion of restricted isometry property, called sign-RIP, to prove the robustness of the method.
- Score: 6.320141734801679
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is well-known that simple short-sighted algorithms, such as
gradient descent, generalize well in over-parameterized learning tasks due to
their implicit regularization. However, it is unknown whether the implicit
regularization of these algorithms can be extended to robust learning tasks,
where a subset of samples may be grossly corrupted with noise. In this work,
we provide a positive answer to this question in the context of the robust
matrix recovery problem. In particular, we consider the problem of recovering
a low-rank matrix from a number of linear measurements, where a subset of the
measurements is corrupted with large noise. We show that a simple sub-gradient
method converges to the true low-rank solution efficiently when it is applied
to the over-parameterized l1-loss function without any explicit regularization
or rank constraint. Moreover, by building upon a new notion of the restricted
isometry property, called sign-RIP, we prove the robustness of the
sub-gradient method against outliers in the over-parameterized regime. In
particular, we show that, with Gaussian measurements, the sub-gradient method
is guaranteed to converge to the true low-rank solution even if an arbitrary
fraction of the measurements is grossly corrupted with noise.
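Below is a minimal NumPy sketch of the procedure the abstract describes:
sub-gradient descent on the over-parameterized l1-loss
f(U) = (1/m) * sum_i |<A_i, U U^T> - y_i|, with a small random initialization
and no explicit regularization or rank constraint. The problem sizes,
corruption level, and step-size schedule are illustrative assumptions, not
values taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n, r, m = 20, 2, 1200        # matrix size, true rank, number of measurements
corruption_frac = 0.3        # fraction of grossly corrupted measurements

# Ground truth: a rank-r PSD matrix X* = U* U*^T.
U_star = rng.standard_normal((n, r))
X_star = U_star @ U_star.T

# Gaussian measurements y_i = <A_i, X*> + s_i, where a random subset of
# the s_i are large outliers.
A = rng.standard_normal((m, n, n))
y = np.einsum('ijk,jk->i', A, X_star)
mask = rng.random(m) < corruption_frac
y[mask] += 100.0 * rng.standard_normal(mask.sum())

# Over-parameterization: the factor U is n x n (search rank n > r),
# with a small random initialization and no explicit rank constraint.
U = 1e-3 * rng.standard_normal((n, n))

for t in range(2000):
    residual = np.einsum('ijk,jk->i', A, U @ U.T) - y
    # A sub-gradient of f(U) = (1/m) sum_i |<A_i, U U^T> - y_i| is
    # (1/m) sum_i sign(residual_i) (A_i + A_i^T) U.
    S = np.einsum('i,ijk->jk', np.sign(residual), A) / m
    step = 0.5 * 0.995 ** t  # geometrically decaying step size (illustrative)
    U -= step * (S + S.T) @ U

print('relative recovery error:',
      np.linalg.norm(U @ U.T - X_star) / np.linalg.norm(X_star))
```

With roughly 30% of the measurements grossly corrupted, the iterate U U^T
should still approach X*; the sign-RIP analysis is what explains why the
sign-weighted sub-gradient keeps pointing toward the true solution.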
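The sign-RIP notion can also be previewed numerically. A rough caricature of
the property (not the paper's formal definition): for an i.i.d. standard
Gaussian matrix A, E[sign(<A, X>) A] = sqrt(2/pi) * X / ||X||_F, so the
sign-weighted average of the measurement matrices recovers the direction of X
even though the measurement magnitudes are discarded. A quick Monte-Carlo
check of this identity:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r, m = 20, 2, 20000

# A fixed low-rank direction X / ||X||_F.
U = rng.standard_normal((n, r))
X = U @ U.T
X_dir = X / np.linalg.norm(X)

# Sign-weighted average of i.i.d. Gaussian measurement matrices.
A = rng.standard_normal((m, n, n))
signs = np.sign(np.einsum('ijk,jk->i', A, X))
est = np.einsum('i,ijk->jk', signs, A) / m

# The alignment should be close to sqrt(2/pi) ~ 0.798, and the deviation
# should shrink as m grows.
print('alignment <est, X_dir>:', np.sum(est * X_dir))
print('deviation ||est - sqrt(2/pi) X_dir||:',
      np.linalg.norm(est - np.sqrt(2 / np.pi) * X_dir))
```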
Related papers
- Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that the methods under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z)
- Spectral Entry-wise Matrix Estimation for Low-Rank Reinforcement Learning [53.445068584013896]
We study matrix estimation problems arising in reinforcement learning (RL) with low-rank structure.
In low-rank bandits, the matrix to be recovered specifies the expected arm rewards, and for low-rank Markov Decision Processes (MDPs), it may for example characterize the transition kernel of the MDP.
We show that simple spectral-based matrix estimation approaches efficiently recover the singular subspaces of the matrix and exhibit nearly-minimal entry-wise error.
arXiv Detail & Related papers (2023-10-10T17:06:41Z)
- Robust online joint state/input/parameter estimation of linear systems [0.0]
This paper presents a method for jointly estimating the state, input, and parameters of linear systems in an online fashion.
The method is specially designed for measurements that are corrupted with non-Gaussian noise or outliers.
arXiv Detail & Related papers (2022-04-12T09:41:28Z)
- Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization [4.7464518249313805]
The sub-gradient method (SubGM) is used to recover a low-rank matrix from a limited number of measurements.
We show that SubGM converges to the true solution, even under arbitrarily large and arbitrarily dense noise values.
arXiv Detail & Related papers (2022-02-17T17:50:04Z)
- Rank Overspecified Robust Matrix Recovery: Subgradient Method and Exact Recovery [37.05862765171643]
We consider a robust factorization of a low-rank matrix with no prior knowledge of the rank.
We show that our particular design of diminishing step sizes effectively prevents overfitting under overspecified models.
arXiv Detail & Related papers (2021-09-23T05:54:46Z)
- Solving weakly supervised regression problem using low-rank manifold regularization [77.34726150561087]
We solve a weakly supervised regression problem.
Under "weakly" we understand that for some training points the labels are known, for some unknown, and for others uncertain due to the presence of random noise or other reasons such as lack of resources.
In the numerical section, we applied the suggested method to artificial and real datasets using Monte-Carlo modeling.
arXiv Detail & Related papers (2021-04-13T23:21:01Z)
- Dimension Free Generalization Bounds for Non Linear Metric Learning [61.193693608166114]
We provide uniform generalization bounds for two regimes: the sparse regime and a non-sparse regime.
We show that by relying on a different, new property of the solutions, it is still possible to provide dimension free generalization guarantees.
arXiv Detail & Related papers (2021-02-07T14:47:00Z)
- Low-Rank Matrix Recovery with Scaled Subgradient Methods: Fast and Robust Convergence Without the Condition Number [34.0533596121548]
Many problems in data science can be treated as estimating a low-rank matrix from highly incomplete, sometimes even corrupted, observations.
One popular approach is to resort to matrix factorization, where the low-rank matrix factors are optimized via first-order methods over a smooth loss.
arXiv Detail & Related papers (2020-10-26T06:21:14Z)
- Learning Mixtures of Low-Rank Models [89.39877968115833]
We study the problem of learning mixtures of low-rank models.
We develop an algorithm that is guaranteed to recover the unknown matrices with near-optimal sample complexity.
In addition, the proposed algorithm is provably stable against random noise.
arXiv Detail & Related papers (2020-09-23T17:53:48Z)
- Robust Compressed Sensing using Generative Models [98.64228459705859]
In this paper, we propose an algorithm inspired by the Median-of-Means (MOM) approach; a generic sketch of the MOM idea follows this list.
Our algorithm guarantees recovery for heavy-tailed data, even in the presence of outliers.
arXiv Detail & Related papers (2020-06-16T19:07:41Z)
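For the Median-of-Means entry above, here is the generic MOM construction in
the same NumPy style (a textbook illustration, not that paper's recovery
algorithm): split the sample into buckets, average within each bucket, and
report the median of the bucket means, which tolerates heavy tails and a few
gross outliers.

```python
import numpy as np

def median_of_means(x, n_buckets, seed=None):
    """Median-of-Means estimate of the mean of a 1-D sample."""
    rng = np.random.default_rng(seed)
    x = rng.permutation(x)                  # random bucket assignment
    buckets = np.array_split(x, n_buckets)
    return float(np.median([b.mean() for b in buckets]))

rng = np.random.default_rng(2)
sample = rng.standard_t(2, size=10_000)     # heavy-tailed, true mean 0
sample[:20] += 1e6                          # a few gross outliers

print('empirical mean :', sample.mean())    # dragged far from 0 by outliers
print('median of means:', median_of_means(sample, n_buckets=100))
```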
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides (including all listed content) and is not responsible for any consequences of its use.