Square Root Principal Component Pursuit: Tuning-Free Noisy Robust Matrix Recovery
- URL: http://arxiv.org/abs/2106.09211v1
- Date: Thu, 17 Jun 2021 02:28:11 GMT
- Title: Square Root Principal Component Pursuit: Tuning-Free Noisy Robust Matrix Recovery
- Authors: Junhui Zhang, Jingkai Yan, John Wright
- Abstract summary: We propose a new framework for low-rank matrix recovery from observations corrupted with noise and outliers.
Inspired by the square root Lasso, this new formulation does not require prior knowledge of the noise level.
We show that a single, universal choice of the regularization parameter suffices to achieve reconstruction error proportional to the (a priori unknown) noise level.
- Score: 8.581512812219737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new framework -- Square Root Principal Component Pursuit -- for
low-rank matrix recovery from observations corrupted with noise and outliers.
Inspired by the square root Lasso, this new formulation does not require prior
knowledge of the noise level. We show that a single, universal choice of the
regularization parameter suffices to achieve reconstruction error proportional
to the (a priori unknown) noise level. In comparison, previous formulations
such as stable PCP rely on noise-dependent parameters to achieve similar
performance, and are therefore challenging to deploy in applications where the
noise level is unknown. We validate the effectiveness of our new method through
experiments on simulated and real datasets. Our simulations corroborate the
claim that a universal choice of the regularization parameter yields near
optimal performance across a range of noise levels, indicating that the
proposed method outperforms the (somewhat loose) bound proved here.
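As a concrete illustration of the formulation the abstract describes: keep the usual PCP penalties (a nuclear norm on the low-rank part, an entrywise l1 norm on the outliers), but, by analogy with the square root Lasso, penalize the residual with an unsquared Frobenius norm, so the data-fit weight does not have to be rescaled with the unknown noise level. The CVXPY sketch below is illustrative only; the weights `lam` and `mu` are plausible placeholders, not the universal constants analyzed in the paper.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, r = 40, 40, 2

# Synthetic observation: low-rank signal + sparse outliers + dense noise.
L_true = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
S_true = np.zeros((m, n))
mask = rng.random((m, n)) < 0.05                  # ~5% gross corruptions
S_true[mask] = 10.0 * rng.standard_normal(mask.sum())
sigma = 0.1                                       # noise level, unknown to the solver
M = L_true + S_true + sigma * rng.standard_normal((m, n))

L = cp.Variable((m, n))
S = cp.Variable((m, n))
lam = 1.0 / np.sqrt(max(m, n))                    # PCP-style sparsity weight (placeholder)
mu = 1.0                                          # fit weight; the paper prescribes a universal choice

# Square-root PCP: the data-fit term is the *unsquared* Frobenius norm,
# so mu need not scale with the (a priori unknown) noise level.
objective = cp.Minimize(
    cp.norm(L, "nuc") + lam * cp.sum(cp.abs(S)) + mu * cp.norm(M - L - S, "fro")
)
cp.Problem(objective).solve()

rel_err = np.linalg.norm(L.value - L_true, "fro") / np.linalg.norm(L_true, "fro")
print(f"relative recovery error: {rel_err:.3f}")
```

The unsquared residual is the design point the abstract emphasizes: with the squared loss of stable PCP, the fit weight must track the noise level, whereas here a single, universal choice is claimed to give reconstruction error proportional to the (unknown) noise level.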
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Robust Learning under Hybrid Noise [24.36707245704713]
We propose a novel unified learning framework called "Feature and Label Recovery" (FLR) to combat the hybrid noise from the perspective of data recovery.
arXiv Detail & Related papers (2024-07-04T16:13:25Z) - Bayesian Inference of General Noise Model Parameters from Surface Code's Syndrome Statistics [0.0]
We propose general noise model Bayesian inference methods that integrate the surface code's tensor network simulator.
For stationary noise, where the noise parameters are constant over time, we propose a method based on Markov chain Monte Carlo.
For time-varying noise, which is a more realistic situation, we introduce another method based on sequential Monte Carlo.
arXiv Detail & Related papers (2024-06-13T10:26:04Z) - ROPO: Robust Preference Optimization for Large Language Models [59.10763211091664]
We propose an iterative alignment approach that integrates noise-tolerance and filtering of noisy samples without the aid of external models.
Experiments on three widely-used datasets with Mistral-7B and Llama-2-7B demonstrate that ROPO significantly outperforms existing preference alignment methods.
arXiv Detail & Related papers (2024-04-05T13:58:51Z) - A Corrected Expected Improvement Acquisition Function Under Noisy Observations [22.63212972670109]
Sequential optimization based on expected improvement (EI) is one of the most widely used policies in Bayesian optimization.
Many analytic EI-type methods neglect the uncertainty associated with the incumbent solution.
We propose a modification of EI that corrects its closed-form expression by incorporating the covariance information provided by the Gaussian Process (GP) model.
arXiv Detail & Related papers (2023-10-08T13:50:39Z) - Label Noise: Correcting the Forward-Correction [0.0]
Training neural network classifiers on datasets with label noise poses a risk of overfitting them to the noisy labels.
We propose an approach to tackling overfitting caused by label noise: imposing a lower bound on the training loss.
arXiv Detail & Related papers (2023-07-24T19:41:19Z) - Latent Class-Conditional Noise Model [54.56899309997246]
We introduce a Latent Class-Conditional Noise model (LCCN) to parameterize the noise transition under a Bayesian framework.
We then deduce a dynamic label regression method for LCCN, whose Gibbs sampler allows us to efficiently infer the latent true labels.
Our approach safeguards the stable update of the noise transition, avoiding the arbitrary tuning from a mini-batch of samples required by previous methods.
arXiv Detail & Related papers (2023-02-19T15:24:37Z) - Optimizing the Noise in Self-Supervised Learning: from Importance Sampling to Noise-Contrastive Estimation [80.07065346699005]
It is widely assumed that the optimal noise distribution should be made equal to the data distribution, as in Generative Adversarial Networks (GANs).
We turn to Noise-Contrastive Estimation which grounds this self-supervised task as an estimation problem of an energy-based model of the data.
We soberly conclude that the optimal noise may be hard to sample from, and the gain in efficiency can be modest compared to choosing the noise distribution equal to the data's.
arXiv Detail & Related papers (2023-01-23T19:57:58Z) - Partial Identification with Noisy Covariates: A Robust Optimization Approach [94.10051154390237]
Causal inference from observational datasets often relies on measuring and adjusting for covariates.
We show that this robust optimization approach can extend a wide range of causal adjustment methods to perform partial identification.
Across synthetic and real datasets, we find that this approach provides ATE bounds with a higher coverage probability than existing methods.
arXiv Detail & Related papers (2022-02-22T04:24:26Z) - Shape Matters: Understanding the Implicit Bias of the Noise Covariance [76.54300276636982]
Noise in gradient descent provides a crucial implicit regularization effect for training overparameterized models.
We show that parameter-dependent noise -- induced by mini-batches or label perturbation -- is far more effective than Gaussian noise.
Our analysis reveals that parameter-dependent noise introduces a bias towards local minima with smaller noise variance, whereas spherical Gaussian noise does not.
arXiv Detail & Related papers (2020-06-15T18:31:02Z)