Robust Compressed Sensing using Generative Models
- URL: http://arxiv.org/abs/2006.09461v3
- Date: Wed, 23 Jun 2021 06:14:00 GMT
- Title: Robust Compressed Sensing using Generative Models
- Authors: Ajil Jalal, Liu Liu, Alexandros G. Dimakis, Constantine Caramanis
- Abstract summary: In this paper we propose an algorithm inspired by the Median-of-Means (MOM).
Our algorithm guarantees recovery for heavy-tailed data, even in the presence of outliers.
- Score: 98.64228459705859
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of compressed sensing is to estimate a high dimensional vector from
an underdetermined system of noisy linear equations. In analogy to classical
compressed sensing, here we assume a generative model as a prior, that is, we
assume the vector is represented by a deep generative model $G: \mathbb{R}^k
\rightarrow \mathbb{R}^n$. Classical recovery approaches such as empirical risk
minimization (ERM) are guaranteed to succeed when the measurement matrix is
sub-Gaussian. However, when the measurement matrix and measurements are
heavy-tailed or have outliers, recovery may fail dramatically. In this paper we
propose an algorithm inspired by the Median-of-Means (MOM). Our algorithm
guarantees recovery for heavy-tailed data, even in the presence of outliers.
Theoretically, our results show our novel MOM-based algorithm enjoys the same
sample complexity guarantees as ERM under sub-Gaussian assumptions. Our
experiments validate both aspects of our claims: other algorithms are indeed
fragile and fail under heavy-tailed and/or corrupted data, while our approach
exhibits the predicted robustness.
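To make the MOM idea concrete, here is a minimal Python sketch. It replaces the deep generative model $G$ with a fixed linear map, draws heavy-tailed (Student-t) measurements with a few gross outliers, and takes gradient steps on the latent code using only the batch whose loss is the median. The batch count, step size, and toy generator are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

k, n, m = 10, 100, 200
W = rng.normal(size=(n, k))

def G(z):
    """Toy stand-in for the deep generative model G: R^k -> R^n (a fixed linear map)."""
    return W @ z

# Signal in the range of G; heavy-tailed measurement matrix and noise, plus gross outliers.
z_true = rng.normal(size=k)
x_true = G(z_true)
A = rng.standard_t(df=2, size=(m, n))
y = A @ x_true + rng.standard_t(df=2, size=m)
y[:5] += 100.0  # a few grossly corrupted measurements

def mom_gradient(z, num_batches=10):
    """Gradient of the squared loss on the batch whose loss is the median over random batches."""
    batches = np.array_split(rng.permutation(m), num_batches)
    losses = [np.mean((A[b] @ G(z) - y[b]) ** 2) for b in batches]
    b = batches[int(np.argsort(losses)[num_batches // 2])]
    r = A[b] @ G(z) - y[b]
    return (2.0 / len(b)) * (W.T @ (A[b].T @ r))

z = np.zeros(k)
for _ in range(3000):
    z -= 1e-4 * mom_gradient(z)

print("relative error:", np.linalg.norm(G(z) - x_true) / np.linalg.norm(x_true))
```

Because the gradient is taken on the median-loss batch, a handful of corrupted or heavy-tailed measurements cannot dominate any single update, which is the robustness property the abstract describes.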
Related papers
- Probabilistic Unrolling: Scalable, Inverse-Free Maximum Likelihood Estimation for Latent Gaussian Models [69.22568644711113]
We introduce probabilistic unrolling, a method that combines Monte Carlo sampling with iterative linear solvers to circumvent matrix inversions.
Our theoretical analyses reveal that unrolling and backpropagation through the iterations of the solver can accelerate gradient estimation for maximum likelihood estimation.
In experiments on simulated and real data, we demonstrate that probabilistic unrolling learns latent Gaussian models up to an order of magnitude faster than gradient EM, with minimal losses in model performance.
arXiv Detail & Related papers (2023-06-05T21:08:34Z)
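As a rough illustration of the inverse-free ingredient in the Probabilistic Unrolling entry above, the sketch below estimates tr(K^{-1}) (the gradient of log det K with respect to a scale parameter) using Rademacher probes and conjugate-gradient solves instead of an explicit inverse. The covariance parameterization is a made-up toy, not the paper's model.

```python
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)

# Toy covariance K(theta) = theta * I + B B^T (a hypothetical latent Gaussian model).
n = 200
B = rng.normal(size=(n, 5))

def K(theta):
    return theta * np.eye(n) + B @ B.T

# Since dK/dtheta = I, the gradient of log det K(theta) is tr(K^{-1}).
# Estimate it with Hutchinson probes and conjugate-gradient solves -- no matrix inversion.
def grad_logdet(theta, num_probes=50):
    Kmat = K(theta)
    est = 0.0
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
        x, _ = cg(Kmat, v)                   # iterative solve of K x = v
        est += v @ x                         # v^T K^{-1} v, an unbiased trace estimate
    return est / num_probes

theta = 0.5
print("Monte-Carlo/CG estimate:", grad_logdet(theta))
print("exact trace:           ", np.trace(np.linalg.inv(K(theta))))
```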
- Robust Matrix Completion with Heavy-tailed Noise [0.5837881923712392]
This paper studies low-rank matrix completion in the presence of heavy-tailed, possibly asymmetric noise.
It adopts an adaptive Huber loss to accommodate heavy-tailed noise, which is robust against large and possibly asymmetric errors.
We prove that, under merely a second moment condition on the error, the Euclidean error decays geometrically fast until it reaches a minimax-optimal statistical estimation error.
arXiv Detail & Related papers (2022-06-09T04:48:48Z)
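A minimal sketch of the Huber-loss idea from the matrix-completion entry above: residuals on observed entries pass through a Huber penalty, so heavy-tailed errors contribute only bounded gradients to the factor updates. The rank, step size, and fixed threshold tau are illustrative choices; the paper tunes the robustification parameter adaptively.

```python
import numpy as np

rng = np.random.default_rng(0)

def huber_grad(r, tau):
    """Derivative of the Huber loss w.r.t. the residual: linear up to tau, then clipped."""
    return np.clip(r, -tau, tau)

# Toy matrix completion: rank-3 ground truth, ~30% observed entries, heavy-tailed noise.
d1, d2, rank = 50, 40, 3
M = rng.normal(size=(d1, rank)) @ rng.normal(size=(rank, d2))
mask = rng.random((d1, d2)) < 0.3
Y = np.where(mask, M + rng.standard_t(df=2, size=(d1, d2)), 0.0)

U = rng.normal(size=(d1, rank))
V = rng.normal(size=(d2, rank))
tau = 1.0  # fixed robustification threshold here; chosen adaptively in the paper
for _ in range(1000):
    R = np.where(mask, U @ V.T - Y, 0.0)
    Gr = huber_grad(R, tau)  # bounded gradients, so outliers cannot dominate a step
    U, V = U - 0.02 * Gr @ V, V - 0.02 * Gr.T @ U
print("relative error:", np.linalg.norm(U @ V.T - M) / np.linalg.norm(M))
```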
- Global Convergence of Sub-gradient Method for Robust Matrix Recovery: Small Initialization, Noisy Measurements, and Over-parameterization [4.7464518249313805]
The sub-gradient method (SubGM) is used to recover a low-rank matrix from a limited number of measurements.
We show that SubGM converges to the true solution, even under arbitrarily large and arbitrarily dense noise values.
arXiv Detail & Related papers (2022-02-17T17:50:04Z)
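A small sketch of the sub-gradient method described in the entry above, under simplifying assumptions: a rank-2 positive semidefinite target, Gaussian sensing matrices, an l1 measurement loss, a small random initialization, and a constant step size (the paper analyzes more general step schedules and over-parameterization).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy robust recovery of a low-rank PSD matrix X* = U* U*^T from corrupted linear measurements.
d, r, m = 30, 2, 400
Ustar = rng.normal(size=(d, r))
Xstar = Ustar @ Ustar.T
A = rng.normal(size=(m, d, d))
y = np.einsum('mij,ij->m', A, Xstar)
corrupt = rng.random(m) < 0.3
y[corrupt] += rng.normal(scale=50.0, size=corrupt.sum())  # dense, large corruptions

# Sub-gradient method on the l1 loss f(U) = (1/m) * sum_i |<A_i, U U^T> - y_i|,
# started from a small random initialization.
U = 1e-3 * rng.normal(size=(d, r))
step = 0.01
for _ in range(2000):
    resid = np.einsum('mij,ij->m', A, U @ U.T) - y
    Gmat = np.einsum('m,mij->ij', np.sign(resid), A) / m  # subgradient w.r.t. the matrix variable
    U -= step * (Gmat + Gmat.T) @ U                       # chain rule through the factorization
print("relative error:", np.linalg.norm(U @ U.T - Xstar) / np.linalg.norm(Xstar))
```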
- Robust Algorithms for GMM Estimation: A Finite Sample Viewpoint [30.839245814393724]
A generic method of solving moment conditions is the Generalized Method of Moments (GMM).
We develop a GMM estimator that can tolerate a constant fraction of adversarially corrupted samples and has an $\ell_2$ recovery guarantee of $O(\sqrt{\epsilon})$.
Our algorithm and assumptions apply to instrumental variables linear and logistic regression.
arXiv Detail & Related papers (2021-10-06T21:06:22Z)
- A Precise Performance Analysis of Support Vector Regression [105.94855998235232]
We study the hard and soft support vector regression techniques applied to a set of $n$ linear measurements.
Our results are then used to optimally tune the parameters intervening in the design of hard and soft support vector regression algorithms.
arXiv Detail & Related papers (2021-05-21T14:26:28Z)
- Continual Learning with Fully Probabilistic Models [70.3497683558609]
We present an approach for continual learning based on fully probabilistic (or generative) models of machine learning.
We propose a pseudo-rehearsal approach using a Gaussian Mixture Model (GMM) instance for both generator and classifier functionalities.
We show that the resulting Gaussian Mixture Replay (GMR) approach achieves state-of-the-art performance on common class-incremental learning problems at very competitive time and memory complexity.
arXiv Detail & Related papers (2021-04-19T12:26:26Z)
- Shaping Deep Feature Space towards Gaussian Mixture for Visual Classification [74.48695037007306]
We propose a Gaussian mixture (GM) loss function for deep neural networks for visual classification.
With a classification margin and a likelihood regularization, the GM loss facilitates both high classification performance and accurate modeling of the feature distribution.
The proposed model can be implemented easily and efficiently without using extra trainable parameters.
arXiv Detail & Related papers (2020-11-18T03:32:27Z)
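A compact sketch of a Gaussian-mixture style loss like the one described in the entry above, written for plain NumPy feature vectors: logits are negative squared distances to per-class means (identity covariance assumed), a multiplicative margin enlarges the true-class distance, and a likelihood term pulls features toward their class mean. The shapes, margin, and weight values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def gm_loss(features, labels, means, lam=0.1, margin=0.3):
    """Gaussian-mixture style classification loss with identity covariances (toy version)."""
    idx = np.arange(len(labels))
    # squared distance from each feature to each class mean: shape (batch, num_classes)
    d2 = ((features[:, None, :] - means[None, :, :]) ** 2).sum(axis=-1)
    d2_margin = d2.copy()
    d2_margin[idx, labels] *= 1.0 + margin  # enlarge the true-class distance (classification margin)
    logits = -0.5 * d2_margin
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    classification = -log_probs[idx, labels].mean()  # posterior cross-entropy term
    likelihood = 0.5 * d2[idx, labels].mean()        # pulls features toward their class mean
    return classification + lam * likelihood

features = rng.normal(size=(8, 4))   # e.g. penultimate-layer features of a network
means = rng.normal(size=(3, 4))      # one (learnable) mean per class
labels = rng.integers(0, 3, size=8)
print(gm_loss(features, labels, means))
```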
- Online and Distribution-Free Robustness: Regression and Contextual Bandits with Huber Contamination [29.85468294601847]
We revisit two classic high-dimensional online learning problems, namely linear regression and contextual bandits.
We show that our algorithms succeed where conventional methods fail.
arXiv Detail & Related papers (2020-10-08T17:59:05Z)
- Robust subgaussian estimation with VC-dimension [0.0]
This work proposes a new general way to bound the excess risk for MOM estimators.
The core technique is the use of VC-dimension (instead of Rademacher complexity) to measure the statistical complexity.
arXiv Detail & Related papers (2020-04-24T13:21:09Z)
- Sample Complexity Bounds for 1-bit Compressive Sensing and Binary Stable Embeddings with Generative Priors [52.06292503723978]
Motivated by advances in compressive sensing with generative models, we study the problem of 1-bit compressive sensing with generative models.
We first consider noiseless 1-bit measurements and provide sample complexity bounds for approximate recovery under i.i.d. Gaussian measurements.
We demonstrate that the Binary $\epsilon$-Stable Embedding property, which characterizes the robustness of the reconstruction to measurement errors and noise, also holds for 1-bit compressive sensing with Lipschitz continuous generative models.
arXiv Detail & Related papers (2020-02-05T09:44:10Z)
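As a quick illustration of the 1-bit setting in the entry above, the sketch below uses a linear map as a stand-in generator, forms noiseless sign measurements, and recovers the direction with the classical one-step correlation estimator projected onto the generator's range. This is a simplification for intuition, not the estimator analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear stand-in for a generative prior G(z) = W z; only the direction of x is
# identifiable from 1-bit (sign) measurements.
k, n, m = 5, 80, 400
W = rng.normal(size=(n, k))
z_true = rng.normal(size=k)
x_true = W @ z_true
x_true /= np.linalg.norm(x_true)

A = rng.normal(size=(m, n))
y = np.sign(A @ x_true)  # noiseless i.i.d. Gaussian 1-bit measurements

x_lin = A.T @ y / m                                 # classical one-step correlation estimator
z_hat, *_ = np.linalg.lstsq(W, x_lin, rcond=None)   # project onto the range of the generator
x_hat = W @ z_hat
x_hat /= np.linalg.norm(x_hat)

print("cosine similarity with the true direction:", float(x_hat @ x_true))
```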
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.