Optimal Robust Linear Regression in Nearly Linear Time
- URL: http://arxiv.org/abs/2007.08137v1
- Date: Thu, 16 Jul 2020 06:44:44 GMT
- Title: Optimal Robust Linear Regression in Nearly Linear Time
- Authors: Yeshwanth Cherapanamjeri, Efe Aras, Nilesh Tripuraneni, Michael I.
Jordan, Nicolas Flammarion, Peter L. Bartlett
- Abstract summary: We study high-dimensional robust linear regression, where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$, an $\eta$ fraction of which have been adversarially corrupted.
We propose estimators for this problem under two settings: (i) $X$ is L4-L2 hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number, and $\epsilon$ has bounded variance, and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is sub-Gaussian.
- Score: 97.11565882347772
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of high-dimensional robust linear regression where a
learner is given access to $n$ samples from the generative model $Y = \langle
X,w^* \rangle + \epsilon$ (with $X \in \mathbb{R}^d$ and $\epsilon$
independent), in which an $\eta$ fraction of the samples have been
adversarially corrupted. We propose estimators for this problem under two
settings: (i) $X$ is L4-L2 hypercontractive, $\mathbb{E} [XX^\top]$ has bounded
condition number and $\epsilon$ has bounded variance and (ii) $X$ is
sub-Gaussian with identity second moment and $\epsilon$ is sub-Gaussian. In
both settings, our estimators: (a) Achieve optimal sample complexities and
recovery guarantees up to log factors and (b) Run in near linear time
($\tilde{O}(nd / \eta^6)$). Prior to our work, polynomial time algorithms
achieving near optimal sample complexities were only known in the setting where
$X$ is Gaussian with identity covariance and $\epsilon$ is Gaussian, and no
linear time estimators were known for robust linear regression in any setting.
Our estimators and their analysis leverage recent developments in the
construction of faster algorithms for robust mean estimation to improve
runtimes, and refined concentration of measure arguments alongside Gaussian
rounding techniques to improve statistical sample complexities.
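As a concrete illustration of the high-level recipe (robust mean estimation applied to gradients inside a first-order method), here is a minimal Python sketch: run gradient descent on the square loss, but replace the empirical mean of the per-sample gradients with a robust mean estimate so the $\eta$-fraction of corrupted samples cannot steer the iterates. The spectral `filtered_mean` routine, its removal schedule, and all constants are illustrative stand-ins of our own, not the estimator constructed in the paper.

```python
import numpy as np

def filtered_mean(G, eta, rounds=10):
    """Crude spectral filtering: repeatedly drop the points with the largest
    squared projection onto the top eigenvector of the centered second-moment
    matrix, where corruptions must concentrate in order to shift the mean.
    Illustrative stand-in, not the paper's fast robust mean estimator."""
    G = G.copy()
    for _ in range(rounds):
        mu = G.mean(axis=0)
        C = (G - mu).T @ (G - mu) / len(G)
        v = np.linalg.eigh(C)[1][:, -1]        # top eigenvector
        scores = ((G - mu) @ v) ** 2
        k = max(1, int(eta * len(G) / rounds))
        G = G[np.argsort(scores)[:-k]]         # remove the k most outlying
    return G.mean(axis=0)

def robust_linear_regression(X, Y, eta, steps=100, lr=0.5):
    """Gradient descent on the square loss with robustly aggregated gradients."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grads = (X @ w - Y)[:, None] * X       # per-sample gradients
        w -= lr * filtered_mean(grads, eta)
    return w

# Toy usage: an eta-fraction of labels grossly corrupted.
rng = np.random.default_rng(0)
n, d, eta = 2000, 20, 0.1
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
Y = X @ w_star + 0.1 * rng.normal(size=n)
Y[: int(eta * n)] += 50.0                      # adversarial label shifts
w_hat = robust_linear_regression(X, Y, eta)
print(np.linalg.norm(w_hat - w_star))          # typically far below the plain OLS error here
```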
Related papers
- Efficient Statistics With Unknown Truncation, Polynomial Time Algorithms, Beyond Gaussians [7.04316974339151]
We study the estimation of distributional parameters when samples are shown only if they fall in some unknown set.
We develop tools that may be of independent interest, including a reduction from PAC learning with positive and unlabeled samples to PAC learning with positive and negative samples.
arXiv Detail & Related papers (2024-10-02T15:21:07Z)
- Iterative thresholding for non-linear learning in the strong $\varepsilon$-contamination model [3.309767076331365]
We derive approximation bounds for learning single neuron models using thresholded descent.
We also study the linear regression problem, where $\sigma(\mathbf{x}) = \mathbf{x}$.
arXiv Detail & Related papers (2024-09-05T16:59:56Z)
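Read as a sketch, the linear case admits a very short implementation: gradient descent for least squares where each step aggregates per-sample gradients with a coordinate-wise trimmed mean. The trimming rule below is one simple instantiation of a thresholding step, chosen by us for illustration; it is not necessarily the operator analyzed in that paper.

```python
import numpy as np

def trimmed_mean(G, alpha):
    """Coordinate-wise trimmed mean: sort each coordinate and discard the
    alpha-fraction of smallest and largest values before averaging."""
    n = len(G)
    k = int(alpha * n)
    return np.sort(G, axis=0)[k:n - k].mean(axis=0)

def thresholded_descent(X, Y, alpha, steps=100, lr=0.5):
    """Least-squares gradient descent whose aggregation step limits the
    influence of a strong alpha-fraction of adversarial samples."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        grads = (X @ w - Y)[:, None] * X   # per-sample gradients
        w -= lr * trimmed_mean(grads, alpha)
    return w
```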
- Projection by Convolution: Optimal Sample Complexity for Reinforcement Learning in Continuous-Space MDPs [56.237917407785545]
We consider the problem of learning an $\varepsilon$-optimal policy in a general class of continuous-space Markov decision processes (MDPs) having smooth Bellman operators.
Key to our solution is a novel projection technique based on ideas from harmonic analysis.
Our result bridges the gap between two popular but conflicting perspectives on continuous-space MDPs.
arXiv Detail & Related papers (2024-05-10T09:58:47Z)
- Robust Sparse Estimation for Gaussians with Optimal Error under Huber Contamination [42.526664955704746]
We study sparse estimation tasks in Huber's contamination model with a focus on mean estimation, PCA, and linear regression.
For each of these tasks, we give the first sample and computationally efficient robust estimators with optimal error guarantees.
At the technical level, we develop a novel multidimensional filtering method in the sparse regime that may find other applications.
arXiv Detail & Related papers (2024-03-15T15:51:27Z)
- Near-Optimal Algorithms for Gaussians with Huber Contamination: Mean Estimation and Linear Regression [44.13655983242414]
We design the first sample near-optimal and almost linear-time algorithms with optimal error guarantees.
For robust linear regression, we give the first algorithm with sample complexity $n = \tilde{O}(d/\epsilon^2)$ and almost linear runtime that approximates the target regressor within $\ell_2$-error $O(\epsilon)$.
This is the first sample- and time-efficient algorithm achieving the optimal error guarantee, answering an open question in the literature.
arXiv Detail & Related papers (2023-12-04T00:31:16Z)
- A Unified Framework for Uniform Signal Recovery in Nonlinear Generative Compressed Sensing [68.80803866919123]
Under nonlinear measurements, most prior results are non-uniform, i.e., they hold with high probability for a fixed $\mathbf{x}^*$ rather than for all $\mathbf{x}^*$ simultaneously.
Our framework accommodates GCS with 1-bit/uniformly quantized observations and single index models as canonical examples.
We also develop a concentration inequality that produces tighter bounds for product processes whose index sets have low metric entropy.
arXiv Detail & Related papers (2023-09-25T17:54:19Z)
- Stochastic Approximation Approaches to Group Distributionally Robust Optimization [96.26317627118912]
Group distributionally robust optimization (GDRO) seeks a model that performs well over $m$ different distributions simultaneously.
We apply online learning techniques to reduce the number of samples required in each round from $m$ to $1$, keeping the same sample complexity.
We also give a novel formulation of weighted GDRO, which allows us to derive distribution-dependent convergence rates.
arXiv Detail & Related papers (2023-02-18T09:24:15Z)
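For intuition, a hedged sketch of the one-sample-per-round idea: view GDRO as $\min_w \max_{q \in \Delta_m} \sum_{i=1}^m q_i R_i(w)$, draw a single group from the current weights $q$ (which already makes the plain gradient unbiased for the $w$-player), and update $q$ with an importance-weighted multiplicative-weights step. The `loss`, `grad`, and `sample` interfaces and the step sizes are hypothetical choices of ours, not the paper's exact algorithm.

```python
import numpy as np

def gdro_one_sample(loss, grad, sample, m, d, T, lr_w=0.05, lr_q=0.01, seed=0):
    """Single-sample-per-round stochastic approximation for
    min_w max_{q in simplex} sum_i q_i R_i(w)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    q = np.ones(m) / m
    for _ in range(T):
        i = rng.choice(m, p=q)        # one group per round, drawn from q
        x, y = sample(i, rng)         # a single sample from that group
        w -= lr_w * grad(w, x, y)     # unbiased for grad_w of sum_i q_i R_i(w)
        r = np.zeros(m)
        r[i] = loss(w, x, y) / q[i]   # importance-weighted risk estimate
        q *= np.exp(lr_q * r)         # multiplicative weights on groups
        q /= q.sum()
    return w, q
```

The importance weight $1/q_i$ keeps the $q$-player's gradient estimate unbiased despite seeing only one group, at the cost of higher variance; small step sizes keep the sketch stable.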
- Robust Sub-Gaussian Principal Component Analysis and Width-Independent Schatten Packing [22.337756118270217]
We develop two methods for a fundamental statistical task: given an $\epsilon$-corrupted set of $n$ samples from a $d$-dimensional sub-Gaussian distribution, approximate the top eigenvector of its covariance matrix.
Our first robust algorithm performs iterative filtering, returns an approximate eigenvector, and is based on a simple filtering approach.
Our second, which attains a slightly worse approximation factor, runs in nearly-linear time and iterations under a mild spectral gap assumption.
arXiv Detail & Related papers (2020-06-12T07:45:38Z)
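A rough sketch of the filtering idea: alternate between computing the top eigenvector of the empirical second moment and discarding the samples that project most heavily onto it, which is where corruptions must concentrate to distort the top eigenvalue. The removal schedule and constants below are our own illustrative choices, not the paper's algorithm.

```python
import numpy as np

def robust_top_eigvec(X, eps, rounds=10):
    """Filtering sketch for robust PCA on an eps-corrupted sample set X (n x d):
    iteratively prune the points with the largest squared projection onto the
    current top eigenvector, then return the top eigenvector of what remains."""
    X = X.copy()
    for _ in range(rounds):
        M = X.T @ X / len(X)                 # empirical second moment
        v = np.linalg.eigh(M)[1][:, -1]      # current top eigenvector
        scores = (X @ v) ** 2
        k = max(1, int(eps * len(X) / rounds))
        X = X[np.argsort(scores)[:-k]]       # drop the k most outlying points
    M = X.T @ X / len(X)
    return np.linalg.eigh(M)[1][:, -1]
```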
- Sample Complexity of Asynchronous Q-Learning: Sharper Analysis and Variance Reduction [63.41789556777387]
Asynchronous Q-learning aims to learn the optimal action-value function (or Q-function) of a Markov decision process (MDP)
We show that the number of samples needed to yield an entrywise $\varepsilon$-accurate estimate of the Q-function is at most on the order of $\frac{1}{\mu_{\min}(1-\gamma)^5\varepsilon^2} + \frac{t_{\mathrm{mix}}}{\mu_{\min}(1-\gamma)}$ up to some logarithmic factor.
arXiv Detail & Related papers (2020-06-04T17:51:00Z)
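For a sense of scale, a hypothetical plug-in of this bound with illustrative parameter values (constants and logarithmic factors dropped):

```python
# Illustrative evaluation of the asynchronous Q-learning sample bound.
mu_min, gamma, eps, t_mix = 0.01, 0.9, 0.1, 50  # hypothetical values
bound = 1 / (mu_min * (1 - gamma) ** 5 * eps ** 2) + t_mix / (mu_min * (1 - gamma))
print(f"{bound:.2e}")  # ~1.00e+09: the first term dominates for gamma near 1
```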
- Agnostic Learning of a Single Neuron with Gradient Descent [92.7662890047311]
We consider the problem of learning the best-fitting single neuron as measured by the expected square loss.
For the ReLU activation, our population risk guarantee is $O(\mathsf{OPT}^{1/2}) + \epsilon$.
arXiv Detail & Related papers (2020-05-29T07:20:35Z)
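A minimal sketch of that setup: gradient descent on the empirical square loss of a single ReLU neuron, with the standard surrogate $\mathrm{relu}'(z) = \mathbb{1}\{z > 0\}$. The random initialization and step size are our own choices (initializing at zero would leave the surrogate gradient identically zero); this illustrates the setting rather than reproducing the paper's analysis.

```python
import numpy as np

def gd_single_relu_neuron(X, Y, steps=500, lr=0.1, seed=0):
    """Gradient descent on (1/2n) * sum_i (relu(<w, x_i>) - y_i)^2,
    using the surrogate derivative relu'(z) = 1{z > 0}."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.normal(size=d) / np.sqrt(d)   # random init: w = 0 gives a zero gradient
    for _ in range(steps):
        z = X @ w
        resid = np.maximum(z, 0.0) - Y    # relu(z) - y
        w -= lr * ((resid * (z > 0)) @ X) / n
    return w
```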