Robust Mean Estimation in High Dimensions via $\ell_0$ Minimization
- URL: http://arxiv.org/abs/2008.09239v1
- Date: Fri, 21 Aug 2020 00:19:48 GMT
- Title: Robust Mean Estimation in High Dimensions via $\ell_0$ Minimization
- Authors: Jing Liu, Aditya Deshmukh, Venugopal V. Veeravalli
- Abstract summary: We study the robust mean estimation problem in high dimensions, where an $\alpha < 0.5$ fraction of the data points can be arbitrarily corrupted.
Motivated by compressive sensing, we formulate the robust mean estimation problem as the minimization of the $\ell_0$-`norm' of the outlier indicator vector, with computationally tractable $\ell_1$ and $\ell_p$ $(0<p<1)$ relaxations.
Both synthetic and real data experiments demonstrate that the proposed algorithms significantly outperform state-of-the-art robust mean estimation methods.
- Score: 21.65637588606572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the robust mean estimation problem in high dimensions, where $\alpha
<0.5$ fraction of the data points can be arbitrarily corrupted. Motivated by
compressive sensing, we formulate the robust mean estimation problem as the
minimization of the $\ell_0$-`norm' of the outlier indicator vector, under
second moment constraints on the inlier data points. We prove that the global
minimum of this objective is order optimal for the robust mean estimation
problem, and we propose a general framework for minimizing the objective. We
further leverage the $\ell_1$ and $\ell_p$ $(0<p<1)$ minimization techniques
in compressive sensing to provide computationally tractable solutions to the
$\ell_0$ minimization problem. Both synthetic and real data experiments
demonstrate that the proposed algorithms significantly outperform
state-of-the-art robust mean estimation methods.
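The outlier-indicator formulation above can be sketched in a few lines. The following is an illustrative hard-thresholding scheme, not the authors' exact algorithm: the binary "outlier indicator" with $\|h\|_0 \le \alpha n$ is approximated by alternately re-estimating the mean from presumed inliers and flagging the farthest points as outliers. Function and variable names are assumptions for illustration.

```python
import numpy as np

def robust_mean_l0_sketch(X, alpha, n_iter=20):
    """Illustrative alternating scheme inspired by the outlier-indicator
    formulation: discard at most alpha * n points (the 'support' of the
    indicator vector) and estimate the mean from the rest.

    This is a hard-thresholding sketch, not the paper's exact algorithm.
    """
    n, _ = X.shape
    k = int(alpha * n)               # budget of points we may discard
    mu = X.mean(axis=0)              # start from the plain (non-robust) mean
    for _ in range(n_iter):
        # Distance of each point to the current mean estimate.
        dist = np.linalg.norm(X - mu, axis=1)
        mask = np.ones(n, dtype=bool)
        if k > 0:
            mask[np.argsort(dist)[n - k:]] = False  # drop the k farthest points
        mu = X[mask].mean(axis=0)    # re-estimate from presumed inliers
    return mu

# Toy usage: 90% inliers around 0, 10% gross outliers at 50.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(90, 5)),
               np.full((10, 5), 50.0)])
mu_naive = X.mean(axis=0)                      # pulled toward the outliers
mu_robust = robust_mean_l0_sketch(X, alpha=0.15)
```

On this toy data the naive mean is dragged roughly a tenth of the way toward the corrupted points, while the alternating scheme recovers a mean close to the origin.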
Related papers
- Active Subsampling for Measurement-Constrained M-Estimation of Individualized Thresholds with High-Dimensional Data [3.1138411427556445]
In measurement-constrained problems, despite the availability of large datasets, we may only be able to afford observing the labels on a small portion of the large dataset.
This poses the critical question of which data points are most beneficial to label given a budget constraint.
In this paper, we focus on the estimation of the optimal individualized threshold in a measurement-constrained M-estimation framework.
arXiv Detail & Related papers (2024-11-21T00:21:17Z) - Regret Minimization and Statistical Inference in Online Decision Making with High-dimensional Covariates [7.21848268647674]
We integrate the $\varepsilon$-greedy bandit algorithm for decision-making with a hard thresholding algorithm for estimating sparse bandit parameters.
Under a margin condition, our method achieves either $O(T^{1/2})$ regret or classical $O(T^{1/2})$-consistent inference.
arXiv Detail & Related papers (2024-11-10T01:47:11Z) - Robust Second-Order Nonconvex Optimization and Its Application to Low Rank Matrix Sensing [47.32366699399839]
Finding an approximate second-order stationary point (SOSP) is a well-studied and fundamental problem.
In this paper we apply our framework to the problem of low rank matrix sensing.
arXiv Detail & Related papers (2024-03-12T01:27:44Z) - Robust Sparse Mean Estimation via Incremental Learning [15.536082641659423]
In this paper, we study the problem of robust mean estimation, where the goal is to estimate a $k$-sparse mean from a collection of partially corrupted samples.
We present a simple mean estimator that overcomes both challenges under moderate conditions.
Our method does not need any prior knowledge of the sparsity level $k$.
arXiv Detail & Related papers (2023-05-24T16:02:28Z) - Estimating the minimizer and the minimum value of a regression function
under passive design [72.85024381807466]
We propose a new method for estimating the minimizer $\boldsymbol{x}^*$ and the minimum value $f^*$ of a smooth and strongly convex regression function $f$.
We derive non-asymptotic upper bounds for the quadratic risk and optimization error of $\boldsymbol{z}_n$, and for the risk of estimating $f^*$.
arXiv Detail & Related papers (2022-11-29T18:38:40Z) - Best Policy Identification in Linear MDPs [70.57916977441262]
We investigate the problem of best policy identification in discounted linear Markov Decision Processes in the fixed confidence setting under a generative model.
The lower bound, obtained as the solution of an intricate non-convex optimization program, can be used as the starting point to devise such algorithms.
arXiv Detail & Related papers (2022-08-11T04:12:50Z) - Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z) - Robust estimation via generalized quasi-gradients [28.292300073453877]
We show why many recently proposed robust estimation problems are efficiently solvable.
We identify the existence of "generalized quasi-gradients"
We show that generalized quasi-gradients exist and construct efficient algorithms.
arXiv Detail & Related papers (2020-05-28T15:14:33Z) - Breaking the Sample Size Barrier in Model-Based Reinforcement Learning
with a Generative Model [50.38446482252857]
This paper is concerned with the sample efficiency of reinforcement learning, assuming access to a generative model (or simulator)
We first consider $\gamma$-discounted infinite-horizon Markov decision processes (MDPs) with state space $\mathcal{S}$ and action space $\mathcal{A}$.
We prove that a plain model-based planning algorithm suffices to achieve minimax-optimal sample complexity given any target accuracy level.
arXiv Detail & Related papers (2020-05-26T17:53:18Z) - High-Dimensional Robust Mean Estimation via Gradient Descent [73.61354272612752]
We show that the problem of robust mean estimation in the presence of a constant adversarial fraction can be solved by gradient descent.
Our work establishes an intriguing connection between non-convex optimization and robust statistics.
arXiv Detail & Related papers (2020-05-04T10:48:04Z)
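The gradient-descent view in the last entry can be illustrated with a simplified soft-weighting scheme: down-weight the points that contribute most to the top eigenvalue of the weighted covariance, since a large top eigenvalue signals remaining outliers. The function name and the multiplicative update below are illustrative assumptions, not the cited paper's exact method or guarantees.

```python
import numpy as np

def robust_mean_soft_weights(X, steps=30, lr=2.0):
    """Soft-weighting sketch in the spirit of gradient-based robust mean
    estimation: points whose projection onto the top eigenvector of the
    weighted covariance is large are multiplicatively down-weighted.

    Illustrative only; not the cited paper's exact algorithm.
    """
    n, _ = X.shape
    w = np.full(n, 1.0 / n)               # uniform weights over all points
    for _ in range(steps):
        mu = w @ X                        # current weighted mean
        Xc = X - mu
        C = (Xc * w[:, None]).T @ Xc      # weighted covariance
        _, vecs = np.linalg.eigh(C)       # eigenvalues in ascending order
        v = vecs[:, -1]                   # top eigenvector
        score = (Xc @ v) ** 2             # contribution to the top eigenvalue
        w = w * np.exp(-lr * score / score.max())  # penalize large contributors
        w = w / w.sum()                   # renormalize to the simplex
    return w @ X

# Toy usage: 90% inliers around 0, 10% gross outliers at 50.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, size=(90, 5)),
               np.full((10, 5), 50.0)])
mu_hat = robust_mean_soft_weights(X)
```

The outliers dominate the top-eigenvalue scores, so their weights shrink geometrically and the weighted mean settles near the inlier mean.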
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.