Outlier-Robust Sparse Estimation via Non-Convex Optimization
- URL: http://arxiv.org/abs/2109.11515v1
- Date: Thu, 23 Sep 2021 17:38:24 GMT
- Title: Outlier-Robust Sparse Estimation via Non-Convex Optimization
- Authors: Yu Cheng, Ilias Diakonikolas, Daniel M. Kane, Rong Ge, Shivam Gupta,
Mahdi Soltanolkotabi
- Abstract summary: We explore the connection between outlier-robust high-dimensional statistics and non-convex optimization in the presence of sparsity constraints.
We develop novel and simple optimization formulations for these problems.
As a corollary, we obtain that any first-order method that efficiently converges to stationarity yields an efficient algorithm for these tasks.
- Score: 73.18654719887205
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore the connection between outlier-robust high-dimensional statistics
and non-convex optimization in the presence of sparsity constraints, with a
focus on the fundamental tasks of robust sparse mean estimation and robust
sparse PCA. We develop novel and simple optimization formulations for these
problems such that any approximate stationary point of the associated
optimization problem yields a near-optimal solution for the underlying robust
estimation task. As a corollary, we obtain that any first-order method that
efficiently converges to stationarity yields an efficient algorithm for these
tasks. The obtained algorithms are simple, practical, and succeed under broader
distributional assumptions compared to prior work.
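As a rough illustration of this first-order flavor, here is a minimal sketch of robust sparse mean estimation via a trimmed loss plus hard thresholding. The trimming rule, step size, and the name `robust_sparse_mean` are illustrative assumptions, not the paper's actual formulation or guarantees.

```python
import numpy as np

def robust_sparse_mean(X, k, eps, iters=100, lr=0.5):
    """Illustrative sketch only: a first-order iteration combining a
    trimmed squared loss (to downweight suspected outliers) with hard
    thresholding (to keep the estimate k-sparse). Not the paper's
    objective; it only conveys the "gradient steps plus sparsity" idea."""
    n, d = X.shape
    mu = np.zeros(d)
    m = int(np.ceil((1.0 - 2.0 * eps) * n))   # samples kept after trimming
    for _ in range(iters):
        resid = X - mu                         # per-sample residuals
        keep = np.argsort(np.linalg.norm(resid, axis=1))[:m]  # drop largest
        grad = -resid[keep].mean(axis=0)       # gradient of the trimmed loss
        mu = mu - lr * grad                    # first-order step
        small = np.argsort(np.abs(mu))[:d - k] # zero all but k largest entries
        mu[small] = 0.0
    return mu
```

This simple heuristic is much weaker than the paper's guarantees; the point of the paper is to engineer the objective so that any approximate stationary point is provably near-optimal.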
Related papers
- Low-Rank Extragradient Methods for Scalable Semidefinite Optimization [0.0]
We focus on high-dimensional and plausible settings in which the problem admits a low-rank solution.
We provide several theoretical results proving that, under these circumstances, the well-known Extragradient method converges to a solution of the constrained optimization problem.
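The extragradient step itself is easy to state; below is a generic sketch, assuming a callable monotone operator F and a projection onto the feasible set (the paper's low-rank semidefinite machinery is omitted).

```python
import numpy as np

def extragradient(F, project, x0, eta=0.1, iters=1000):
    """Generic extragradient iteration: take a look-ahead step, then
    update using the operator evaluated at the look-ahead point. F and
    project are assumed callables; the paper's SDP setting is not modeled."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_mid = project(x - eta * F(x))    # extrapolation (look-ahead) step
        x = project(x - eta * F(x_mid))    # update with look-ahead gradient
    return x

# Example: saddle point of f(u, v) = u * v, so F(x) = (v, -u).
# Plain gradient descent-ascent spirals outward here; extragradient
# converges to the solution (0, 0).
sol = extragradient(lambda x: np.array([x[1], -x[0]]),
                    lambda x: x, x0=np.array([1.0, 1.0]))
```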
arXiv Detail & Related papers (2024-02-14T10:48:00Z)
- Efficient Computation of Sparse and Robust Maximum Association Estimators [0.5156484100374059]
A combination of a Lagrangian algorithm and sparse descent steps is implemented to also include suitable constraints for inducing sparsity.
High-dimensional empirical examples underline the usefulness of this procedure.
arXiv Detail & Related papers (2023-11-29T11:57:50Z)
- Learning to Optimize with Stochastic Dominance Constraints [103.26714928625582]
In this paper, we develop a simple yet efficient approach for the problem of comparing uncertain quantities.
We recast inner optimization in the Lagrangian as a learning problem for surrogate approximation, which bypasses apparent intractability.
The proposed light-SD demonstrates superior performance on several representative problems ranging from finance to supply chain management.
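For intuition on the constraint itself, here is a textbook empirical check of second-order stochastic dominance between two equal-size samples; this is an assumed illustration, not the paper's light-SD surrogate.

```python
import numpy as np

def ssd_dominates(a, b):
    """Empirical second-order stochastic dominance for equal-size samples:
    a dominates b iff every partial sum of the sorted values of a weakly
    exceeds the corresponding partial sum for b (equivalently, every
    risk-averse expected utility prefers a). Textbook criterion, not the
    paper's learned surrogate."""
    a, b = np.sort(np.asarray(a)), np.sort(np.asarray(b))
    assert a.shape == b.shape, "sketch assumes equal sample sizes"
    return bool(np.all(np.cumsum(a) >= np.cumsum(b)))
```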
arXiv Detail & Related papers (2022-11-14T21:54:31Z)
- Exploring the Algorithm-Dependent Generalization of AUPRC Optimization with List Stability [107.65337427333064]
Optimization of the Area Under the Precision-Recall Curve (AUPRC) is a crucial problem in machine learning.
In this work, we present the first study of the algorithm-dependent generalization of AUPRC optimization.
Experiments on three image retrieval datasets speak to the effectiveness and soundness of our framework.
arXiv Detail & Related papers (2022-09-27T09:06:37Z)
- An Asymptotically Optimal Primal-Dual Incremental Algorithm for Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z)
- Adaptive Sampling for Best Policy Identification in Markov Decision Processes [79.4957965474334]
We investigate the problem of best-policy identification in discounted Markov Decision Processes (MDPs) when the learner has access to a generative model.
The advantages of state-of-the-art algorithms are discussed and illustrated.
arXiv Detail & Related papers (2020-09-28T15:22:24Z)
- High-Dimensional Robust Mean Estimation via Gradient Descent [73.61354272612752]
We show that the problem of robust mean estimation in the presence of a constant fraction of adversarial outliers can be solved by gradient descent.
Our work establishes an intriguing connection between non-convex optimization and robust statistics.
arXiv Detail & Related papers (2020-05-04T10:48:04Z)
- Bayesian Optimization with Output-Weighted Optimal Sampling [0.0]
We advocate the use of the likelihood ratio to guide the search algorithm towards regions of the input space where the objective function to be minimized assumes abnormally small values.
The "likelihood-weighted" acquisition functions introduced in this work are found to outperform their unweighted counterparts in a number of applications.
arXiv Detail & Related papers (2020-04-22T14:38:39Z)
- Stochastic Optimization for Regularized Wasserstein Estimators [10.194798773447879]
We introduce an algorithm to solve a regularized version of the problem of computing Wasserstein estimators, with a time per step that is sublinear in the natural dimensions of the problem.
We show that this algorithm can be extended to other tasks, including estimation of Wasserstein barycenters.
arXiv Detail & Related papers (2020-02-20T12:04:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.