Robust Methods for High-Dimensional Linear Learning
- URL: http://arxiv.org/abs/2208.05447v2
- Date: Mon, 29 May 2023 07:22:21 GMT
- Title: Robust Methods for High-Dimensional Linear Learning
- Authors: Ibrahim Merad and Stéphane Gaïffas
- Abstract summary: We propose statistically robust and computationally efficient linear learning methods in the high-dimensional batch setting.
We instantiate our framework on several applications including vanilla sparse, group-sparse and low-rank matrix recovery.
For vanilla $s$-sparsity, we are able to reach the $s\log(d)/n$ rate under heavy-tails and $\eta$-corruption.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose statistically robust and computationally efficient linear learning
methods in the high-dimensional batch setting, where the number of features $d$
may exceed the sample size $n$. We employ, in a generic learning setting, two
algorithms depending on whether the considered loss function is
gradient-Lipschitz or not. Then, we instantiate our framework on several
applications including vanilla sparse, group-sparse and low-rank matrix
recovery. This leads, for each application, to efficient and robust learning
algorithms that reach near-optimal estimation rates under heavy-tailed
distributions and in the presence of outliers. For vanilla $s$-sparsity, we are
able to reach the $s\log (d)/n$ rate under heavy-tails and $\eta$-corruption,
at a computational cost comparable to that of non-robust analogs. We provide an
efficient implementation of our algorithms in an open-source $\mathtt{Python}$
library called $\mathtt{linlearn}$, by means of which we carry out numerical
experiments which confirm our theoretical findings together with a comparison
to other recent approaches proposed in the literature.
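For intuition, here is a minimal NumPy sketch of one instance of this kind of procedure for the vanilla $s$-sparse least-squares case: a median-of-means (MOM) gradient estimate plugged into proximal gradient descent with soft-thresholding. It is an illustrative toy only, under our own choice of names and hyperparameters; it is not necessarily the paper's exact algorithms and is not the $\mathtt{linlearn}$ API.

```python
import numpy as np

def mom_gradient(X, y, theta, n_blocks, rng):
    """Median-of-means estimate of the squared-loss gradient.

    Per-sample gradients are averaged inside random blocks and the
    coordinate-wise median of the block means is returned, which tames
    heavy tails and a small fraction of corrupted samples.
    """
    n = len(y)
    grads = (X @ theta - y)[:, None] * X                    # (n, d) per-sample gradients
    blocks = np.array_split(rng.permutation(n), n_blocks)
    block_means = np.stack([grads[b].mean(axis=0) for b in blocks])
    return np.median(block_means, axis=0)

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def robust_sparse_regression(X, y, lam=0.1, step=1e-2, n_blocks=11, n_iter=500, seed=0):
    """Proximal gradient descent (ISTA) driven by MOM gradient estimates."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        grad = mom_gradient(X, y, theta, n_blocks, rng)
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta
```

The MOM step keeps the usual $O(nd)$ per-iteration cost of a gradient step, which illustrates the sense in which robust methods of this kind can remain computationally comparable to non-robust analogs.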
Related papers
- Robust Sparse Estimation for Gaussians with Optimal Error under Huber Contamination [42.526664955704746]
We study sparse estimation tasks in Huber's contamination model with a focus on mean estimation, PCA, and linear regression.
For each of these tasks, we give the first sample and computationally efficient robust estimators with optimal error guarantees.
At the technical level, we develop a novel multidimensional filtering method in the sparse regime that may find other applications.
arXiv Detail & Related papers (2024-03-15T15:51:27Z)
- Robust Second-Order Nonconvex Optimization and Its Application to Low Rank Matrix Sensing [47.32366699399839]
Finding an approximate second-order stationary point (SOSP) is a well-studied and fundamental problem.
In this paper, we apply our framework to the problem of low-rank matrix sensing.
arXiv Detail & Related papers (2024-03-12T01:27:44Z)
- Computational-Statistical Gaps for Improper Learning in Sparse Linear Regression [4.396860522241307]
We show that an efficient learning algorithm for sparse linear regression can be used to solve sparse PCA problems with a negative spike.
We complement our reduction with low-degree and statistical query lower bounds for the sparse problems from which we reduce.
arXiv Detail & Related papers (2024-02-21T19:55:01Z)
- Efficiently Learning One-Hidden-Layer ReLU Networks via Schur Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^d$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon>0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z)
- Differentially-Private Hierarchical Clustering with Provable Approximation Guarantees [79.59010418610625]
We study differentially private approximation algorithms for hierarchical clustering.
We show strong lower bounds for the problem: that any $\epsilon$-DP algorithm must exhibit $O(|V|^2/\epsilon)$-additive error for an input dataset.
We propose a private $1+o(1)$ approximation algorithm which also recovers the blocks exactly.
arXiv Detail & Related papers (2023-01-31T19:14:30Z)
- Communication-Efficient Adam-Type Algorithms for Distributed Data Mining [93.50424502011626]
We propose a class of novel distributed Adam-type algorithms (i.e., SketchedAMSGrad) utilizing sketching.
Our new algorithm achieves a fast convergence rate of $O(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T})$ with the communication cost of $O(k\log(d))$ at each iteration.
arXiv Detail & Related papers (2022-10-14T01:42:05Z)
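To see where a per-iteration communication cost of $O(k\log(d))$ can come from in schemes like the one above, here is a hypothetical top-$k$ compression of a gradient vector: each worker ships only $k$ values plus $k$ indices (roughly $\log(d)$ bits per index) instead of all $d$ coordinates. This is a generic illustration of sketching; it does not reproduce the paper's actual sketching operator or its AMSGrad-type updates.

```python
import numpy as np

def top_k_compress(grad, k):
    """Keep the k largest-magnitude coordinates of a gradient.

    What gets communicated is the pair (indices, values): k indices of
    about log2(d) bits each plus k floats, rather than the full vector.
    """
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def decompress(idx, values, d):
    """Server-side reconstruction of the sparse gradient estimate."""
    out = np.zeros(d)
    out[idx] = values
    return out
```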
- Scalable Differentially Private Clustering via Hierarchically Separated Trees [82.69664595378869]
We show that our method computes a solution with cost at most $O(d^{3/2}\log n)\cdot OPT + O(k d^2 \log^2 n / \epsilon^2)$, where $\epsilon$ is the privacy guarantee.
Although the worst-case guarantee is worse than that of state-of-the-art private clustering methods, the algorithm we propose is practical.
arXiv Detail & Related papers (2022-06-17T09:24:41Z)
- Linear Algorithms for Nonparametric Multiclass Probability Estimation [0.0]
Weighted Support Vector Machines (wSVMs) have been developed to estimate class probabilities through ensemble learning.
We propose two new learning schemes, baseline learning and One-vs-All (OVA) learning, to further improve wSVMs in terms of computational efficiency and estimation accuracy.
arXiv Detail & Related papers (2022-05-25T03:15:22Z)
- Under-bagging Nearest Neighbors for Imbalanced Classification [63.026765294759876]
We propose an ensemble learning algorithm called under-bagging $k$-NN for imbalanced classification problems.
arXiv Detail & Related papers (2021-09-01T14:10:38Z)
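A minimal sketch of the under-bagging idea from the entry above, assuming a binary problem and scikit-learn's `KNeighborsClassifier`: the majority class is down-sampled into balanced bags, a $k$-NN model is fit per bag, and the probability estimates are averaged. The paper's exact estimator, bag construction, and theory are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def under_bagging_knn(X, y, X_test, n_bags=10, k=5, seed=0):
    """Toy under-bagging k-NN for binary imbalanced data (label 1 = minority)."""
    rng = np.random.default_rng(seed)
    minority = np.flatnonzero(y == 1)
    majority = np.flatnonzero(y == 0)
    probs = np.zeros(len(X_test))
    for _ in range(n_bags):
        # Down-sample the majority class to the minority size: a balanced bag.
        sub = rng.choice(majority, size=len(minority), replace=False)
        idx = np.concatenate([minority, sub])
        clf = KNeighborsClassifier(n_neighbors=k).fit(X[idx], y[idx])
        probs += clf.predict_proba(X_test)[:, 1]
    return probs / n_bags   # averaged minority-class probability estimates
```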
- Randomized Exploration for Reinforcement Learning with General Value Function Approximation [122.70803181751135]
We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm.
Our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises.
We complement the theory with an empirical evaluation across known difficult exploration tasks.
arXiv Detail & Related papers (2021-06-15T02:23:07Z)
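To make the "perturb the training data with i.i.d. scalar noises" idea in the entry above concrete, here is a hypothetical single regression step in the style of randomized least-squares value iteration: Gaussian noise is added to each Bellman target before a ridge-regularized least-squares fit, and the randomness of the fitted value function is what drives exploration. Feature construction, the noise schedule, and the full algorithm are omitted.

```python
import numpy as np

def perturbed_value_regression(Phi, targets, noise_std, reg=1.0, seed=0):
    """One RLSVI-style regression step on perturbed targets.

    Phi:     (n, d) features of visited state-action pairs.
    targets: (n,) Bellman targets, e.g. r + gamma * max_a Q_hat(s', a).
    """
    rng = np.random.default_rng(seed)
    noisy = targets + rng.normal(0.0, noise_std, size=targets.shape)
    # Ridge-regularized least squares on the perturbed targets.
    A = Phi.T @ Phi + reg * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ noisy)
```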
- Learning Sparse Classifiers: Continuous and Mixed Integer Optimization Perspectives [10.291482850329892]
Mixed integer programming (MIP) can be used to solve (to optimality) $\ell_0$-regularized regression problems.
We propose two classes of scalable algorithms: an exact algorithm that can handle $p \approx 50,000$ features in a few minutes, and approximate algorithms that can address instances with $p \approx 10^6$.
In addition, we present new estimation error bounds for $\ell_0$-regularized estimators.
arXiv Detail & Related papers (2020-01-17T18:47:02Z)
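For context on the entry above, a textbook big-$M$ way to cast $\ell_0$-regularized least squares as a mixed integer program (not necessarily the exact formulation used in that paper) is

$$\min_{\beta \in \mathbb{R}^p,\; z \in \{0,1\}^p} \ \tfrac{1}{2}\|y - X\beta\|_2^2 + \lambda \sum_{j=1}^{p} z_j \quad \text{subject to} \quad -M z_j \le \beta_j \le M z_j,\ \ j = 1,\dots,p,$$

so that $z_j = 0$ forces $\beta_j = 0$ and the penalty counts the number of active features; replacing the penalty by a cardinality constraint $\sum_j z_j \le s$ recovers best-subset selection.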
This list is automatically generated from the titles and abstracts of the papers on this site.