Robust Training in High Dimensions via Block Coordinate Geometric Median
Descent
- URL: http://arxiv.org/abs/2106.08882v1
- Date: Wed, 16 Jun 2021 15:55:50 GMT
- Title: Robust Training in High Dimensions via Block Coordinate Geometric Median
Descent
- Authors: Anish Acharya, Abolfazl Hashemi, Prateek Jain, Sujay Sanghavi,
Inderjit S. Dhillon, Ufuk Topcu
- Abstract summary: Geometric median (\textsc{Gm}) is a classical method in statistics for achieving a robust estimation of the uncorrupted data.
In this paper, we show that by applying \textsc{Gm} to only a judiciously chosen block of coordinates at a time, one can retain a breakdown point of 0.5 for smooth nonconvex problems.
- Score: 69.47594803719333
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Geometric median (\textsc{Gm}) is a classical method in statistics for
achieving a robust estimation of the uncorrupted data; under gross corruption,
it achieves the optimal breakdown point of 0.5. However, its computational
complexity makes it infeasible for robustifying stochastic gradient descent
(SGD) for high-dimensional optimization problems. In this paper, we show that
by applying \textsc{Gm} to only a judiciously chosen block of coordinates at a
time and using a memory mechanism, one can retain the breakdown point of 0.5
for smooth non-convex problems, with non-asymptotic convergence rates
comparable to the SGD with \textsc{Gm}.
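
To make the aggregation step concrete, below is a minimal sketch (not the authors' implementation) of how a geometric-median update restricted to a single block of coordinates could be wired into gradient aggregation: the geometric median is computed with the classical Weiszfeld iteration on the chosen block only, and a remembered estimate stands in for the untouched coordinates as a crude proxy for the paper's memory mechanism. All names (`weiszfeld_gm`, `block_gm_aggregate`) and the uniformly random block selection are illustrative assumptions.

```python
import numpy as np

def weiszfeld_gm(points, n_iter=50, eps=1e-8):
    """Approximate the geometric median of the rows of `points` with the
    classical Weiszfeld fixed-point iteration."""
    z = points.mean(axis=0)                      # start from the coordinate-wise mean
    for _ in range(n_iter):
        d = np.linalg.norm(points - z, axis=1)   # distance of each point to the estimate
        w = 1.0 / np.maximum(d, eps)             # inverse-distance weights
        z = (w[:, None] * points).sum(axis=0) / w.sum()
    return z

def block_gm_aggregate(grads, memory, block_size, rng):
    """Aggregate per-worker gradients (rows of `grads`) by running the geometric
    median only on a randomly chosen block of coordinates and reusing the previous
    robust estimate elsewhere (a crude stand-in for the paper's memory mechanism)."""
    dim = grads.shape[1]
    block = rng.choice(dim, size=block_size, replace=False)
    agg = memory.copy()
    agg[block] = weiszfeld_gm(grads[:, block])   # GM on the low-dimensional block only
    return agg

# Toy usage: 10 workers, 2 of which send grossly corrupted gradients.
rng = np.random.default_rng(0)
grads = rng.normal(size=(10, 1000))
grads[:2] += 100.0                               # gross corruption of 20% of the rows
memory = np.zeros(1000)
robust_grad = block_gm_aggregate(grads, memory, block_size=50, rng=rng)
```

Because the Weiszfeld iteration runs only on a `block_size`-dimensional slice, its per-step cost does not scale with the full model dimension, which is the computational point of the block-coordinate scheme sketched here.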
Related papers
- Locally Regularized Sparse Graph by Fast Proximal Gradient Descent [6.882546996728011]
We propose a novel Support Regularized Sparse Graph, abbreviated as SRSG.
Sparse graphs have been shown to be effective in clustering high-dimensional data.
We show that SRSG is superior to other clustering methods.
arXiv Detail & Related papers (2024-09-25T16:57:47Z) - Generalization Bounds for Stochastic Gradient Descent via Localized
$\varepsilon$-Covers [16.618918548497223]
We propose a new covering technique localized for the trajectories of SGD.
This localization provides an algorithm-specific complexity measured by the covering number.
We derive these results in various contexts and improve upon the known state-of-the-art rates.
arXiv Detail & Related papers (2022-09-19T12:11:07Z) - Optimal Extragradient-Based Bilinearly-Coupled Saddle-Point Optimization [116.89941263390769]
We consider the smooth convex-concave bilinearly-coupled saddle-point problem, $\min_{\mathbf{x}}\max_{\mathbf{y}} F(\mathbf{x}) + H(\mathbf{x},\mathbf{y}) - G(\mathbf{y})$, where one has access to first-order oracles for $F$, $G$ as well as the bilinear coupling function $H$.
We present an \emph{accelerated gradient-extragradient} (AG-EG) descent-ascent algorithm that combines extragradient updates with acceleration (a minimal extragradient sketch appears after this list).
arXiv Detail & Related papers (2022-06-17T06:10:20Z) - Convergence of First-Order Methods for Constrained Nonconvex
Optimization with Dependent Data [7.513100214864646]
We show a worst-case convergence rate of $\tilde{O}(t^{-1/4})$ and complexity of $\tilde{O}(\varepsilon^{-4})$ in terms of the Moreau envelope for smooth nonconvex optimization.
We obtain the first online nonnegative matrix factorization algorithms for dependent data, based on projected gradient methods with adaptive step sizes and optimal convergence.
arXiv Detail & Related papers (2022-03-29T17:59:10Z) - Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector
Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the \emph{Stochastic Scaled-Gradient Descent} (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
arXiv Detail & Related papers (2021-12-29T18:46:52Z) - STORM+: Fully Adaptive SGD with Momentum for Nonconvex Optimization [74.1615979057429]
We investigate stochastic nonconvex optimization problems where the objective is an expectation over smooth loss functions.
Our work builds on the STORM algorithm, in conjunction with a novel approach to adaptively set the learning rate and momentum parameters.
arXiv Detail & Related papers (2021-11-01T15:43:36Z) - On Stochastic Moving-Average Estimators for Non-Convex Optimization [105.22760323075008]
In this paper, we demonstrate the power of a widely used stochastic estimator based on moving average (SEMA) on a range of nonconvex optimization problems.
We also present state-of-the-art results for all of these problems.
arXiv Detail & Related papers (2021-04-30T08:50:24Z) - Convergence Analysis of Nonconvex Distributed Stochastic Zeroth-order
Coordinate Method [3.860616339202303]
This paper investigates the distributed nonconvex optimization problem of minimizing a global cost function formed by the summation of $n$ local cost functions.
Agents use a stochastic zeroth-order (ZO) coordinate method to solve the problem.
arXiv Detail & Related papers (2021-03-24T03:07:46Z) - On the Almost Sure Convergence of Stochastic Gradient Descent in
Non-Convex Problems [75.58134963501094]
This paper analyzes the trajectories of stochastic gradient descent (SGD).
We show that SGD avoids strict saddle points/manifolds with probability $1$ for the full range of step-size policies considered.
arXiv Detail & Related papers (2020-06-19T14:11:26Z) - S-ADDOPT: Decentralized stochastic first-order optimization over
directed graphs [16.96562173221624]
We consider decentralized convex optimization to minimize a sum of smooth and strongly convex cost functions when the functions are distributed over the nodes of a directed network.
In particular, we propose the \textbf{\texttt{S-ADDOPT}} algorithm, which assumes a first-order oracle at each node.
For decaying step-sizes $\mathcal{O}(1/k)$, we show that \textbf{\texttt{S-ADDOPT}} converges to the exact solution sublinearly at $\mathcal{O}(1/k)$ and that its convergence is network-independent.
arXiv Detail & Related papers (2020-05-15T21:14:22Z)
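
For the bilinearly-coupled saddle-point entry above, here is a minimal sketch of a plain (non-accelerated) extragradient step on $\min_{\mathbf{x}}\max_{\mathbf{y}} F(\mathbf{x}) + \mathbf{x}^\top B\mathbf{y} - G(\mathbf{y})$ with simple quadratic choices of $F$ and $G$; it illustrates the extragradient building block only, not the AG-EG algorithm from that paper, and all names and constants are assumptions made for the example.

```python
import numpy as np

# Bilinearly-coupled saddle point: min_x max_y F(x) + x^T B y - G(y),
# with the simple quadratic choices F(x) = 0.5*||x||^2 and G(y) = 0.5*||y||^2.
rng = np.random.default_rng(1)
B = rng.normal(size=(5, 5))

def grad_x(x, y):
    return x + B @ y            # gradient of the objective in x

def grad_y(x, y):
    return B.T @ x - y          # gradient of the objective in y

def extragradient_step(x, y, eta):
    # Probe step: a gradient descent-ascent step from (x, y).
    x_half = x - eta * grad_x(x, y)
    y_half = y + eta * grad_y(x, y)
    # Correction step: update (x, y) using the gradients at the probe point.
    x_new = x - eta * grad_x(x_half, y_half)
    y_new = y + eta * grad_y(x_half, y_half)
    return x_new, y_new

x, y = np.ones(5), np.ones(5)
for _ in range(500):
    x, y = extragradient_step(x, y, eta=0.05)
print(np.linalg.norm(x), np.linalg.norm(y))   # both shrink toward the saddle point at the origin
```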