Gradient Descent Averaging and Primal-dual Averaging for Strongly Convex
Optimization
- URL: http://arxiv.org/abs/2012.14558v2
- Date: Sun, 17 Jan 2021 08:08:24 GMT
- Title: Gradient Descent Averaging and Primal-dual Averaging for Strongly Convex
Optimization
- Authors: Wei Tao, Wei Li, Zhisong Pan, Qing Tao
- Abstract summary: We develop gradient descent averaging (GDA) and primal-dual averaging (SC-PDA) algorithms for strongly convex cases.
We prove that GDA yields the optimal convergence rate in terms of output averaging, while SC-PDA achieves the optimal individual convergence.
Several experiments on SVMs and deep learning models validate the correctness of the theoretical analysis and the effectiveness of the algorithms.
- Score: 15.731908248435348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The averaging scheme has attracted extensive attention in deep
learning as well as traditional machine learning. It achieves the
theoretically optimal convergence and also improves empirical model
performance. However, there is still a lack of sufficient convergence
analysis for strongly convex optimization. Typically, the convergence of the
last iterate of gradient descent methods, which is referred to as individual
convergence, fails to attain its optimality due to the existence of a
logarithmic factor. In order to remove this factor, we first develop gradient
descent averaging (GDA), which is a general projection-based dual averaging
algorithm in the strongly convex setting. We further present primal-dual
averaging for strongly convex cases (SC-PDA), where primal and dual averaging
schemes are utilized simultaneously. We prove that GDA yields the optimal
convergence rate in terms of output averaging, while SC-PDA achieves the
optimal individual convergence. Several experiments on SVMs and deep learning
models validate the correctness of the theoretical analysis and the
effectiveness of the algorithms.
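To make the output-averaging idea concrete, the following is a minimal sketch for a lambda-strongly convex SVM objective (one of the settings used in the experiments). It illustrates output averaging only and is not the exact GDA or SC-PDA update from the paper: the function name, the 2/(lam*(t+1)) step size, the projection radius, and the weight-t averaging are standard choices for the strongly convex case that remove the logarithmic factor from the averaged output.

```python
import numpy as np

def weighted_average_subgradient_svm(X, y, lam=0.1, T=1000, seed=0):
    """Projected stochastic subgradient descent with weighted output averaging
    for the lambda-strongly convex L2-regularized hinge loss (linear SVM).

    Illustrative sketch only: the 2/(lam*(t+1)) step size and the weight-t
    output averaging are the classical strongly convex scheme, not the exact
    GDA / SC-PDA updates from the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)                             # current (individual) iterate
    w_avg = np.zeros(d)                         # weighted output average
    weight_sum = 0.0
    radius = 1.0 / np.sqrt(lam)                 # ball known to contain the optimum
    for t in range(1, T + 1):
        i = rng.integers(n)                     # sample one training example
        margin = y[i] * X[i].dot(w)
        grad = lam * w                          # gradient of the regularizer
        if margin < 1.0:
            grad = grad - y[i] * X[i]           # subgradient of the hinge loss
        w = w - (2.0 / (lam * (t + 1))) * grad  # strongly convex step size
        norm = np.linalg.norm(w)
        if norm > radius:
            w = w * (radius / norm)             # projection onto the ball
        weight_sum += t                         # weight iterate t by t
        w_avg = w_avg + (t / weight_sum) * (w - w_avg)
    return w_avg
```

With X an (n, d) feature matrix and y a vector of labels in {-1, +1}, the returned weighted average attains the optimal O(1/T) rate for this classical scheme; returning the last iterate w instead would illustrate the individual convergence that the paper studies.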
Related papers
- Gradient Normalization with(out) Clipping Ensures Convergence of Nonconvex SGD under Heavy-Tailed Noise with Improved Results [60.92029979853314]
This paper investigates normalized SGD with clipping (NSGDC) and its variance-reduced variant (NSGDC-VR).
We present significant improvements in the theoretical results for both algorithms.
arXiv Detail & Related papers (2024-10-21T22:40:42Z) - Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - Versatile Single-Loop Method for Gradient Estimator: First and Second
Order Optimality, and its Application to Federated Learning [45.78238792836363]
We present a single-loop algorithm named SLEDGE (Single-Loop-E Gradient Estimator) for periodic convergence.
Unlike existing methods, SLEDGE has the advantage of versatility: (i) second-order optimality, (ii) exponential convergence in the PL region, and (iii) smaller complexity under less heterogeneity of data.
arXiv Detail & Related papers (2022-09-01T11:05:26Z) - Stability and Generalization for Randomized Coordinate Descent [19.687456295228156]
There is no work studying how the models trained by RCD would generalize to test examples.
In this paper, we initialize the generalization analysis of RCD by leveraging the powerful tool of algorithmic stability.
Our analysis shows that RCD enjoys better stability as compared to gradient descent.
arXiv Detail & Related papers (2021-08-17T02:52:50Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide a small objective residual with high probability.
Existing methods for non-smooth convex optimization have complexity bounds with dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - The Role of Momentum Parameters in the Optimal Convergence of Adaptive
Polyak's Heavy-ball Methods [12.93796690939018]
We prove that the adaptive Polyak's Heavy-ball (HB) method attains an optimal individual convergence rate of $O(\frac{1}{\sqrt{t}})$.
Our new analysis shows how the HB momentum and its time-varying weight help to achieve acceleration in convex optimization (a generic heavy-ball sketch is given after this list).
arXiv Detail & Related papers (2021-02-15T02:57:14Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work studies zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Bilevel Optimization: Convergence Analysis and Enhanced Design [63.64636047748605]
Bilevel optimization is a tool for many machine learning problems.
We propose a novel stochastic bilevel optimizer named stocBiO, which features an efficient gradient estimator.
arXiv Detail & Related papers (2020-10-15T18:09:48Z) - Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave
Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth non-convex strongly-concave min-max problems, which have received increasing attention due to their applications in deep learning (e.g., deep AUC maximization).
arXiv Detail & Related papers (2020-06-12T00:32:21Z) - The Strength of Nesterov's Extrapolation in the Individual Convergence
of Nonsmooth Optimization [0.0]
We prove that Nesterov's extrapolation has the strength to make the individual convergence of gradient descent methods optimal for nonsmooth problems.
We give an extension of the derived algorithms to solve regularized learning tasks with nonsmooth losses in stochastic settings.
Our method is applicable as an efficient tool for solving large-scale $\ell_1$-regularized hinge-loss learning problems.
arXiv Detail & Related papers (2020-06-08T03:35:41Z)
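As referenced in the adaptive Polyak heavy-ball entry above, the following is a generic heavy-ball iteration with time-varying step-size and momentum schedules. The function name and the particular schedules (eta_t ~ 1/sqrt(t), beta_t = t/(t+2)) are illustrative assumptions, not necessarily the exact weights analyzed in that paper.

```python
import numpy as np

def heavy_ball(grad, x0, T=1000):
    """Generic Polyak heavy-ball iteration with time-varying parameters:

        x_{t+1} = x_t - eta_t * grad(x_t) + beta_t * (x_t - x_{t-1})

    The schedules below are assumed for illustration and are not necessarily
    those analyzed in the adaptive heavy-ball paper listed above.
    """
    x_prev = np.asarray(x0, dtype=float).copy()
    x = x_prev.copy()
    for t in range(1, T + 1):
        eta = 1.0 / np.sqrt(t)              # assumed diminishing step size
        beta = t / (t + 2.0)                # assumed time-varying momentum weight
        x_next = x - eta * grad(x) + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x

# Example: minimize f(x) = ||x||^2, whose gradient is 2x.
x_min = heavy_ball(lambda x: 2.0 * x, np.ones(5), T=2000)
```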
This list is automatically generated from the titles and abstracts of the papers in this site.