A Unified Analysis for the Subgradient Methods Minimizing Composite
Nonconvex, Nonsmooth and Non-Lipschitz Functions
- URL: http://arxiv.org/abs/2308.16362v1
- Date: Wed, 30 Aug 2023 23:34:11 GMT
- Title: A Unified Analysis for the Subgradient Methods Minimizing Composite
Nonconvex, Nonsmooth and Non-Lipschitz Functions
- Authors: Daoli Zhu and Lei Zhao and Shuzhong Zhang
- Abstract summary: We present a novel convergence analysis in the context of non-Lipschitz and nonsmooth optimization problems.
Under any of the subgradient upper bounding conditions introduced in the paper, we show that an $O(1/\sqrt{T})$ rate holds in terms of the squared gradient norm of the Moreau envelope function, which further improves to $O(1/T)$ if, in addition, the uniform KL condition with exponent $1/2$ holds.
- Score: 8.960341489080609
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we propose a proximal subgradient method (Prox-SubGrad) for
solving nonconvex and nonsmooth optimization problems without assuming
Lipschitz continuity conditions. A number of subgradient upper bounds and their
relationships are presented. By means of these upper bounding conditions, we
establish some uniform recursive relations for the Moreau envelopes for weakly
convex optimization. This uniform scheme simplifies and unifies the proof
schemes to establish rate of convergence for Prox-SubGrad without assuming
Lipschitz continuity. We present a novel convergence analysis in this context.
Furthermore, we propose some new stochastic subgradient upper bounding
conditions and establish convergence and iteration complexity rates for the
stochastic subgradient method (Sto-SubGrad) to solve non-Lipschitz and
nonsmooth stochastic optimization problems. In particular, for both
deterministic and stochastic subgradient methods on weakly convex optimization
problems without Lipschitz continuity, under any of the subgradient upper
bounding conditions to be introduced in the paper, we show that $O(1/\sqrt{T})$
convergence rate holds in terms of the squared gradient norm of the Moreau
envelope function, which further improves to $O(1/T)$ if, in addition, the
uniform KL condition with exponent $1/2$ holds.
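To make the iteration concrete, here is a minimal 1-D sketch of a proximal subgradient step for $\min_x f(x) + r(x)$ with a nonsmooth $f$. The toy objective $f(x) = |x|$, the quadratic regularizer, and the step-size schedule are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Toy composite problem: f(x) = |x| (nonsmooth), r(x) = 0.5 * MU * x**2.
# The unique minimizer of f + r is x = 0.
MU = 0.1  # strength of the quadratic regularizer r (illustrative choice)

def subgrad_f(x):
    """A subgradient of f(x) = |x|; np.sign(0.0) = 0.0 is a valid element of [-1, 1]."""
    return np.sign(x)

def prox_r(v, eta):
    """Proximal map of r: argmin_x 0.5*MU*x**2 + (x - v)**2 / (2*eta)."""
    return v / (1.0 + eta * MU)

def prox_subgrad(x0, T=2000):
    """Prox-SubGrad-style loop: subgradient step on f, then prox step on r."""
    x = x0
    for k in range(1, T + 1):
        eta = 1.0 / np.sqrt(k)  # diminishing steps, in line with O(1/sqrt(T)) rates
        x = prox_r(x - eta * subgrad_f(x), eta)
    return x

x_final = prox_subgrad(5.0)  # approaches the minimizer x = 0
```

Note that no global Lipschitz constant for $f + r$ is used anywhere in the loop; the diminishing step sizes alone control the iterates.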
Related papers
- Inexact subgradient methods for semialgebraic functions [18.293072574300798]
Motivated by the widespread use of approximate derivatives in machine learning optimization, we study inexact subgradient methods with non-vanishing errors.
arXiv Detail & Related papers (2024-04-30T12:47:42Z) - Riemannian stochastic optimization methods avoid strict saddle points [68.80251170757647]
We show that policies under study avoid strict saddle points / submanifolds with probability 1.
This result provides an important sanity check as it shows that, almost always, the limit state of an algorithm can only be a local minimizer.
arXiv Detail & Related papers (2023-11-04T11:12:24Z) - Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach [57.92727189589498]
We propose an online convex optimization approach with two different levels of adaptivity.
We obtain $\mathcal{O}(\log V_T)$, $\mathcal{O}(d \log V_T)$ and $\hat{\mathcal{O}}(\sqrt{V_T})$ regret bounds for strongly convex, exp-concave and convex loss functions.
arXiv Detail & Related papers (2023-07-17T09:55:35Z) - Convex and Non-convex Optimization Under Generalized Smoothness [69.69521650503431]
Classical analysis of convex and non-convex optimization methods often requires Lipschitz continuity of the gradient, which restricts the class of functions covered by the analysis.
Recent work generalizes this setting via a non-uniform smoothness condition.
arXiv Detail & Related papers (2023-06-02T04:21:59Z) - Revisiting Subgradient Method: Complexity and Convergence Beyond Lipschitz Continuity [24.45688490844496]
The subgradient method is one of the most fundamental algorithmic schemes for nonsmooth optimization.
In this work, we first extend the typical iteration complexity results for the subgradient method to cover non-Lipschitz convex and weakly convex minimization.
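As a minimal illustration of the non-Lipschitz convex setting mentioned above, the plain subgradient method with diminishing steps can be sketched as follows; the objective and step-size schedule are illustrative assumptions, not this paper's analysis.

```python
import math

# f(x) = x**2 is convex but non-Lipschitz: its derivative 2x is unbounded,
# so f admits no global Lipschitz constant on the real line.

def subgradient_method(subgrad, x0, T=500):
    x = x0
    for k in range(1, T + 1):
        eta = 0.1 / math.sqrt(k)  # diminishing steps tame the unbounded subgradients
        x = x - eta * subgrad(x)
    return x

x_min = subgradient_method(lambda x: 2.0 * x, x0=3.0)  # true minimizer is 0
```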
arXiv Detail & Related papers (2023-05-23T15:26:36Z) - Convergence of Adam Under Relaxed Assumptions [72.24779199744954]
We show that Adam converges to $\epsilon$-stationary points with $O(\epsilon^{-4})$ gradient complexity under far more realistic conditions.
We also propose a variance-reduced version of Adam with an accelerated gradient complexity of $O(\epsilon^{-3})$.
arXiv Detail & Related papers (2023-04-27T06:27:37Z) - High-Probability Bounds for Stochastic Optimization and Variational
Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z) - Randomized Coordinate Subgradient Method for Nonsmooth Composite
Optimization [11.017632675093628]
Coordinate-type subgradient methods for nonsmooth problems are relatively underexplored, in part because of the restrictive Lipschitz-type assumptions in standard analyses.
arXiv Detail & Related papers (2022-06-30T02:17:11Z) - Escaping Saddle Points for Nonsmooth Weakly Convex Functions via
Perturbed Proximal Algorithms [0.0]
Main results are based on a novel characterization of $\epsilon$-approximate local minima for nonsmooth functions.
We show that under standard assumptions, the perturbed proximal point, perturbed proximal gradient and perturbed proximal linear algorithms find $\epsilon$-approximate local minima for nonsmooth weakly convex functions.
arXiv Detail & Related papers (2021-02-04T19:17:13Z) - Efficient Methods for Structured Nonconvex-Nonconcave Min-Max
Optimization [98.0595480384208]
We propose a generalization of the extragradient method which converges to a stationary point.
The algorithm applies not only to Euclidean spaces, but also to general $\ell_p$-normed finite-dimensional vector spaces.
arXiv Detail & Related papers (2020-10-31T21:35:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.