Transient growth of accelerated first-order methods for strongly convex
optimization problems
- URL: http://arxiv.org/abs/2103.08017v1
- Date: Sun, 14 Mar 2021 20:01:14 GMT
- Title: Transient growth of accelerated first-order methods for strongly convex
optimization problems
- Authors: Hesameddin Mohammadi, Samantha Samuelson, Mihailo R. Jovanović
- Abstract summary: In this paper, we examine the transient behavior of accelerated first-order optimization algorithms.
For quadratic optimization problems, we employ tools from linear systems theory to show that transient growth arises from the presence of non-normal dynamics.
For strongly convex smooth optimization problems, we utilize the theory of integral quadratic constraints to establish an upper bound on the magnitude of the transient response of Nesterov's accelerated method.
- Score: 1.6114012813668934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimization algorithms are increasingly being used in applications with
limited time budgets. In many real-time and embedded scenarios, only a few
iterations can be performed and traditional convergence metrics cannot be used
to evaluate performance in these non-asymptotic regimes. In this paper, we
examine the transient behavior of accelerated first-order optimization
algorithms. For quadratic optimization problems, we employ tools from linear
systems theory to show that transient growth arises from the presence of
non-normal dynamics. We identify the existence of modes that yield an algebraic
growth in early iterations and quantify the transient excursion from the
optimal solution caused by these modes. For strongly convex smooth optimization
problems, we utilize the theory of integral quadratic constraints to establish
an upper bound on the magnitude of the transient response of Nesterov's
accelerated method. We show that both the Euclidean distance between the
optimization variable and the global minimizer and the rise time to the
transient peak are proportional to the square root of the condition number of
the problem. Finally, for problems with large condition numbers, we demonstrate
tightness of the bounds that we derive up to constant factors.
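As a minimal numerical sketch (not code accompanying the paper), the transient growth described above can be reproduced on a scalar strongly convex quadratic f(x) = (m/2) x^2. The step size 1/L and momentum parameter (sqrt(kappa) - 1)/(sqrt(kappa) + 1) are the standard Nesterov choices for strongly convex problems, and the unit-norm initial state [x_0, x_{-1}] = [0, 1] (nonzero initial momentum) is an assumption made here to excite the non-normal dynamics; both the peak of |x_k - x*| and the iteration at which it occurs should then grow like sqrt(kappa), consistent with the bounds stated in the abstract.
```python
# Minimal sketch (assumptions noted above): transient growth of Nesterov's
# accelerated method on the scalar quadratic f(x) = (m/2) x^2 with x* = 0.
# The algorithm state is [x_k, x_{k-1}]; starting from the unit-norm state
# [x_0, x_{-1}] = [0, 1], the distance |x_k - x*| first grows before decaying.
import math

def nesterov_transient(kappa, m=1.0, iters=None):
    """Run Nesterov's method on f(x) = (m/2) x^2 with condition number kappa."""
    L = kappa * m
    alpha = 1.0 / L                                             # step size 1/L
    beta = (math.sqrt(kappa) - 1.0) / (math.sqrt(kappa) + 1.0)  # momentum parameter
    iters = iters or int(10 * math.sqrt(kappa))
    x_prev, x = 1.0, 0.0            # x_{-1} = 1, x_0 = 0 (start at the minimizer)
    errors = [abs(x)]
    for _ in range(iters):
        y = x + beta * (x - x_prev)          # extrapolation (momentum) step
        x_prev, x = x, y - alpha * (m * y)   # gradient step: grad f(y) = m * y
        errors.append(abs(x))
    return errors

for kappa in (1e2, 1e4, 1e6):
    errs = nesterov_transient(kappa)
    peak = max(errs)
    rise = errs.index(peak)
    print(f"kappa = {kappa:.0e}: peak |x_k - x*| ~ {peak:9.1f} "
          f"(sqrt(kappa)/e = {math.sqrt(kappa) / math.e:9.1f}), "
          f"rise time ~ {rise:6d} (sqrt(kappa) = {math.sqrt(kappa):.0f})")
```
Running the script shows the peak error tracking sqrt(kappa)/e and the rise time tracking sqrt(kappa) up to small corrections, even though the iterates start at the minimizer itself; this illustrates how non-normal modes amplify an initial momentum mismatch before the accelerated convergence rate takes over.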
Related papers
- The inexact power augmented Lagrangian method for constrained nonconvex optimization [44.516958213972885]
This work introduces an unconventional augmented Lagrangian method in which the augmenting term is a Euclidean norm raised to a power.
Our results show that using lower powers for the augmenting term yields a faster rate, albeit with a slower decrease in the residual.
arXiv Detail & Related papers (2024-10-26T11:31:56Z) - From exponential to finite/fixed-time stability: Applications to optimization [0.0]
Given an exponentially stable optimization algorithm, can it be modified to obtain a finite/fixed-time stable algorithm?
We provide an affirmative answer and demonstrate how the solution can be computed on a finite-time interval via a simple scaling of the right-hand side of the original dynamics (a classical scalar illustration of this contrast is sketched after this list).
We certify the desired properties of the modified algorithm using the Lyapunov function that proves exponential stability of the original system.
arXiv Detail & Related papers (2024-09-18T05:43:22Z) - ODE-based Learning to Optimize [28.380622776436905]
We present a comprehensive framework built on the inertial system with Hessian-driven damping (ISHD).
We formulate a novel learning to optimize (L2O) problem aimed at minimizing the stopping time subject to convergence and stability conditions.
Empirical validation of our framework is conducted through extensive numerical experiments across a diverse set of optimization problems.
arXiv Detail & Related papers (2024-06-04T06:39:45Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast
Evasion of Non-Degenerate Saddle Points [8.452349885923507]
Gradient-based first-order convex optimization algorithms find widespread applicability in a variety of domains, including machine learning tasks.
Motivated by recent advances in the fixed-time stability theory of dynamical systems, we introduce a framework for designing accelerated optimization algorithms.
For functions that admit non-degenerate saddle points, we show that the time required to evade these saddle points is uniformly bounded for all initial conditions.
arXiv Detail & Related papers (2022-12-07T16:36:23Z) - Breaking the Convergence Barrier: Optimization via Fixed-Time Convergent
Flows [4.817429789586127]
We introduce an optimization framework for achieving acceleration, built on the notion of fixed-time stability of dynamical systems.
We validate the accelerated convergence properties of the proposed schemes on a range of numerical examples against the state-of-the-art optimization algorithms.
arXiv Detail & Related papers (2021-12-02T16:04:40Z) - Faster Algorithm and Sharper Analysis for Constrained Markov Decision
Process [56.55075925645864]
The problem of the constrained Markov decision process (CMDP) is investigated, where an agent aims to maximize the expected accumulated discounted reward subject to multiple constraints.
A new primal-dual approach is proposed with a novel integration of three ingredients: a regularized policy optimizer, a dual regularizer, and Nesterov's accelerated gradient descent on the dual.
This is the first demonstration that nonconcave CMDP problems can attain the complexity lower bound of $\mathcal{O}(1/\epsilon)$ for convex optimization subject to convex constraints.
arXiv Detail & Related papers (2021-10-20T02:57:21Z) - Optimization on manifolds: A symplectic approach [127.54402681305629]
We propose a dissipative extension of Dirac's theory of constrained Hamiltonian systems as a general framework for solving optimization problems.
Our class of (accelerated) algorithms are not only simple and efficient but also applicable to a broad range of contexts.
arXiv Detail & Related papers (2021-07-23T13:43:34Z) - Acceleration Methods [57.202881673406324]
We first use quadratic optimization problems to introduce two key families of acceleration methods.
We discuss momentum methods in detail, starting with the seminal work of Nesterov.
We conclude by discussing restart schemes, a set of simple techniques for reaching nearly optimal convergence rates.
arXiv Detail & Related papers (2021-01-23T17:58:25Z) - An Asymptotically Optimal Primal-Dual Incremental Algorithm for
Contextual Linear Bandits [129.1029690825929]
We introduce a novel algorithm improving over the state-of-the-art along multiple dimensions.
We establish minimax optimality for any learning horizon in the special case of non-contextual linear bandits.
arXiv Detail & Related papers (2020-10-23T09:12:47Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove an $\widetilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
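Referring back to the finite/fixed-time stability entry above: the sketch below is a classical scalar illustration of the exponential-versus-finite-time contrast obtained by rescaling the right-hand side of a flow; it is a toy example under stated assumptions, not the construction from that paper. Dividing the right-hand side of the exponentially stable flow xdot = -x by sqrt(|x|) gives xdot = -sign(x) * sqrt(|x|), whose trajectory reaches the equilibrium exactly at t = 2*sqrt(|x_0|) rather than only asymptotically.
```python
# Classical toy illustration (not the cited paper's method): compare the
# exponentially stable flow xdot = -x with the rescaled flow
# xdot = -sign(x) * sqrt(|x|), which reaches 0 in finite time t = 2*sqrt(|x0|).
import math

def exp_flow(x0, t):
    """Closed-form solution of xdot = -x."""
    return x0 * math.exp(-t)

def finite_time_flow(x0, t):
    """Closed-form solution of xdot = -sign(x) * sqrt(|x|)."""
    root = max(math.sqrt(abs(x0)) - t / 2.0, 0.0)  # hits zero at t = 2*sqrt(|x0|)
    return math.copysign(root ** 2, x0)

x0 = 4.0
print(f"finite settling time of the rescaled flow: t = {2 * math.sqrt(abs(x0)):.1f}")
for t in (0.0, 1.0, 2.0, 4.0, 8.0):
    print(f"t = {t:4.1f}: exponential -> {exp_flow(x0, t):8.4f}, "
          f"finite-time -> {finite_time_flow(x0, t):8.4f}")
```
The exponential flow never exactly reaches the equilibrium, while the rescaled flow settles at t = 4 for x_0 = 4; the cited paper certifies such finite/fixed-time behavior using the Lyapunov function that proves exponential stability of the original system.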
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.