On The Convergence of Euler Discretization of Finite-Time Convergent Gradient Flows
- URL: http://arxiv.org/abs/2010.02990v5
- Date: Tue, 18 Mar 2025 09:03:49 GMT
- Title: On The Convergence of Euler Discretization of Finite-Time Convergent Gradient Flows
- Authors: Siqi Zhang, Mouhacine Benosman, Orlando Romero
- Abstract summary: We investigate the performance of two novel first-order optimization algorithms, namely the rescaled-gradient flow (RGF) and the signed-gradient flow (SGF). These algorithms are derived from the forward Euler discretization of finite-time convergent flows, comprised of non-Lipschitz dynamical systems, which locally converge to the minima of gradient-dominated functions.
- Score: 4.401622714202886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we investigate the performance of two novel first-order optimization algorithms, namely the rescaled-gradient flow (RGF) and the signed-gradient flow (SGF). These algorithms are derived from the forward Euler discretization of finite-time convergent flows, comprised of non-Lipschitz dynamical systems, which locally converge to the minima of gradient-dominated functions. We first characterize the closeness between the continuous flows and their discretizations, and then present (linear) convergence guarantees for the discrete algorithms (in the general and the stochastic case). Furthermore, in cases where problem parameters remain unknown or exhibit non-uniformity, we integrate a line-search strategy with RGF/SGF and provide convergence analysis in this setting. We then apply the proposed algorithms to academic examples and deep neural network training; our results show that our schemes achieve faster convergence than standard optimization alternatives.
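For intuition, the following minimal NumPy sketch shows what forward Euler discretizations of a rescaled-gradient flow and a signed-gradient flow can look like. The specific RGF normalization exponent, the step size eta, and the quadratic test function are illustrative assumptions and are not taken from the paper, which additionally combines such updates with a line-search strategy when problem parameters are unknown.

```python
import numpy as np

def rgf_step(x, grad_f, eta=0.1, p=4.0, eps=1e-12):
    # One forward-Euler step of a rescaled-gradient flow (RGF):
    #   x_{k+1} = x_k - eta * grad_f(x_k) / ||grad_f(x_k)||^((p-2)/(p-1)).
    # The exponent follows the rescaled-gradient flow common in the literature;
    # the paper's exact normalization may differ (assumption).
    g = grad_f(x)
    scale = np.linalg.norm(g) ** ((p - 2.0) / (p - 1.0)) + eps
    return x - eta * g / scale

def sgf_step(x, grad_f, eta=0.1):
    # One forward-Euler step of a signed-gradient flow (SGF):
    #   x_{k+1} = x_k - eta * sign(grad_f(x_k)).
    return x - eta * np.sign(grad_f(x))

# Illustrative gradient-dominated objective: f(x) = 0.5 * ||x||^2.
grad_f = lambda x: x
x_rgf = np.array([2.0, -1.5])
x_sgf = np.array([2.0, -1.5])
for _ in range(100):
    x_rgf = rgf_step(x_rgf, grad_f)
    x_sgf = sgf_step(x_sgf, grad_f)
print(np.linalg.norm(x_rgf), np.linalg.norm(x_sgf))  # both iterates end up near the minimizer
```

Because the underlying flows are non-Lipschitz near the minimizer, constant-step discretizations of this kind are typically analyzed as converging to a neighborhood of the optimum, which is consistent with the discretization-error characterization mentioned in the abstract.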
Related papers
- Fast Two-Time-Scale Stochastic Gradient Method with Applications in Reinforcement Learning [5.325297567945828]
We propose a new method for two-time-scale optimization that achieves significantly faster convergence than prior art.
We characterize the proposed algorithm under various conditions and show how it specializes to online sample-based methods.
arXiv Detail & Related papers (2024-05-15T19:03:08Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit the connection between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Reweighted Interacting Langevin Diffusions: an Accelerated Sampling
Method for Optimization [28.25662317591378]
We propose a new technique to accelerate sampling methods for solving difficult optimization problems.
Our method investigates the connection between posterior distribution sampling and optimization with Langevin dynamics.
arXiv Detail & Related papers (2023-01-30T03:48:20Z) - Generalized Gradient Flows with Provable Fixed-Time Convergence and Fast
Evasion of Non-Degenerate Saddle Points [8.452349885923507]
Gradient-based first-order convex optimization algorithms find widespread applicability in a variety of domains, including machine learning tasks.
Motivated by recent advances in fixed-time stability theory, we introduce a framework for designing accelerated optimization algorithms.
For functions that admit non-degenerate saddle points, we show that the time required to evade these saddle points is uniformly bounded for all initial conditions.
arXiv Detail & Related papers (2022-12-07T16:36:23Z) - First-Order Algorithms for Nonlinear Generalized Nash Equilibrium
Problems [88.58409977434269]
We consider the problem of computing an equilibrium in a class of nonlinear generalized Nash equilibrium problems (NGNEPs).
Our contribution is to provide two simple first-order algorithmic frameworks based on the quadratic penalty method and the augmented Lagrangian method.
We provide nonasymptotic theoretical guarantees for these algorithms.
arXiv Detail & Related papers (2022-04-07T00:11:05Z) - Convergence Rates of Two-Time-Scale Gradient Descent-Ascent Dynamics for
Solving Nonconvex Min-Max Problems [2.0305676256390934]
We characterize the finite-time performance of the continuous-time variant of simultaneous gradient descent-ascent algorithm.
Our results on the behavior of continuous-time algorithm may be used to enhance the convergence properties of its discrete-time counterpart.
arXiv Detail & Related papers (2021-12-17T15:51:04Z) - Breaking the Convergence Barrier: Optimization via Fixed-Time Convergent
Flows [4.817429789586127]
We introduce an optimization framework for achieving acceleration, based on the notion of fixed-time stability of dynamical systems.
We validate the accelerated convergence properties of the proposed schemes on a range of numerical examples against the state-of-the-art optimization algorithms.
arXiv Detail & Related papers (2021-12-02T16:04:40Z) - Second-Order Neural ODE Optimizer [11.92713188431164]
We show that a specific continuous-time OC methodology, called Differential Programming, can be adopted to derive backward ODEs for higher-order derivatives at the same O(1) memory cost.
The resulting method converges much faster than first-order baselines in wall-clock time.
Our framework also enables direct architecture optimization, such as the integration time of Neural ODEs, with second-order feedback policies.
arXiv Detail & Related papers (2021-09-29T02:58:18Z) - Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex
Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We derive lower bounds for this setting and design two optimal algorithms that attain them.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z) - A Contraction Theory Approach to Optimization Algorithms from
Acceleration Flows [1.90365714903665]
We use contraction theory to provide a principled methodology to design and discretize appropriate ODEs.
We propose a novel system of ODEs, namely the Accelerated-Contracting-Nesterov flow.
Remarkably, a simple explicit Euler discretization of this flow corresponds to the Nesterov acceleration method (a minimal sketch of this correspondence appears after this list).
arXiv Detail & Related papers (2021-05-18T21:11:37Z) - Fast Objective & Duality Gap Convergence for Non-Convex Strongly-Concave
Min-Max Problems with PL Condition [52.08417569774822]
This paper focuses on methods for solving smooth nonconvex strongly-concave min-max problems, which have received increasing attention due to their applications in deep learning (e.g., deep AUC maximization).
arXiv Detail & Related papers (2020-06-12T00:32:21Z) - IDEAL: Inexact DEcentralized Accelerated Augmented Lagrangian Method [64.15649345392822]
We introduce a framework for designing primal methods under the decentralized optimization setting where local functions are smooth and strongly convex.
Our approach consists of approximately solving a sequence of sub-problems induced by the accelerated augmented Lagrangian method.
When coupled with accelerated gradient descent, our framework yields a novel primal algorithm whose convergence rate is optimal and matched by recently derived lower bounds.
arXiv Detail & Related papers (2020-06-11T18:49:06Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove a $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first- and second-order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Optimization with Momentum: Dynamical, Control-Theoretic, and Symplectic
Perspectives [97.16266088683061]
The article rigorously establishes why symplectic discretization schemes are important for momentum-based optimization algorithms.
It provides a characterization of algorithms that exhibit accelerated convergence.
arXiv Detail & Related papers (2020-02-28T00:32:47Z) - Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart
for Nonconvex Optimization [73.38702974136102]
Various types of parameter restart schemes have been proposed for accelerated algorithms to facilitate their practical convergence.
In this paper, we propose a proximal gradient algorithm with momentum and flexible parameter restart for solving nonconvex and nonsmooth problems.
arXiv Detail & Related papers (2020-02-26T16:06:27Z)
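As a companion to the entry above on "A Contraction Theory Approach to Optimization Algorithms from Acceleration Flows", the sketch below shows the standard Nesterov accelerated gradient iteration that an explicit Euler discretization of an acceleration flow is reported to recover. The momentum schedule (k - 1)/(k + 2), the step size, and the quadratic test problem are illustrative assumptions, not the Accelerated-Contracting-Nesterov flow itself.

```python
import numpy as np

def nesterov_agd(grad_f, x0, step, iters=200):
    # Standard Nesterov accelerated gradient iteration with the
    # classical (k - 1) / (k + 2) momentum schedule (assumed form).
    x_prev = np.array(x0, dtype=float)
    y = x_prev.copy()
    for k in range(1, iters + 1):
        x = y - step * grad_f(y)                    # gradient step at the look-ahead point
        y = x + ((k - 1) / (k + 2)) * (x - x_prev)  # momentum extrapolation
        x_prev = x
    return x_prev

# Illustrative convex quadratic f(x) = 0.5 * x^T A x with minimizer at the origin.
A = np.diag([1.0, 10.0])
grad_f = lambda x: A @ x
print(nesterov_agd(grad_f, [5.0, 5.0], step=0.1))  # approaches the origin
```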