No-Regret Dynamics in the Fenchel Game: A Unified Framework for
Algorithmic Convex Optimization
- URL: http://arxiv.org/abs/2111.11309v1
- Date: Mon, 22 Nov 2021 16:10:18 GMT
- Title: No-Regret Dynamics in the Fenchel Game: A Unified Framework for
Algorithmic Convex Optimization
- Authors: Jun-Kun Wang and Jacob Abernethy and Kfir Y. Levy
- Abstract summary: We develop an algorithmic framework for solving convex optimization problems using no-regret game dynamics.
A common choice for these strategies is the class of so-called no-regret learning algorithms.
We show that many classical first-order methods for convex optimization can be interpreted as special cases of our framework.
- Score: 20.718016474717196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop an algorithmic framework for solving convex optimization problems
using no-regret game dynamics. By converting the problem of minimizing a convex
function into an auxiliary problem of solving a min-max game in a sequential
fashion, we can consider a range of strategies for each of the two players, who
must select their actions one after the other. A common choice for these
strategies is the class of so-called no-regret learning algorithms; we describe a
number of these algorithms and prove bounds on their regret. We then show that many
classical first-order methods for convex optimization -- including
average-iterate gradient descent, the Frank-Wolfe algorithm, the Heavy Ball
algorithm, and Nesterov's acceleration methods -- can be interpreted as special
cases of our framework as long as each player makes the correct choice of
no-regret strategy. Proving convergence rates in this framework becomes very
straightforward, as they follow from plugging in the appropriate known regret
bounds. Our framework also gives rise to a number of new first-order methods
for special cases of convex optimization that were not previously known.
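To make the dynamics concrete, here is a minimal numerical sketch of the game reformulation $\min_x \max_y \langle x, y\rangle - f^*(y)$ in which the y-player runs Follow-The-Leader and the x-player plays a best response through a linear minimization oracle; averaging the x-player's actions recovers a Frank-Wolfe-style method. The quadratic objective, the simplex constraint set, and the helper names (grad_f, lmo_simplex) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy instance: minimize f(x) = 0.5 * ||A x - b||^2 over the probability simplex.
# Fenchel-game view: g(x, y) = <x, y> - f*(y); the y-player's Follow-The-Leader
# action equals the gradient of f at the weighted average of past x-plays, and
# the x-player best-responds through a linear minimization oracle (LMO).
# Averaging the x-iterates with weights alpha_t = t yields Frank-Wolfe-style
# updates with step size 2 / (t + 1).

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

def grad_f(x):
    return A.T @ (A @ x - b)

def lmo_simplex(y):
    """argmin_{x in simplex} <x, y>: a vertex of the simplex."""
    e = np.zeros_like(y)
    e[np.argmin(y)] = 1.0
    return e

T = 500
x_avg = np.ones(5) / 5                 # running weighted average of x-plays (starts at a simplex point)
for t in range(1, T + 1):
    y_t = grad_f(x_avg)                # y-player: Follow-The-Leader
    x_t = lmo_simplex(y_t)             # x-player: best response (LMO)
    x_avg += (2.0 / (t + 1)) * (x_t - x_avg)

print("objective at averaged iterate:", 0.5 * np.linalg.norm(A @ x_avg - b) ** 2)
```

The O(1/T) convergence of the averaged iterate then follows by summing the two players' regret bounds, which is the style of argument the framework formalizes.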
Related papers
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Invex Programs: First Order Algorithms and Their Convergence [66.40124280146863]
Invex programs are a special kind of non-convex problems that attain global minima at every stationary point.
We propose new first-order algorithms for solving general invex problems and analyze their convergence rates.
Our proposed algorithm is the first algorithm to solve constrained invex programs.
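For reference, the property cited above can be stated through the standard notion of invexity (background included here for clarity, not quoted from the linked abstract): a differentiable f is invex when

```latex
% Standard definition (background): f is invex if there exists a kernel
% eta : R^n x R^n -> R^n such that
\[
  f(x) - f(u) \;\ge\; \big\langle \nabla f(u),\, \eta(x, u) \big\rangle
  \qquad \text{for all } x, u \in \mathbb{R}^{n}.
\]
% A classical equivalence: a differentiable f is invex if and only if every
% stationary point (\nabla f(u) = 0) is a global minimizer.
```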
arXiv Detail & Related papers (2023-07-10T10:11:01Z) - Faster Margin Maximization Rates for Generic and Adversarially Robust Optimization Methods [20.118513136686452]
First-order optimization methods tend to inherently favor certain solutions over others when minimizing an underdetermined training objective.
We present a series of state-of-the-art implicit bias rates for mirror descent and steepest descent algorithms.
Our accelerated rates are derived by leveraging the regret bounds of online learning algorithms within this game framework.
arXiv Detail & Related papers (2023-05-27T18:16:56Z) - Infeasible Deterministic, Stochastic, and Variance-Reduction Algorithms for Optimization under Orthogonality Constraints [9.301728976515255]
This article provides new practical and theoretical developments for the landing algorithm.
First, the method is extended to the Stiefel manifold.
We also consider variance reduction algorithms when the cost function is an average of many functions.
arXiv Detail & Related papers (2023-03-29T07:36:54Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling its differentiable and non-differentiable parts separately, linearizing only the smooth components.
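The idea of linearizing only the smooth part can be illustrated on a simpler additive composite model (the fully composite setting of the paper is more general): with smooth f, nonsmooth convex h, and a compact convex set X, each step solves a partially linearized subproblem over X and averages. The quadratic f, the box constraint, the l1 term, and the helper names below are assumptions made only for this sketch.

```python
import numpy as np

# Generalized conditional-gradient sketch: minimize F(x) = f(x) + h(x) over the
# box X = [-1, 1]^n, where f(x) = 0.5 * ||A x - b||^2 is smooth and
# h(x) = lam * ||x||_1 is nonsmooth.  Only f is linearized; the subproblem keeps
# h exact and is separable over coordinates for this choice of X and h.

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((30, n))
b = rng.standard_normal(30)
lam = 0.5

def grad_f(x):
    return A.T @ (A @ x - b)

def subproblem(g):
    """argmin_{s in [-1,1]^n} <g, s> + lam * ||s||_1 (closed form per coordinate)."""
    s = np.zeros_like(g)
    s[g > lam] = -1.0      # paying lam in |s_i| is worthwhile only when |g_i| > lam
    s[g < -lam] = 1.0
    return s

x = np.zeros(n)
for t in range(200):
    s = subproblem(grad_f(x))          # linearize f only; h handled exactly
    gamma = 2.0 / (t + 2)
    x = (1 - gamma) * x + gamma * s    # convex combination keeps x in the box

F = 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum()
print("composite objective:", F)
```

The box-plus-l1 choice is made only so the partially linearized subproblem has a closed form; the point of the sketch is that the nonsmooth term is never linearized.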
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
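As a schematic of constraining velocities rather than positions (a toy sketch of the general idea under illustrative assumptions, not the accelerated algorithm from the paper): for a single affine constraint $a^\top x \le b$, each velocity is required to satisfy $\langle a, v\rangle \le -\alpha\,(a^\top x - b)$, so a violated constraint forces a velocity that reduces the violation, and the iterate then moves a small step along the projected velocity.

```python
import numpy as np

# Schematic "constraints on velocities" update for
#   min 0.5 * ||x - c||^2   subject to   a^T x <= b.
# The raw velocity -grad f(x) is projected onto the half-space of admissible
# velocities {v : <a, v> <= -alpha * (a^T x - b)}, so the velocity must point
# back toward feasibility whenever the constraint is (nearly) violated.

a = np.array([1.0, 1.0])
b = 1.0
c = np.array([2.0, 2.0])             # unconstrained minimizer, infeasible here
alpha, eta = 1.0, 0.1

def admissible_velocity(x):
    v = c - x                        # -grad f(x) for f(x) = 0.5 * ||x - c||^2
    cap = -alpha * (a @ x - b)       # velocity budget along a
    excess = a @ v - cap
    if excess > 0:                   # project v onto the half-space <a, v> <= cap
        v = v - (excess / (a @ a)) * a
    return v

x = np.zeros(2)
for _ in range(500):
    x = x + eta * admissible_velocity(x)

print("x =", x, " constraint value a^T x - b =", a @ x - b)
# For this toy instance the iterate approaches the constrained minimizer (0.5, 0.5).
```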
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - The Curse of Unrolling: Rate of Differentiating Through Optimization [35.327233435055305]
Unrolled differentiation approximates the solution using an iterative solver and differentiates it through the computational path.
We show that we can either (1) choose a large learning rate, leading to fast convergence but accepting that the algorithm may have an arbitrarily long burn-in phase, or (2) choose a smaller learning rate, leading to an immediate but slower convergence.
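The mechanism being analyzed can be sketched with forward-mode unrolling of gradient descent on a toy quadratic (the objective, matrix, and step sizes below are illustrative assumptions, and the sketch does not attempt to reproduce the burn-in phenomenon itself): the Jacobian of the k-th iterate with respect to the problem parameter is propagated alongside the iterate and compared to the Jacobian of the exact solution.

```python
import numpy as np

# Forward-mode unrolled differentiation through gradient descent on
# f(x; theta) = 0.5 * x^T A x - theta^T x, whose exact solution is
# x*(theta) = A^{-1} theta, so the true Jacobian d x*/d theta is A^{-1}.
# Differentiating the unrolled iteration x_{k+1} = x_k - eta * (A x_k - theta)
# with respect to theta gives the Jacobian recursion
#   J_{k+1} = (I - eta * A) J_k + eta * I,   with J_0 = 0.

rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = M @ M.T + 0.5 * np.eye(4)              # symmetric positive definite
true_jac = np.linalg.inv(A)
L = np.linalg.eigvalsh(A).max()            # largest curvature, sets stable step sizes

def jacobian_error(eta, num_steps):
    """Error ||J_k - A^{-1}|| of the unrolled Jacobian after num_steps iterations."""
    J = np.zeros((4, 4))                   # the initial point does not depend on theta
    for _ in range(num_steps):
        J = (np.eye(4) - eta * A) @ J + eta * np.eye(4)
    return np.linalg.norm(J - true_jac)

for eta in (0.1 / L, 1.0 / L):             # a small and a larger (still stable) step size
    print(f"eta={eta:.4f}: Jacobian error after 300 unrolled steps = {jacobian_error(eta, 300):.2e}")
```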
arXiv Detail & Related papers (2022-09-27T09:27:29Z) - Understanding Modern Techniques in Optimization: Frank-Wolfe, Nesterov's
Momentum, and Polyak's Momentum [8.515692980023948]
We develop a modular framework that can serve as a recipe for constructing and analyzing iterative algorithms for convex optimization.
We show that our approach leads to three new fast Frank-Wolfe Nesterov algorithms for some constraint sets.
In the second part, we develop a modular analysis of Polyak momentum for certain problems.
arXiv Detail & Related papers (2021-06-23T17:53:39Z) - Convergence of adaptive algorithms for weakly convex constrained
optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
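For context, the near-stationarity measure referenced above is the standard one for weakly convex problems (background, not quoted from the linked abstract): for a parameter $\lambda > 0$ small enough relative to the weak-convexity modulus,

```latex
% Moreau envelope of f with parameter lambda > 0 and its gradient:
\[
  f_{\lambda}(x) \;=\; \min_{y}\Big\{ f(y) + \tfrac{1}{2\lambda}\,\|y - x\|^{2} \Big\},
  \qquad
  \nabla f_{\lambda}(x) \;=\; \tfrac{1}{\lambda}\big(x - \operatorname{prox}_{\lambda f}(x)\big).
\]
% A small \|\nabla f_\lambda(x)\| certifies that x is close to a point that is
% nearly stationary for f, which is why rates are stated for this quantity.
```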
arXiv Detail & Related papers (2020-06-11T17:43:19Z) - Stochastic Regret Minimization in Extensive-Form Games [109.43344748069933]
Monte-Carlo counterfactual regret minimization (MCCFR) is the state-of-the-art algorithm for solving sequential games that are too large for full tree traversals.
We develop a new framework for developing regret minimization methods.
We present extensive experiments on three games, where some variants of our methods outperform MCCFR.
arXiv Detail & Related papers (2020-02-19T23:05:41Z)