Lyapunov function approach for approximation algorithm design and
analysis: with applications in submodular maximization
- URL: http://arxiv.org/abs/2205.12442v1
- Date: Wed, 25 May 2022 02:09:56 GMT
- Title: Lyapunov function approach for approximation algorithm design and
analysis: with applications in submodular maximization
- Authors: Donglei Du
- Abstract summary: We propose a two-phase systematic framework for approximation algorithm design and analysis via Lyapunov functions.
The first phase consists of using a Lyapunov function as a guideline to design a continuous-time algorithm with provable approximation ratio.
The second phase then converts the continuous-time algorithm to a discrete-time algorithm with the same approximation ratio and a provable time complexity.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a two-phase systematic framework for approximation algorithm
design and analysis via Lyapunov functions. The first phase consists of using a
Lyapunov function as a guideline to design a continuous-time algorithm with
provable approximation ratio. The second phase then converts the
continuous-time algorithm to a discrete-time algorithm with the same
approximation ratio and a provable time complexity. Some immediate benefits of
the Lyapunov function approach include: (i) unifying many existing algorithms;
(ii) providing a guideline to design and analyze new algorithms; and (iii)
offering new perspectives to potentially improve existing algorithms. We use
various submodular maximization problems as running examples to illustrate our
framework.
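
To make the two-phase pattern concrete, the following is a minimal sketch and not the paper's construction: it assumes a monotone submodular coverage function under a cardinality constraint, takes the classic continuous-greedy dynamics dx/dt = v(x) (with v(x) maximizing <v, grad F(x)> over the constraint polytope) as the phase-one continuous-time algorithm, and applies a forward-Euler discretization with step size 1/T as phase two. The sampling estimator, the rounding step, and all function names are illustrative choices.

```python
# Illustrative sketch only (my assumptions, not the paper's construction):
# monotone submodular coverage function, cardinality constraint k, multilinear
# extension estimated by sampling, and the classic continuous-greedy dynamics
# dx/dt = v(x) discretized by forward Euler with step size 1/T.
import random

def coverage(S, sets):
    """Monotone submodular coverage function: size of the union of chosen sets."""
    covered = set()
    for i in S:
        covered |= sets[i]
    return len(covered)

def sampled_gradient(x, sets, samples=200):
    """Estimate the gradient of the multilinear extension F(x) = E[f(R(x))]."""
    n = len(x)
    grad = [0.0] * n
    for _ in range(samples):
        R = {i for i in range(n) if random.random() < x[i]}
        for i in range(n):
            grad[i] += coverage(R | {i}, sets) - coverage(R - {i}, sets)
    return [g / samples for g in grad]

def discretized_continuous_greedy(sets, k, T=50, seed=0):
    """Phase 2: forward-Euler discretization of the phase-1 continuous-greedy ODE."""
    random.seed(seed)
    n = len(sets)
    x = [0.0] * n
    for _ in range(T):
        grad = sampled_gradient(x, sets)
        # v(x): best feasible direction = indicator of the top-k coordinates.
        direction = sorted(range(n), key=lambda i: grad[i], reverse=True)[:k]
        for i in direction:
            x[i] = min(1.0, x[i] + 1.0 / T)  # Euler step: x <- x + v / T
    # Heuristic rounding for this illustration: keep the k largest coordinates.
    return sorted(range(n), key=lambda i: x[i], reverse=True)[:k]

if __name__ == "__main__":
    ground_sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}, {7}]
    print(discretized_continuous_greedy(ground_sets, k=2))
```

In the framework above, a Lyapunov (potential) function would be chosen so that its evolution along the trajectory x(t) certifies the approximation ratio (a (1 - 1/e)-type guarantee in this setting), and the discretization must preserve that certificate up to step-size error.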
Related papers
- Optimization Algorithm Design via Electric Circuits [13.101300549333297]
We present a novel methodology for convex optimization algorithm design using ideas from electric RLC circuits.
The first stage of the methodology is to design an appropriate electric circuit whose continuous-time dynamics converge to the solution of the optimization problem at hand.
The second stage is an automated, computer-assisted discretization of the continuous-time dynamics, yielding a provably convergent discrete-time algorithm (a sketch of this design-then-discretize pattern follows the related papers list).
arXiv Detail & Related papers (2024-11-04T20:10:59Z) - Regularized Q-Learning with Linear Function Approximation [2.765106384328772]
We consider a bi-level optimization formulation of regularized Q-learning with linear function approximation.
We show that, under certain assumptions, the proposed algorithm converges to a stationary point in the presence of Markovian noise.
arXiv Detail & Related papers (2024-01-26T20:45:40Z) - Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z) - Representation Learning with Multi-Step Inverse Kinematics: An Efficient
and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z) - Linearization Algorithms for Fully Composite Optimization [61.20539085730636]
This paper studies first-order algorithms for solving fully composite optimization problems over convex compact sets.
We leverage the structure of the objective by handling the differentiable and non-differentiable components separately, linearizing only the smooth parts.
arXiv Detail & Related papers (2023-02-24T18:41:48Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit connections between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Reinforcement Learning with Unbiased Policy Evaluation and Linear
Function Approximation [11.345796608258434]
We provide performance guarantees for a variant of simulation-based policy iteration for controlling Markov decision processes.
We analyze two algorithms; the first algorithm involves a least squares approach where a new set of weights associated with feature vectors is obtained via least squares at each iteration.
The second algorithm involves a two-time-scale stochastic approximation algorithm taking several steps of gradient descent towards the least squares solution.
arXiv Detail & Related papers (2022-10-13T20:16:19Z) - Two-Timescale Stochastic Approximation for Bilevel Optimisation Problems
in Continuous-Time Models [0.0]
We analyse the properties of a continuous-time, two-timescale approximation algorithm designed for bilevel optimisation problems in continuous-time models.
We obtain the weak convergence rate of this algorithm in the form of a central limit theorem.
arXiv Detail & Related papers (2022-06-14T17:12:28Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z) - A Unifying View of Optimism in Episodic Reinforcement Learning [18.73198634652064]
This paper provides a framework for designing, analyzing and implementing optimistic reinforcement learning algorithms.
Every model-optimistic algorithm that constructs an optimistic MDP has an equivalent representation as a value-optimistic dynamic programming algorithm.
arXiv Detail & Related papers (2020-07-03T18:10:30Z)
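
The "Optimization Algorithm Design via Electric Circuits" entry above follows the same design-then-discretize pattern. The sketch below is purely illustrative and is not that paper's methodology: it assumes the circuit dynamics reduce to a damped second-order ODE m x'' + c x' + grad f(x) = 0 for a convex quadratic f, and discretizes the flow with a semi-implicit Euler step, yielding a heavy-ball-style iteration.

```python
# Illustrative sketch only, not the cited paper's methodology: damped
# second-order dynamics whose trajectory settles at the minimizer of a
# convex quadratic, discretized by a semi-implicit Euler step.
import numpy as np

def grad_f(x, A, b):
    """Gradient of the convex quadratic f(x) = 0.5 x^T A x - b^T x."""
    return A @ x - b

def discretized_dynamics(A, b, m=1.0, c=2.0, h=0.05, steps=2000):
    """Discretize the continuous-time flow; x converges toward argmin f."""
    x = np.zeros(b.shape)
    v = np.zeros(b.shape)                     # velocity state of the "circuit"
    for _ in range(steps):
        a = -(c * v + grad_f(x, A, b)) / m    # acceleration from the ODE
        v = v + h * a                         # Euler step for velocity
        x = x + h * v                         # Euler step for position
    return x

if __name__ == "__main__":
    A = np.array([[3.0, 1.0], [1.0, 2.0]])    # positive definite
    b = np.array([1.0, 1.0])
    x_hat = discretized_dynamics(A, b)
    print(x_hat, np.linalg.solve(A, b))       # the two should be close
```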
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.