Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates
- URL: http://arxiv.org/abs/2307.08816v2
- Date: Tue, 27 Feb 2024 18:55:22 GMT
- Title: Accelerating Cutting-Plane Algorithms via Reinforcement Learning
Surrogates
- Authors: Kyle Mana, Fernando Acero, Stephen Mak, Parisa Zehtabi, Michael
Cashmore, Daniele Magazzeni, Manuela Veloso
- Abstract summary: A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
- Score: 49.84541884653309
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Discrete optimization belongs to the set of $\mathcal{NP}$-hard problems,
spanning fields such as mixed-integer programming and combinatorial
optimization. A current standard approach to solving convex discrete
optimization problems is the use of cutting-plane algorithms, which reach
optimal solutions by iteratively adding inequalities known as \textit{cuts} to
refine a feasible set. Despite the existence of a number of general-purpose
cut-generating algorithms, large-scale discrete optimization problems continue
to suffer from intractability. In this work, we propose a method for
accelerating cutting-plane algorithms via reinforcement learning. Our approach
uses learned policies as surrogates for $\mathcal{NP}$-hard elements of the cut
generating procedure in a way that (i) accelerates convergence, and (ii)
retains guarantees of optimality. We apply our method on two types of problems
where cutting-plane algorithms are commonly used: stochastic optimization, and
mixed-integer quadratic programming. We observe the benefits of our method when
applied to Benders decomposition (stochastic optimization) and iterative loss
approximation (quadratic programming), achieving up to $45\%$ faster average
convergence when compared to modern alternative algorithms.
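To make the mechanism concrete, the sketch below is a hedged illustration (not the authors' implementation) of the generic cutting-plane pattern the abstract describes, on a toy 1-D convex objective: a lower model built from cuts, a master step over that model, and a cut-proposal step where a learned policy could stand in for an expensive exact routine. The toy objective, the `surrogate_propose` heuristic, and all function names are assumptions introduced for illustration; the gap check between the incumbent upper bound and the model lower bound is what preserves the optimality guarantee even when the proposal step is heuristic.

```python
# Illustrative sketch only: a generic cutting-plane (outer-approximation) loop in the
# spirit of the abstract, NOT the paper's RL method. All names are hypothetical.
import numpy as np

def f(x):
    """Toy convex objective standing in for an expensive subproblem value."""
    return (x - 1.3) ** 2 + np.exp(-x)

def subgradient(x):
    """Exact gradient of f, used to generate a valid cut f(y) >= f(x) + g*(y - x)."""
    return 2.0 * (x - 1.3) - np.exp(-x)

def solve_master(cuts, lo, hi):
    """Exactly minimize the piecewise-linear lower model max_i [f_i + g_i*(x - x_i)]
    over [lo, hi]. In 1-D the minimizer sits at an endpoint or where two cut lines
    cross, so enumerating those candidates suffices (a real master would be an LP/MILP)."""
    slopes = np.array([g for (_, _, g) in cuts])
    intercepts = np.array([fi - g * xi for (xi, fi, g) in cuts])
    candidates = [lo, hi]
    for i in range(len(cuts)):
        for j in range(i + 1, len(cuts)):
            if abs(slopes[i] - slopes[j]) > 1e-12:
                x = (intercepts[j] - intercepts[i]) / (slopes[i] - slopes[j])
                if lo <= x <= hi:
                    candidates.append(x)
    xs = np.array(candidates)
    model_vals = np.max(intercepts[:, None] + slopes[:, None] * xs[None, :], axis=0)
    k = int(np.argmin(model_vals))
    return float(xs[k]), float(model_vals[k])

def surrogate_propose(x_master, lo, hi):
    """Hypothetical cheap proposal for where to cut next; in the paper's setting a
    learned policy would play this role for the NP-hard cut-generation step."""
    return float(np.clip(0.5 * (x_master + 0.5 * (lo + hi)), lo, hi))

def cutting_plane(lo=-2.0, hi=4.0, tol=1e-6, max_iter=100, use_surrogate=True):
    x0 = 0.5 * (lo + hi)
    cuts = [(x0, f(x0), subgradient(x0))]          # each cut: (point, value, gradient)
    best_x, best_ub = x0, f(x0)
    for it in range(max_iter):
        x_master, lower_bound = solve_master(cuts, lo, hi)   # lower bound from the model
        if f(x_master) < best_ub:                            # upper bound from a feasible point
            best_x, best_ub = x_master, f(x_master)
        if best_ub - lower_bound <= tol:                     # optimality gap check
            return best_x, best_ub, it
        x_cut = surrogate_propose(x_master, lo, hi) if use_surrogate else x_master
        cuts.append((x_cut, f(x_cut), subgradient(x_cut)))    # exact cuts keep the model valid
        cuts.append((x_master, f(x_master), subgradient(x_master)))
    return best_x, best_ub, max_iter

if __name__ == "__main__":
    x_star, val, iters = cutting_plane()
    print(f"approx. minimizer {x_star:.4f}, value {val:.6f}, stopped after {iters} iterations")
```

Because every appended cut comes from an exact subgradient, the model stays a valid under-estimator regardless of where the heuristic proposes to cut, which is the sense in which surrogate acceleration and optimality guarantees can coexist.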
Related papers
- A Pure Quantum Approximate Optimization Algorithm Based on CNR Operation [0.0]
We propose a general-purpose pure quantum approximate optimization algorithm.
The algorithm is constructed with a $p$-level divide-and-conquer structure.
We show the algorithm's performance in detail when the number of qubits required for the two optimization problems is 10.
arXiv Detail & Related papers (2023-10-27T06:54:39Z) - Accelerated First-Order Optimization under Nonlinear Constraints [73.2273449996098]
We exploit analogies between first-order algorithms for constrained optimization and non-smooth dynamical systems to design a new class of accelerated first-order algorithms.
An important property of these algorithms is that constraints are expressed in terms of velocities instead of positions.
arXiv Detail & Related papers (2023-02-01T08:50:48Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
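For orientation only, the snippet below sketches the entropic-regularization route to approximate optimal transport using plain Sinkhorn scaling iterations; it is not the extragradient method of the paper above, and the cost matrix, problem size, and regularization strength are arbitrary choices made for the example.

```python
# Minimal sketch: entropy-regularized optimal transport via Sinkhorn iterations.
# Context only; the paper above uses an extragradient-type method instead.
import numpy as np

def sinkhorn(mu, nu, C, reg=0.05, n_iter=500):
    """Approximate OT cost between histograms mu, nu with cost matrix C."""
    K = np.exp(-C / reg)                # Gibbs kernel
    u = np.ones_like(mu)
    for _ in range(n_iter):
        v = nu / (K.T @ u)              # alternating scaling updates
        u = mu / (K @ v)
    P = u[:, None] * K * v[None, :]     # transport plan
    return float(np.sum(P * C))         # approximate OT cost

rng = np.random.default_rng(0)
n = 50
x, y = rng.random(n), rng.random(n)
C = (x[:, None] - y[None, :]) ** 2      # squared-distance cost (arbitrary example)
mu = np.full(n, 1.0 / n)
nu = np.full(n, 1.0 / n)
print("entropic OT cost ≈", sinkhorn(mu, nu, C))
```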
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Optimizing Optimizers: Regret-optimal gradient descent algorithms [9.89901717499058]
We study the existence, uniqueness and consistency of regret-optimal algorithms.
By providing first-order optimality conditions for the control problem, we show that regret-optimal algorithms must satisfy a specific structure in their dynamics.
We present fast numerical methods for approximating them, generating optimization algorithms which directly optimize their long-term regret.
arXiv Detail & Related papers (2020-12-31T19:13:53Z) - Optimal and Practical Algorithms for Smooth and Strongly Convex
Decentralized Optimization [21.555331273873175]
We consider the task of decentralized minimization of the sum of smooth strongly convex functions stored across the nodes of a network.
We propose two new algorithms for this decentralized optimization problem and equip them with complexity guarantees.
arXiv Detail & Related papers (2020-06-21T11:23:20Z) - Effective Dimension Adaptive Sketching Methods for Faster Regularized
Least-Squares Optimization [56.05635751529922]
We propose a new randomized algorithm for solving L2-regularized least-squares problems based on sketching.
We consider two of the most popular random embeddings, namely, Gaussian embeddings and the Subsampled Randomized Hadamard Transform (SRHT).
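As a rough illustration of the sketch-and-solve idea behind such methods, the snippet below solves an L2-regularized least-squares problem after compressing it with a Gaussian embedding; the paper's adaptive, effective-dimension-based sketch sizes and the SRHT variant are not reproduced here, and all dimensions and the regularization value are arbitrary.

```python
# Rough sketch of "sketch-and-solve" for L2-regularized least squares with a
# Gaussian embedding. Dimensions and regularization are arbitrary for the example.
import numpy as np

rng = np.random.default_rng(1)
n, d, m, lam = 5000, 50, 400, 1e-2      # tall problem, sketch size m << n

A = rng.standard_normal((n, d))
b = A @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

# Exact ridge solution: (A^T A + lam*I)^{-1} A^T b
x_exact = np.linalg.solve(A.T @ A + lam * np.eye(d), A.T @ b)

# Gaussian sketch S; solve the smaller problem min ||S A x - S b||^2 + lam ||x||^2
S = rng.standard_normal((m, n)) / np.sqrt(m)
SA, Sb = S @ A, S @ b
x_sketch = np.linalg.solve(SA.T @ SA + lam * np.eye(d), SA.T @ Sb)

print("relative error of sketched solution:",
      np.linalg.norm(x_sketch - x_exact) / np.linalg.norm(x_exact))
```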
arXiv Detail & Related papers (2020-06-10T15:00:09Z) - Private Stochastic Convex Optimization: Optimal Rates in Linear Time [74.47681868973598]
We study the problem of minimizing the population loss given i.i.d. samples from a distribution over convex loss functions.
A recent work of Bassily et al. has established the optimal bound on the excess population loss achievable given $n$ samples.
We describe two new techniques for deriving convex optimization algorithms, both achieving the optimal bound on excess loss and using $O(\min\{n, n^2/d\})$ gradient computations.
arXiv Detail & Related papers (2020-05-10T19:52:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information above and is not responsible for any consequences arising from its use.