Curriculum learning for multilevel budgeted combinatorial problems
- URL: http://arxiv.org/abs/2007.03151v2
- Date: Mon, 26 Oct 2020 12:23:53 GMT
- Title: Curriculum learning for multilevel budgeted combinatorial problems
- Authors: Adel Nabli, Margarida Carvalho
- Abstract summary: Multilevel combinatorial optimization problems generalize single-level problems, encompassing situations with multiple players taking decisions sequentially.
We devise a value-based method to learn to solve multilevel budgeted problems involving two players in a zero-sum game over a graph.
Our framework is based on a simple curriculum: if an agent knows how to estimate the value of instances with budgets up to $B$, then solving instances with budget $B+1$ can be done in polynomial time, regardless of the direction of the optimization, by checking the value of every possible afterstate.
- Score: 7.804994311050265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning heuristics for combinatorial optimization problems through graph
neural networks has recently shown promising results on some classic NP-hard
problems. These are single-level optimization problems with only one player.
Multilevel combinatorial optimization problems are their generalization,
encompassing situations with multiple players taking decisions sequentially. By
framing them in a multi-agent reinforcement learning setting, we devise a
value-based method to learn to solve multilevel budgeted combinatorial problems
involving two players in a zero-sum game over a graph. Our framework is based
on a simple curriculum: if an agent knows how to estimate the value of
instances with budgets up to $B$, then solving instances with budget $B+1$ can
be done in polynomial time regardless of the direction of the optimization by
checking the value of every possible afterstate. Thus, in a bottom-up approach,
we generate datasets of heuristically solved instances with increasingly larger
budgets to train our agent. We report results close to optimality on graphs up
to $100$ nodes and a $185 \times$ speedup on average compared to the quickest
exact solver known for the Multilevel Critical Node problem, a max-min-max
trilevel problem that has been shown to be at least $\Sigma_2^p$-hard.
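To make the bottom-up curriculum concrete, the sketch below illustrates its control flow under stated assumptions: `GraphInstance`, `afterstates`, `generate_instances`, `value_net`, and `train` are hypothetical names, not the authors' code, and the value estimator stands in for the graph neural network used in the paper. The key step is that labeling a budget-$(B+1)$ instance only requires querying the budget-$B$ estimator on every afterstate and taking a max or a min according to whose turn it is.

```python
# Minimal sketch of the bottom-up curriculum described in the abstract.
# All names (GraphInstance, afterstates, value_net, train, generate_instances)
# are illustrative assumptions, not the authors' actual implementation.
from dataclasses import dataclass

@dataclass(frozen=True)
class GraphInstance:
    nodes: tuple              # node identifiers of the graph
    budgets: tuple            # remaining budgets, one entry per player
    player: int               # 0 = maximizing player, 1 = minimizing player
    removed: frozenset = frozenset()

def afterstates(inst):
    """Every state reachable by the current player spending one unit of budget."""
    spent = list(inst.budgets)
    spent[inst.player] -= 1
    for v in inst.nodes:
        if v not in inst.removed:
            yield GraphInstance(inst.nodes, tuple(spent), 1 - inst.player,
                                inst.removed | {v})

def target_value(inst, value_net):
    """Label a budget-(B+1) instance with an estimator trusted up to budget B:
    scan all afterstates and take max or min depending on whose turn it is."""
    values = [value_net(s) for s in afterstates(inst)]
    if not values:                      # no move left: fall back to the estimator
        return value_net(inst)
    return max(values) if inst.player == 0 else min(values)

def curriculum(generate_instances, value_net, train, max_budget):
    """Bottom-up loop: label instances of total budget b with the budget-(b-1)
    estimator, then retrain on the enlarged dataset."""
    dataset = []
    for b in range(1, max_budget + 1):
        for inst in generate_instances(total_budget=b):
            dataset.append((inst, target_value(inst, value_net)))
        value_net = train(value_net, dataset)
    return value_net
```

The exhaustive afterstate check is what makes the step from budget $B$ to $B+1$ polynomial regardless of whether the moving player maximizes or minimizes; the sketch only conveys the curriculum loop, not the learning architecture.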
Related papers
- Solving a Class of Non-Convex Minimax Optimization in Federated Learning [84.98927714326908]
Minimax problems arise throughout machine learning applications, ranging from model training to large-scale learning.
We propose a class of algorithms for non-convex minimax problems that reduce the complexity to $O(\varepsilon^{-6})$.
We prove that FedSGDA-M has the best sample complexity of $O(\kappa^{2}\varepsilon^{-3})$ and the best-known communication complexity of $O(\kappa^{2}\varepsilon^{-3})$.
arXiv Detail & Related papers (2023-10-05T15:48:41Z) - Landscape Surrogate: Learning Decision Losses for Mathematical
Optimization Under Partial Information [48.784330281177446]
Recent works in learning-integrated optimization have shown promise in settings where the optimization problem is only partially observed or where general-purpose optimizers perform poorly without expert tuning.
We propose using a smooth and learnable Landscape Surrogate as a replacement for $f \circ \mathbf{g}$.
This surrogate, learnable by neural networks, can be computed faster than the $\mathbf{g}$ solver, provides dense and smooth gradients during training, can generalize to unseen optimization problems, and is efficiently learned via alternating optimization.
arXiv Detail & Related papers (2023-07-18T04:29:16Z) - Computing Star Discrepancies with Numerical Black-Box Optimization
Algorithms [56.08144272945755]
We compare 8 popular numerical black-box optimization algorithms on the $L_{\infty}$ star discrepancy problem.
We show that all of the tested solvers perform very poorly on a large majority of the instances.
We suspect that state-of-the-art numerical black-box optimization techniques fail to capture the global structure of the problem.
arXiv Detail & Related papers (2023-06-29T14:57:56Z) - Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (thus having no limit cycles) and to find an appropriate stationary point in a polynomial number of iterations.
arXiv Detail & Related papers (2021-06-02T22:03:36Z) - A Gradient Method for Multilevel Optimization [8.80899367147235]
We develop a gradient-based algorithm for multilevel optimization with $n$ levels, based on Franceschi et al.'s idea.
As far as we know, this is one of the first algorithms with a theoretical guarantee for multilevel optimization.
arXiv Detail & Related papers (2021-05-28T16:22:10Z) - Efficient Online-Bandit Strategies for Minimax Learning Problems [21.300877551771197]
Several learning problems involve solving min-max problems, e.g., empirical distributionally robust learning or minimization with non-standard aggregated losses.
More specifically, these are convex-linear problems where the learning is carried out over the model parameters $w \in \mathcal{W}$ and over the empirical distribution $p \in \mathcal{K}$ of the training set.
To design efficient methods, we let an online learning algorithm play against a (combinatorial) bandit algorithm.
arXiv Detail & Related papers (2021-05-28T16:01:42Z) - Randomized Stochastic Variance-Reduced Methods for Stochastic Bilevel
Optimization [62.87181271021217]
We consider nonconvex stochastic bilevel optimization (SBO) problems that have many applications in machine learning.
This paper proposes fast randomized algorithms for nonconvex SBO problems.
arXiv Detail & Related papers (2021-05-05T18:28:42Z) - Learning from Survey Propagation: a Neural Network for MAX-E-$3$-SAT [0.0]
This paper presents a new algorithm for computing approximate solutions in $\Theta(N)$ for the Maximum Exact 3-Satisfiability (MAX-E-$3$-SAT) problem.
arXiv Detail & Related papers (2020-12-10T07:59:54Z) - Nonconvex Zeroth-Order Stochastic ADMM Methods with Lower Function Query
Complexity [109.54166127479093]
Zeroth-order (a.k.a. derivative-free) methods are a class of effective optimization methods for solving machine learning problems.
In this paper, we propose a class of faster zeroth-order stochastic alternating direction method of multipliers (ADMM) methods to solve nonconvex finite-sum problems.
We show that these methods achieve a lower function query complexity for finding an $\epsilon$-stationary point.
At the same time, we propose a class of faster zeroth-order online ADMM methods.
arXiv Detail & Related papers (2019-07-30T02:21:43Z)