TreeDQN: Learning to minimize Branch-and-Bound tree
- URL: http://arxiv.org/abs/2306.05905v1
- Date: Fri, 9 Jun 2023 14:01:26 GMT
- Title: TreeDQN: Learning to minimize Branch-and-Bound tree
- Authors: Dmitry Sorokin, Alexander Kostin
- Abstract summary: Branch-and-Bound is a convenient approach to solving optimization tasks in the form of Mixed Integer Linear Programs.
The efficiency of the solver depends on the branching heuristic used to select a variable for splitting.
We propose a reinforcement learning method that can efficiently learn the branching.
- Score: 78.52895577861327
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Combinatorial optimization problems require an exhaustive search to find the
optimal solution. A convenient approach to solving combinatorial optimization
tasks in the form of Mixed Integer Linear Programs is Branch-and-Bound.
A Branch-and-Bound solver splits a task into two parts by dividing the domain of an
integer variable, then solves them recursively, producing a tree of nested
sub-tasks. The efficiency of the solver depends on the branching heuristic
used to select a variable for splitting. In the present work, we propose a
reinforcement learning method that can efficiently learn the branching
heuristic. We view the variable selection task as a tree Markov Decision
Process, prove that the Bellman operator adapted for the tree Markov Decision
Process is contracting in mean, and propose a modified learning objective for
the reinforcement learning agent. Our agent requires less training data and
produces smaller trees compared to previous reinforcement learning methods.
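The splitting-and-pruning loop described in the abstract can be sketched in miniature. The toy 0/1 knapsack instance and the two hand-written branching rules below are illustrative assumptions, not from the paper; a learned agent, as TreeDQN proposes, would replace the `select_var` function. The sketch shows how the choice of branching variable changes the size of the resulting search tree:

```python
# Minimal Branch-and-Bound sketch for a 0/1 knapsack (a simple MILP),
# illustrating how the branching heuristic shapes the search tree.
# Problem data and heuristics are illustrative, not from the paper.

def fractional_bound(values, weights, capacity, fixed):
    """Upper bound from the LP relaxation: fill the remaining capacity
    greedily by value/weight ratio, allowing one fractional item."""
    total = sum(values[i] for i, v in fixed.items() if v == 1)
    cap = capacity - sum(weights[i] for i, v in fixed.items() if v == 1)
    free = sorted((i for i in range(len(values)) if i not in fixed),
                  key=lambda i: values[i] / weights[i], reverse=True)
    for i in free:
        if weights[i] <= cap:
            cap -= weights[i]
            total += values[i]
        else:
            total += values[i] * cap / weights[i]  # fractional remainder
            break
    return total

def branch_and_bound(values, weights, capacity, select_var):
    """Recursively split on an integer variable chosen by `select_var`,
    pruning sub-tasks whose relaxation bound cannot beat the incumbent.
    Returns (best_value, nodes_explored) -- the tree size we want small."""
    best, nodes = [0], [0]

    def solve(fixed):
        nodes[0] += 1
        used = sum(weights[i] for i, v in fixed.items() if v == 1)
        if used > capacity:
            return  # infeasible sub-task
        if fractional_bound(values, weights, capacity, fixed) <= best[0]:
            return  # pruned by bound
        free = [i for i in range(len(values)) if i not in fixed]
        if not free:  # leaf: all variables fixed
            best[0] = max(best[0], sum(values[i] for i, v in fixed.items() if v == 1))
            return
        i = select_var(free, values, weights)  # the branching decision
        for v in (1, 0):  # two children: domains x_i = 1 and x_i = 0
            solve({**fixed, i: v})

    solve({})
    return best[0], nodes[0]

values, weights, capacity = [10, 13, 7, 8, 4], [5, 7, 4, 5, 3], 12

# Two hand-written branching rules; a trained RL agent would replace these.
first_free = lambda free, v, w: free[0]
best_ratio = lambda free, v, w: max(free, key=lambda i: v[i] / w[i])

for name, rule in [("first free", first_free), ("best ratio", best_ratio)]:
    value, tree_size = branch_and_bound(values, weights, capacity, rule)
    print(f"{name}: optimum={value}, tree size={tree_size}")
```

Both rules reach the same optimum, as any correct Branch-and-Bound must; they differ only in how many nodes they explore, which is exactly the quantity the paper's agent learns to minimize.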
Related papers
- Terminating Differentiable Tree Experts [77.2443883991608]
We propose a neuro-symbolic Differentiable Tree Machine that learns tree operations using a combination of transformers and Representation Products.
We first remove a series of different transformer layers that are used in every step by introducing a mixture of experts.
We additionally propose a new termination algorithm to provide the model the power to choose how many steps to make automatically.
arXiv Detail & Related papers (2024-07-02T08:45:38Z)
- Learning accurate and interpretable decision trees [27.203303726977616]
We develop approaches to design decision tree learning algorithms given repeated access to data from the same domain.
We study the sample complexity of tuning prior parameters in Bayesian decision tree learning, and extend our results to decision tree regression.
We also study the interpretability of the learned decision trees and introduce a data-driven approach for optimizing the explainability versus accuracy trade-off using decision trees.
arXiv Detail & Related papers (2024-05-24T20:10:10Z)
- Tree Prompting: Efficient Task Adaptation without Fine-Tuning [112.71020326388029]
Tree Prompting builds a decision tree of prompts, linking multiple LM calls together to solve a task.
Experiments on classification datasets show that Tree Prompting improves accuracy over competing methods and is competitive with fine-tuning.
arXiv Detail & Related papers (2023-10-21T15:18:22Z)
- Interpretable Decision Tree Search as a Markov Decision Process [8.530182510074983]
Finding an optimal decision tree for a supervised learning task is a challenging problem to solve at scale.
It was recently proposed to frame the problem as a Markov Decision Problem (MDP) and use deep reinforcement learning to tackle scaling.
We propose instead to scale such MDPs using an information-theoretic test-generating function that produces a set of candidate tests for every state.
arXiv Detail & Related papers (2023-09-22T08:18:08Z)
- Mixture of Decision Trees for Interpretable Machine Learning [4.180840853105103]
This work introduces a novel interpretable machine learning method called Mixture of Decision Trees (MoDT)
MoDT improves performance while maintaining interpretability: each of its decisions remains understandable and traceable to humans.
arXiv Detail & Related papers (2022-11-26T17:09:51Z)
- Quant-BnB: A Scalable Branch-and-Bound Method for Optimal Decision Trees with Continuous Features [5.663538370244174]
We present a new discrete optimization method based on branch-and-bound (BnB) to obtain optimal decision trees.
Our proposed algorithm Quant-BnB shows significant speedups compared to existing approaches for shallow optimal trees on various real datasets.
arXiv Detail & Related papers (2022-06-23T17:19:29Z)
- Reinforcement Learning for Branch-and-Bound Optimisation using Retrospective Trajectories [72.15369769265398]
Machine learning has emerged as a promising paradigm for branching.
We propose retro branching; a simple yet effective approach to RL for branching.
We outperform the current state-of-the-art RL branching algorithm by 3-5x and come within 20% of the best IL method's performance on MILPs with 500 constraints and 1000 variables.
arXiv Detail & Related papers (2022-05-28T06:08:07Z)
- Learning Pseudo-Backdoors for Mixed Integer Programs [48.36587539004464]
We propose a machine learning approach for solving Mixed Integer Programs (MIP) by learning to prioritize a set of decision variables, which we call pseudo-backdoors, for branching that results in faster solution times.
Our approach takes inspiration from the concept of strong backdoors, which corresponds to a small set of variables such that only branching on these variables yields an optimal integral solution and a proof of optimality.
A key advantage of pseudo-backdoors over strong backdoors is that they are much more amenable to data-driven identification or prediction.
arXiv Detail & Related papers (2021-06-09T13:59:53Z)
- Learning Binary Decision Trees by Argmin Differentiation [34.9154848754842]
We learn binary decision trees that partition data for some downstream task.
We do so by relaxing a mixed-integer program for the discrete parameters.
We derive customized algorithms to efficiently compute the forward and backward passes.
arXiv Detail & Related papers (2020-10-09T15:11:28Z)
- Generalized and Scalable Optimal Sparse Decision Trees [56.35541305670828]
We present techniques that produce optimal decision trees over a variety of objectives.
We also introduce a scalable algorithm that produces provably optimal results in the presence of continuous variables.
arXiv Detail & Related papers (2020-06-15T19:00:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.