Parameterizing Branch-and-Bound Search Trees to Learn Branching Policies
- URL: http://arxiv.org/abs/2002.05120v4
- Date: Wed, 2 Jun 2021 20:11:03 GMT
- Title: Parameterizing Branch-and-Bound Search Trees to Learn Branching Policies
- Authors: Giulia Zarpellon, Jason Jo, Andrea Lodi and Yoshua Bengio
- Abstract summary: Branch and Bound (B&B) is the exact tree search method typically used to solve Mixed-Integer Linear Programming problems (MILPs).
We propose a novel imitation learning framework, and introduce new input features and architectures to represent branching.
- Score: 76.83991682238666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Branch and Bound (B&B) is the exact tree search method typically used to
solve Mixed-Integer Linear Programming problems (MILPs). Learning branching
policies for MILP has become an active research area, with most works proposing
to imitate the strong branching rule and specialize it to distinct classes of
problems. We aim instead at learning a policy that generalizes across
heterogeneous MILPs: our main hypothesis is that parameterizing the state of
the B&B search tree can aid this type of generalization. We propose a novel
imitation learning framework, and introduce new input features and
architectures to represent branching. Experiments on MILP benchmark instances
clearly show the advantages of incorporating an explicit parameterization of
the state of the search tree to modulate the branching decisions, in terms of
both higher accuracy and smaller B&B trees. The resulting policies
significantly outperform the current state-of-the-art method for "learning to
branch" by effectively allowing generalization to generic unseen instances.
Related papers
- BPP-Search: Enhancing Tree of Thought Reasoning for Mathematical Modeling Problem Solving [11.596474985695679]
We release the StructuredOR dataset, annotated with comprehensive labels that capture the complete mathematical modeling process.
We propose BPP-Search, an algorithm that integrates reinforcement learning into a tree-of-thought structure.
BPP-Search significantly outperforms state-of-the-art methods, including Chain-of-Thought, Self-Consistency, and Tree-of-Thought.
arXiv Detail & Related papers (2024-11-26T13:05:53Z) - Technical Report: Enhancing LLM Reasoning with Reward-guided Tree Search [95.06503095273395]
Implementing an o1-like reasoning approach is challenging, and researchers have been making various attempts to advance this open area of research.
We present a preliminary exploration into enhancing the reasoning abilities of LLMs through reward-guided tree search algorithms.
arXiv Detail & Related papers (2024-11-18T16:15:17Z) - An efficient solution to Hidden Markov Models on trees with coupled branches [0.0]
We extend the framework of Hidden Markov Models (HMMs) on trees to address scenarios where the tree-like structure of the data includes coupled branches.
We develop a dynamic programming algorithm that efficiently solves the likelihood, decoding, and parameter learning problems for tree-based HMMs with coupled branches; a sketch of the standard uncoupled upward pass appears after this list.
arXiv Detail & Related papers (2024-06-03T18:00:00Z) - Learning a Decision Tree Algorithm with Transformers [75.96920867382859]
We introduce MetaTree, a transformer-based model trained via meta-learning to directly produce strong decision trees.
We fit both greedy decision trees and globally optimized decision trees on a large number of datasets, and train MetaTree to produce only the trees that achieve strong generalization performance.
arXiv Detail & Related papers (2024-02-06T07:40:53Z) - TreeDQN: Learning to minimize Branch-and-Bound tree [78.52895577861327]
Branch-and-Bound is a convenient approach to solving optimization tasks in the form of Mixed-Integer Linear Programs.
The efficiency of the solver depends on the branching heuristic used to select a variable for splitting.
We propose a reinforcement learning method that can efficiently learn the branching heuristic.
arXiv Detail & Related papers (2023-06-09T14:01:26Z) - Branch Ranking for Efficient Mixed-Integer Programming via Offline Ranking-based Policy Learning [45.1011106869493]
We formulate learning to branch as an offline reinforcement learning (RL) problem.
We train a branching model named Branch Ranking via offline policy learning.
Experiments on synthetic MIP benchmarks and real-world tasks demonstrate that Branch Ranking is more efficient and robust.
arXiv Detail & Related papers (2022-07-26T15:34:10Z) - Reinforcement Learning for Branch-and-Bound Optimisation using Retrospective Trajectories [72.15369769265398]
Machine learning has emerged as a promising paradigm for branching.
We propose retro branching, a simple yet effective approach to RL for branching.
We outperform the current state-of-the-art RL branching algorithm by 3-5x and come within 20% of the best IL method's performance on MILPs with 500 constraints and 1000 variables.
arXiv Detail & Related papers (2022-05-28T06:08:07Z) - Learning to branch with Tree MDPs [6.754135838894833]
We propose to learn branching rules from scratch via Reinforcement Learning (RL).
We propose tree Markov Decision Processes, or tree MDPs, a generalization of temporal MDPs that provides a more suitable framework for learning to branch.
We demonstrate through computational experiments that tree MDPs improve the learning convergence, and offer a promising framework for tackling the learning-to-branch problem in MILPs.
arXiv Detail & Related papers (2022-05-23T07:57:32Z) - MurTree: Optimal Classification Trees via Dynamic Programming and Search [61.817059565926336]
We present a novel algorithm for learning optimal classification trees based on dynamic programming and search.
Our approach uses only a fraction of the time required by the state-of-the-art and can handle datasets with tens of thousands of instances.
arXiv Detail & Related papers (2020-07-24T17:06:55Z)
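As referenced in the tree-HMM entry above, the following is a minimal numpy sketch of the standard upward (pruning) dynamic program for an HMM on a tree with independent branches; the toy transition matrix, emission matrix, tree, and observations are illustrative assumptions, and the paper's extension to coupled branches is not reproduced here.

```python
import numpy as np

# Upward (pruning) pass for an HMM on a tree with independent branches:
# beta[v][s] = P(observations in subtree of v | state of v = s).
# The coupled-branch case handled in the paper requires joint child
# states and is not shown here.

A = np.array([[0.9, 0.1], [0.2, 0.8]])  # edge transitions P(child state | parent state)
E = np.array([[0.7, 0.3], [0.4, 0.6]])  # emissions P(observation | state)
pi = np.array([0.5, 0.5])               # root state prior

# Toy tree (node -> children) and one observation per node.
children = {0: [1, 2], 1: [], 2: [3, 4], 3: [], 4: []}
obs = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0}

def upward(v):
    beta = E[:, obs[v]].copy()          # emission likelihood at v, per state
    for c in children[v]:
        beta *= A @ upward(c)           # marginalize the child's state along the edge
    return beta

likelihood = pi @ upward(0)
print(f"P(observations) = {likelihood:.6f}")
```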