Yordle: An Efficient Imitation Learning for Branch and Bound
- URL: http://arxiv.org/abs/2202.01896v1
- Date: Wed, 2 Feb 2022 14:46:30 GMT
- Title: Yordle: An Efficient Imitation Learning for Branch and Bound
- Authors: Qingyu Qu, Xijun Li and Yunfan Zhou
- Abstract summary: This work presents our solution and insights gained by team qqy in the 2021 NeurIPS Machine Learning for Combinatorial Optimization (ML4CO) competition.
Our solution is a highly efficient imitation learning framework for performance improvement of Branch and Bound (B&B), named Yordle.
In our experiments, Yordle greatly outperforms the baseline algorithm adopted by the competition while requiring significantly less time and data to train the decision model.
- Score: 1.6758573326215689
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Combinatorial optimization problems have attracted extensive
research interest due to their huge application potential. In practice, there
are highly redundant patterns and characteristics when solving combinatorial
optimization problems, which can be captured by machine learning models. Thus,
the 2021 NeurIPS Machine Learning for Combinatorial Optimization (ML4CO)
competition was proposed with the goal of improving state-of-the-art
combinatorial optimization solvers by replacing key heuristic components with
machine learning techniques.
This work presents our solution and insights gained by team qqy in the dual
task of the competition. Our solution is a highly efficient imitation learning
framework for performance improvement of Branch and Bound (B&B), named Yordle.
It employs a hybrid sampling method and an efficient data selection method,
which not only accelerates the model training but also improves the decision
quality during branching variable selection. In our experiments, Yordle
greatly outperforms the baseline algorithm adopted by the competition while
requiring significantly less time and data to train the decision model.
Specifically, we use only a quarter of the data required by the baseline
algorithm, yet achieve a score around 50% higher. The proposed framework
Yordle won the championship of the student leaderboard.
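The abstract names a hybrid sampling method and an efficient data selection method without detailing either, so the Python sketch below is only one plausible reading: states are labeled by an expensive expert (e.g., strong branching) whether the expert or the current learner drove the search there, and only the samples with the clearest expert preference are kept for training. The policy architecture, the margin-based filter, and every function name are illustrative assumptions, not Yordle's actual code.

```python
import random

import torch
import torch.nn as nn

def expert_scores(state, n_cands=10):
    # Stand-in for an expensive expert such as full strong branching.
    rng = random.Random(state)
    return [rng.random() for _ in range(n_cands)]

def collect_hybrid(policy, n_states):
    # Hybrid sampling (assumed reading): the expert labels every visited
    # state, whether the expert or the current learner drove the search
    # there; with synthetic states the learner call is only illustrative.
    data = []
    for state in range(n_states):
        scores = expert_scores(state)
        feats = torch.tensor(scores)                    # toy node features
        if random.random() < 0.5:
            policy(feats)   # a learner-driven rollout would change the path
        label = max(range(len(scores)), key=scores.__getitem__)
        top2 = sorted(scores, reverse=True)[:2]
        data.append((feats, label, top2[0] - top2[1]))  # keep expert margin
    return data

def select_informative(data, keep=0.25):
    # Data selection (assumed reading): keep the quarter of samples where
    # the expert's preference is clearest (largest score margin).
    return sorted(data, key=lambda t: t[2], reverse=True)[: int(len(data) * keep)]

policy = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 10))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for feats, label, _ in select_informative(collect_hybrid(policy, 400)):
    loss = nn.functional.cross_entropy(policy(feats).unsqueeze(0),
                                       torch.tensor([label]))
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Training on a quarter of the collected states mirrors the abstract's 1/4-data claim, but the specific filtering criterion here is a guess.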
Related papers
- Exact Combinatorial Optimization with Temporo-Attentional Graph Neural
Networks [17.128882942475]
We investigate two essential aspects of machine learning algorithms for optimization: temporal characteristics and attention.
We argue that for the task of variable selection in the branch-and-bound (B&B) algorithm, incorporating the temporal information as well as the bipartite graph attention improves the solver's performance.
arXiv Detail & Related papers (2023-11-23T08:07:15Z)
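The summary above gives only the two ingredients, temporal information and bipartite graph attention, so this is a minimal sketch of one way to combine them: variables attend over constraint embeddings, and a GRU carries each variable's embedding across successive B&B states. Shapes, layer sizes, and the GRU choice are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class BipartiteAttention(nn.Module):
    """Variable nodes attend over constraint-node embeddings."""
    def __init__(self, d=32):
        super().__init__()
        self.q, self.k, self.v = (nn.Linear(d, d) for _ in range(3))

    def forward(self, var_h, cons_h):
        att = torch.softmax(
            self.q(var_h) @ self.k(cons_h).T / var_h.shape[-1] ** 0.5, dim=-1)
        return att @ self.v(cons_h)

class TemporalBrancher(nn.Module):
    """Carry per-variable embeddings across consecutive B&B states."""
    def __init__(self, d=32):
        super().__init__()
        self.att = BipartiteAttention(d)
        self.gru = nn.GRUCell(d, d)
        self.score = nn.Linear(d, 1)

    def forward(self, var_h, cons_h, memory):
        h = self.att(var_h, cons_h)
        memory = self.gru(h, memory)           # temporal update per variable
        return self.score(memory).squeeze(-1), memory

n_vars, n_cons, d = 6, 4, 32
model = TemporalBrancher(d)
memory = torch.zeros(n_vars, d)
for step in range(3):                          # three successive B&B nodes
    scores, memory = model(torch.randn(n_vars, d),
                           torch.randn(n_cons, d), memory)
branch_var = scores.argmax().item()            # variable to branch on
```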
- VeLO: Training Versatile Learned Optimizers by Scaling Up [67.90237498659397]
We leverage the same scaling approach behind the success of deep learning to learn versatile optimizers.
We train an optimizer for deep learning which is itself a small neural network that ingests gradients and outputs parameter updates.
We open source our learned optimizer, meta-training code, the associated train and test data, and an extensive benchmark suite with baselines at velo-code.io.
arXiv Detail & Related papers (2022-11-17T18:39:07Z)
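The VeLO summary above describes a learned optimizer that is itself a small network ingesting gradients and emitting parameter updates. Below is a toy sketch of that idea only; the two-feature input (gradient and momentum), the MLP, and the fixed output scale are assumptions, and the meta-training that VeLO actually performs is omitted.

```python
import torch
import torch.nn as nn

class LearnedOptimizer(nn.Module):
    """A tiny network maps per-weight features to per-weight updates."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

    def step(self, params, momentum, beta=0.9):
        with torch.no_grad():
            for p, m in zip(params, momentum):
                m.mul_(beta).add_(p.grad, alpha=1 - beta)
                feats = torch.stack([p.grad, m], dim=-1)    # per-weight features
                p.add_(self.net(feats).squeeze(-1) * 1e-2)  # NN-proposed update
                p.grad = None

# Use it to train a tiny regression model (the "optimizee"); in VeLO the
# optimizer network itself would first be meta-trained, which we skip here.
opt_net = LearnedOptimizer()
w = torch.randn(8, requires_grad=True)
momentum = [torch.zeros(8)]
x, y = torch.randn(64, 8), torch.randn(64)
for _ in range(5):
    loss = ((x @ w - y) ** 2).mean()
    loss.backward()
    opt_net.step([w], momentum)
```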
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence and yields more stable and accurate training.
Compared with the state-of-the-art baseline, our model's network parameters are reduced to only 37%, and the average solution gap to the expert solutions decreases from 6.8% to 1.3%.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named optimizer amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer that can have stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of analytical optimizers by gradient descent.
To reduce the variance of the amalgamation process, we also explore methods that stabilize it by perturbing the target.
arXiv Detail & Related papers (2022-03-12T16:07:57Z)
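As a loose illustration of amalgamating analytical optimizers by gradient descent with a perturbed target, the sketch below learns softmax weights over two hand-written "teacher" update rules by regressing onto a noisy target update. The teachers, the distillation loss, and the perturbation scheme are all assumptions for the sketch, not the paper's mechanisms.

```python
import torch

def teacher_updates(grad):
    sgd = -grad                        # teacher 1: plain gradient step
    sign = -grad.sign()                # teacher 2: sign-based step
    return torch.stack([sgd, sign])    # (n_teachers, n_params)

logits = torch.zeros(2, requires_grad=True)    # student's mixing weights
opt = torch.optim.Adam([logits], lr=0.1)
for step in range(200):
    grad = torch.randn(8)
    target = -0.7 * grad - 0.3 * grad.sign()
    target = target + 0.01 * torch.randn(8)    # perturbed target (stabilizer)
    mix = torch.softmax(logits, dim=0) @ teacher_updates(grad)
    loss = ((mix - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print(torch.softmax(logits, dim=0))            # learned teacher weights, ~[0.7, 0.3]
```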
- The Machine Learning for Combinatorial Optimization Competition (ML4CO): Results and Insights [59.93939636422896]
The ML4CO competition aims at improving state-of-the-art combinatorial optimization solvers by replacing key heuristic components with machine learning models.
The competition featured three challenging tasks: finding the best feasible solution, producing the tightest optimality certificate, and giving an appropriate routing configuration.
arXiv Detail & Related papers (2022-03-04T17:06:00Z)
- An Improved Reinforcement Learning Algorithm for Learning to Branch [12.27934038849211]
Branch-and-bound (B&B) is a general and widely used method for optimization.
In this paper, we propose a novel reinforcement learning-based B&B algorithm.
We evaluate the performance of the proposed algorithm over three public research benchmarks.
arXiv Detail & Related papers (2022-01-17T04:50:11Z)
- ML4CO: Is GCNN All You Need? Graph Convolutional Neural Networks Produce Strong Baselines For Combinatorial Optimization Problems, If Tuned and Trained Properly, on Appropriate Data [8.09193285529236]
This paper summarizes the solution and lessons learned by the Huawei EI-OROAS team in the 2021 NeurIPS Machine Learning for Combinatorial Optimization (ML4CO) competition.
Our team's submission achieved second place in the final ranking, very close to the first spot.
We argue that a simple Graph Convolutional Neural Network (GCNN) can achieve state-of-the-art results if trained and tuned properly.
arXiv Detail & Related papers (2021-12-22T22:40:13Z)
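A minimal bipartite GCNN pass in the spirit of the Gasse-style baseline the summary above refers to: constraint and variable nodes are embedded, messages flow along constraint-variable incidences, and each variable receives a branching logit. Feature sizes, the single message round, and the sum aggregation are illustrative choices, not the team's tuned configuration.

```python
import torch
import torch.nn as nn

class BipartiteGCNN(nn.Module):
    def __init__(self, d_var=4, d_cons=3, d=32):
        super().__init__()
        self.var_emb = nn.Linear(d_var, d)
        self.cons_emb = nn.Linear(d_cons, d)
        self.msg = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU())
        self.out = nn.Linear(d, 1)

    def forward(self, var_x, cons_x, edges):
        # edges: (2, n_edges) rows of (constraint index, variable index)
        v, c = self.var_emb(var_x), self.cons_emb(cons_x)
        m = self.msg(torch.cat([c[edges[0]], v[edges[1]]], dim=-1))
        agg = torch.zeros_like(v).index_add_(0, edges[1], m)  # sum to variables
        return self.out(v + agg).squeeze(-1)   # one branching logit per variable

model = BipartiteGCNN()
edges = torch.tensor([[0, 0, 1, 2], [0, 1, 1, 2]])  # toy incidence structure
logits = model(torch.randn(3, 4), torch.randn(3, 3), edges)
branch_var = logits.argmax().item()
```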
- A Novel Surrogate-assisted Evolutionary Algorithm Applied to Partition-based Ensemble Learning [0.0]
We propose a novel surrogate-assisted evolutionary algorithm for solving expensive optimization problems.
We integrate a surrogate model, which is used for fitness value estimation, into a state-of-the-art P3-like variant of the Gene-pool Optimal Mixing Evolutionary Algorithm (GOMEA).
We test the proposed algorithm on an ensemble learning problem.
arXiv Detail & Related papers (2021-04-16T11:51:18Z)
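The summary above describes plugging a surrogate fitness estimator into an evolutionary algorithm. The sketch below shows the generic pattern only (not GOMEA or its P3 variant): a cheap nearest-neighbour model screens offspring, and only the most promising few receive the expensive true evaluation. The toy objective and all constants are assumptions.

```python
import random

def true_fitness(x):                  # expensive evaluation (stand-in)
    return -sum((xi - 0.5) ** 2 for xi in x)

def surrogate(x, archive):            # 1-nearest-neighbour fitness estimate
    return min(archive,
               key=lambda a: sum((ai - xi) ** 2 for ai, xi in zip(a[0], x)))[1]

random.seed(0)
pop = [[random.random() for _ in range(5)] for _ in range(10)]
archive = [(x, true_fitness(x)) for x in pop]        # truly evaluated designs
for gen in range(20):
    children = [[min(1.0, max(0.0, xi + random.gauss(0, 0.1)))
                 for xi in random.choice(pop)] for _ in range(20)]
    children.sort(key=lambda x: surrogate(x, archive), reverse=True)
    for x in children[:3]:                           # true eval for top-3 only
        archive.append((x, true_fitness(x)))
    pop = [x for x, _ in sorted(archive, key=lambda a: a[1], reverse=True)[:10]]
best = max(archive, key=lambda a: a[1])
```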
- Learning to Optimize: A Primer and A Benchmark [94.29436694770953]
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
arXiv Detail & Related papers (2021-03-23T20:46:20Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates a Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
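One schematic reading of the self-directed online loop above: a DNN is refit on all designs evaluated so far, gradient descent on the surrogate proposes the next design, and the expensive solver evaluates it. A cheap quadratic stands in for the FEM call, and the network and step-size choices are illustrative assumptions.

```python
import torch
import torch.nn as nn

def fem(x):                                    # expensive simulation stand-in
    return ((x - 0.3) ** 2).sum(dim=-1, keepdim=True)

net = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-2)
X = torch.rand(8, 4)
Y = fem(X)                                     # initial evaluated designs
for it in range(10):
    for _ in range(100):                       # refit surrogate on all data
        loss = nn.functional.mse_loss(net(X), Y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    x = X[Y.argmin()].clone().requires_grad_(True)
    for _ in range(50):                        # descend on the surrogate
        g, = torch.autograd.grad(net(x).sum(), x)
        x = (x - 0.05 * g).detach().requires_grad_(True)
    x_new = x.detach()[None]                   # query the expensive solver
    X, Y = torch.cat([X, x_new]), torch.cat([Y, fem(x_new)])
```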
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.