ML4CO: Is GCNN All You Need? Graph Convolutional Neural Networks Produce
Strong Baselines For Combinatorial Optimization Problems, If Tuned and
Trained Properly, on Appropriate Data
- URL: http://arxiv.org/abs/2112.12251v1
- Date: Wed, 22 Dec 2021 22:40:13 GMT
- Title: ML4CO: Is GCNN All You Need? Graph Convolutional Neural Networks Produce
Strong Baselines For Combinatorial Optimization Problems, If Tuned and
Trained Properly, on Appropriate Data
- Authors: Amin Banitalebi-Dehkordi and Yong Zhang
- Abstract summary: This paper summarizes the solution and lessons learned by the Huawei EI-OROAS team in the 2021 NeurIPS Machine Learning for Combinatorial Optimization (ML4CO) competition.
Our team's submission finished second in the final ranking, very close behind the first-place entry.
We argue that a simple Graph Convolutional Neural Network (GCNN) can achieve state-of-the-art results if trained and tuned properly.
- Score: 8.09193285529236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The 2021 NeurIPS Machine Learning for Combinatorial Optimization (ML4CO)
competition was designed with the goal of improving state-of-the-art
combinatorial optimization solvers by replacing key heuristic components with
machine learning models. The competition's main scientific question was the
following: is machine learning a viable option for improving traditional
combinatorial optimization solvers on specific problem distributions, when
historical data is available? This was motivated by the fact that in many
practical scenarios, the data changes only slightly between repeated instances
of a combinatorial optimization problem, a setting in which machine learning
models are particularly powerful. This paper summarizes the
solution and lessons learned by the Huawei EI-OROAS team in the dual task of
the competition. Our team's submission finished second in the final ranking,
very close behind the first-place entry. In addition, our
solution was ranked first consistently for several weekly leaderboard updates
before the final evaluation. We provide insights gained from a large number of
experiments, and argue that a simple Graph Convolutional Neural Network (GCNN)
can achieve state-of-the-art results if trained and tuned properly.
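The "simple GCNN" in question operates on the bipartite constraint-variable graph of a mixed-integer program, as in prior learning-to-branch work: constraint nodes and variable nodes exchange messages along the incidence structure, and a final head scores variables for branching. The sketch below is a minimal illustration of that message-passing structure only; the function name, feature dimensions, and weights are invented for illustration (weights are random rather than trained by imitation learning, as the actual submission's would be):

```python
import numpy as np

rng = np.random.default_rng(0)

def gcnn_variable_scores(A, var_feats, cons_feats, hidden=8):
    """One round of bipartite half-convolutions (variables -> constraints ->
    variables), followed by a linear scoring head over the variables."""
    dv, dc = var_feats.shape[1], cons_feats.shape[1]

    # Randomly initialized weights -- a real model would learn these, e.g.
    # by imitating a strong-branching expert.
    W_v = rng.normal(size=(dv, hidden))
    W_c = rng.normal(size=(dc + hidden, hidden))
    W_out = rng.normal(size=(dv + hidden, 1))

    h_v = np.tanh(var_feats @ W_v)                          # embed variables
    # Variable -> constraint messages, mean-aggregated over incident edges.
    msg_to_cons = A @ h_v / np.maximum(A.sum(1, keepdims=True), 1)
    h_c = np.tanh(np.concatenate([cons_feats, msg_to_cons], axis=1) @ W_c)
    # Constraint -> variable messages back along the same edges.
    msg_to_vars = A.T @ h_c / np.maximum(A.sum(0)[:, None], 1)
    scores = np.concatenate([var_feats, msg_to_vars], axis=1) @ W_out
    # Softmax over candidate variables -> a branching distribution.
    e = np.exp(scores - scores.max())
    return (e / e.sum()).ravel()

# Toy instance: incidence of 3 constraints over 4 variables.
A = np.array([[1., 1., 0., 0.],
              [0., 1., 1., 0.],
              [1., 0., 0., 1.]])
probs = gcnn_variable_scores(A, rng.normal(size=(4, 5)), rng.normal(size=(3, 2)))
```

The point of the bipartite layout is that each half-convolution only mixes information along actual constraint memberships, so the architecture is permutation-invariant and scales with the number of nonzeros rather than with a dense graph.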
Related papers
- Towards a General GNN Framework for Combinatorial Optimization [14.257210124854863]
We introduce a novel GNN architecture which leverages a complex filter bank and localized attention mechanisms designed to solve CO problems on graphs.
We show how our method differentiates itself from prior GNN-based CO solvers and how it can be effectively applied to the maximum clique, minimum dominating set, and maximum cut problems.
arXiv Detail & Related papers (2024-05-31T00:02:07Z)
- Exact Combinatorial Optimization with Temporo-Attentional Graph Neural Networks [17.128882942475]
We investigate two essential aspects of machine learning algorithms for optimization: temporal characteristics and attention.
We argue that for the task of variable selection in the branch-and-bound (B&B) algorithm, incorporating the temporal information as well as the bipartite graph attention improves the solver's performance.
arXiv Detail & Related papers (2023-11-23T08:07:15Z)
- Unsupervised Learning for Combinatorial Optimization Needs Meta-Learning [14.86600327306136]
A general framework of unsupervised learning for combinatorial optimization (CO) is to train a neural network (NN) whose output gives a problem solution by directly optimizing the CO objective.
We propose a new objective of unsupervised learning for CO where the goal of learning is to search for good initialization for future problem instances rather than give direct solutions.
We observe that even just the initial solution given by our model before fine-tuning can significantly outperform the baselines under various evaluation settings.
arXiv Detail & Related papers (2023-01-08T22:14:59Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence more stably and accurately.
Our model's network parameters are reduced to only 37% of the compared models', and the average solution gap of our model towards the expert solutions decreases from 6.8% to 1.3%.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- The Machine Learning for Combinatorial Optimization Competition (ML4CO): Results and Insights [59.93939636422896]
ML4CO aims at improving state-of-the-art combinatorial optimization solvers by replacing key heuristic components.
The competition featured three challenging tasks: finding the best feasible solution, producing the tightest optimality certificate, and giving an appropriate routing configuration.
arXiv Detail & Related papers (2022-03-04T17:06:00Z)
- Yordle: An Efficient Imitation Learning for Branch and Bound [1.6758573326215689]
This work presents our solution and insights gained by team qqy in the 2021 NeurIPS Machine Learning for Combinatorial Optimization (ML4CO) competition.
Our solution is a highly efficient imitation learning framework for performance improvement of Branch and Bound (B&B), named Yordle.
In our experiments, Yordle greatly outperforms the baseline algorithm adopted by the competition while requiring significantly less time and amounts of data to train the decision model.
arXiv Detail & Related papers (2022-02-02T14:46:30Z)
- GNNRank: Learning Global Rankings from Pairwise Comparisons via Directed Graph Neural Networks [68.61934077627085]
We introduce GNNRank, a modeling framework compatible with any GNN capable of learning digraph embeddings.
We show that our methods attain competitive and often superior performance compared with existing approaches.
arXiv Detail & Related papers (2022-02-01T04:19:50Z)
- A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs [91.07247251502564]
We propose a hybrid approach to combine the best of the two worlds, in which a bi-level framework is developed with an upper-level learning method to optimize the graph.
Such a bi-level approach simplifies the learning on the original hard CO and can effectively mitigate the demand for model capacity.
arXiv Detail & Related papers (2021-06-09T09:18:18Z)
- Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a MIP solver, generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using conventional methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.