Exact Combinatorial Optimization with Temporo-Attentional Graph Neural
Networks
- URL: http://arxiv.org/abs/2311.13843v1
- Date: Thu, 23 Nov 2023 08:07:15 GMT
- Title: Exact Combinatorial Optimization with Temporo-Attentional Graph Neural
Networks
- Authors: Mehdi Seyfi, Amin Banitalebi-Dehkordi, Zirui Zhou, and Yong Zhang
- Abstract summary: We investigate two essential aspects of machine learning algorithms for optimization: temporal characteristics and attention.
We argue that for the task of variable selection in the branch-and-bound (B&B) algorithm, incorporating the temporal information as well as the bipartite graph attention improves the solver's performance.
- Score: 17.128882942475
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combinatorial optimization finds an optimal solution within a discrete set of
variables and constraints. The field has seen tremendous progress both in
research and industry. With the success of deep learning in the past decade, a
recent trend in combinatorial optimization has been to improve state-of-the-art
combinatorial optimization solvers by replacing key heuristic components with
machine learning (ML) models. In this paper, we investigate two essential
aspects of machine learning algorithms for combinatorial optimization: temporal
characteristics and attention. We argue that for the task of variable selection
in the branch-and-bound (B&B) algorithm, incorporating the temporal information
as well as the bipartite graph attention improves the solver's performance. We
support our claims with intuitions and numerical results over several standard
datasets used in the literature and competitions. Code is available at:
https://developer.huaweicloud.com/develop/aigallery/notebook/detail?id=047c6cf2-8463-40d7-b92f-7b2ca998e935
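The two ingredients named in the abstract can be illustrated concretely. Below is a minimal PyTorch sketch, not the authors' code (which is at the URL above): an attention pass over the constraint-variable bipartite graph produces a message for each variable, and a GRU cell carries a per-variable hidden state across successive B&B branching steps. All class names, dimensions, and the dense per-variable softmax loop are hypothetical simplifications.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BipartiteAttention(nn.Module):
    """One attention pass from constraint nodes to variable nodes."""
    def __init__(self, dim):
        super().__init__()
        self.q = nn.Linear(dim, dim)  # queries from variable features
        self.k = nn.Linear(dim, dim)  # keys from constraint features
        self.v = nn.Linear(dim, dim)  # values from constraint features

    def forward(self, var_feats, cons_feats, edge_index):
        # edge_index: (2, E) tensor of (constraint_id, variable_id) pairs
        cons_id, var_id = edge_index
        scores = (self.q(var_feats)[var_id] * self.k(cons_feats)[cons_id]).sum(-1)
        scores = scores / var_feats.size(-1) ** 0.5
        # softmax over each variable's incident constraints
        # (a plain loop for clarity; scatter-softmax ops are used in practice)
        alpha = torch.zeros_like(scores)
        for v in var_id.unique():
            m = var_id == v
            alpha[m] = F.softmax(scores[m], dim=0)
        out = torch.zeros_like(var_feats)
        out.index_add_(0, var_id, alpha.unsqueeze(-1) * self.v(cons_feats)[cons_id])
        return out

class TemporalBranchingPolicy(nn.Module):
    """Scores branching candidates, carrying a GRU state across B&B steps."""
    def __init__(self, dim):
        super().__init__()
        self.attn = BipartiteAttention(dim)
        self.gru = nn.GRUCell(dim, dim)  # temporal memory, one state per variable
        self.score = nn.Linear(dim, 1)

    def forward(self, var_feats, cons_feats, edge_index, hidden):
        msg = self.attn(var_feats, cons_feats, edge_index)
        hidden = self.gru(msg, hidden)  # fold in information from earlier steps
        return self.score(hidden).squeeze(-1), hidden
```

At each branching decision the solver would call the policy with the current node's bipartite graph and the hidden state from the previous step (initialized to zeros at the root), then branch on the variable with the highest score.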
Related papers
- Context-Aware Ensemble Learning for Time Series [11.716677452529114]
We introduce a new approach using a meta learner that effectively combines the base model predictions; as its input it uses a superset of features, the union of the base models' feature vectors, rather than the predictions themselves.
Our model does not feed the base models' predictions into a machine learning algorithm, but instead chooses the best possible combination at each time step based on the state of the problem.
arXiv Detail & Related papers (2022-11-30T10:36:13Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence and makes it more stable and accurate.
Our model's network parameters are reduced to only 37% of the state-of-the-art model's, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Optimizer Amalgamation [124.33523126363728]
We are motivated to study a new problem named Optimizer Amalgamation: how can we best combine a pool of "teacher" optimizers into a single "student" optimizer with stronger problem-specific performance?
First, we define three differentiable mechanisms to amalgamate a pool of analytical optimizers by gradient descent.
To reduce the variance of the amalgamation, we also explore methods that stabilize the process by perturbing the target.
arXiv Detail & Related papers (2022-03-12T16:07:57Z)
- The Machine Learning for Combinatorial Optimization Competition (ML4CO): Results and Insights [59.93939636422896]
The ML4CO competition aims at improving state-of-the-art combinatorial optimization solvers by replacing key heuristic components with machine learning models.
The competition featured three challenging tasks: finding the best feasible solution, producing the tightest optimality certificate, and giving an appropriate routing configuration.
arXiv Detail & Related papers (2022-03-04T17:06:00Z)
- Yordle: An Efficient Imitation Learning for Branch and Bound [1.6758573326215689]
This work presents our solution and insights gained by team qqy in the 2021 NeurIPS Machine Learning for Combinatorial Optimization (ML4CO) competition.
Our solution is a highly efficient imitation learning framework for performance improvement of Branch and Bound (B&B), named Yordle.
In our experiments, Yordle greatly outperforms the baseline algorithm adopted by the competition while requiring significantly less training time and data for the decision model; a generic behavioral-cloning step of this kind is sketched after this list.
arXiv Detail & Related papers (2022-02-02T14:46:30Z)
- ML4CO: Is GCNN All You Need? Graph Convolutional Neural Networks Produce Strong Baselines For Combinatorial Optimization Problems, If Tuned and Trained Properly, on Appropriate Data [8.09193285529236]
This paper summarizes the solution and lessons learned by the Huawei EI-OROAS team in the 2021 NeurIPS Machine Learning for Combinatorial Optimization (ML4CO) competition.
Our team's submission achieved second place in the final ranking, finishing very close to the first spot.
We argue that a simple Graph Convolutional Neural Network (GCNN) can achieve state-of-the-art results if trained and tuned properly.
arXiv Detail & Related papers (2021-12-22T22:40:13Z)
- A Bi-Level Framework for Learning to Solve Combinatorial Optimization on Graphs [91.07247251502564]
We propose a hybrid approach that combines the best of both worlds: a bi-level framework with an upper-level learning method that optimizes the graph.
Such a bi-level approach simplifies the learning on the original hard CO and can effectively mitigate the demand for model capacity.
arXiv Detail & Related papers (2021-06-09T09:18:18Z)
- Combinatorial optimization and reasoning with graph neural networks [7.8107109904672365]
Combinatorial optimization is a well-established area in operations research and computer science.
Recent years have seen a surge of interest in using machine learning, especially graph neural networks (GNNs), as a key building block for combinatorial optimization tasks.
arXiv Detail & Related papers (2021-02-18T18:47:20Z)
- Stepwise Model Selection for Sequence Prediction via Deep Kernel Learning [100.83444258562263]
We propose a novel Bayesian optimization (BO) algorithm to tackle the challenge of model selection in the sequence-prediction setting.
In order to solve the resulting multiple black-box function optimization problem jointly and efficiently, we exploit potential correlations among black-box functions.
We are the first to formulate the problem of stepwise model selection (SMS) for sequence prediction, and to design and demonstrate an efficient joint-learning algorithm for this purpose.
arXiv Detail & Related papers (2020-01-12T09:42:19Z)
- Learning fine-grained search space pruning and heuristics for combinatorial optimization [5.72274610208488]
We propose a framework for leveraging machine learning techniques to scale-up exact optimization algorithms.
Our framework learns the relatively simpler task of pruning elements of the input instance in order to reduce the problem size.
We show that our framework can prune a large fraction of the input graph and still detect almost all of the maximum cliques.
arXiv Detail & Related papers (2020-01-05T13:10:39Z)
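Yordle's exact training procedure is described in its paper; purely as an illustration of the imitation-learning idea shared by several entries above, a single behavioral-cloning update against an expert (e.g., strong branching) might look like the following. The batch fields and the policy interface (reused from the earlier sketch) are hypothetical.

```python
import torch
import torch.nn.functional as F

def imitation_step(policy, optimizer, batch):
    """One behavioral-cloning update: match the expert's branching choice
    recorded at a B&B node. All `batch` keys are hypothetical."""
    logits, _ = policy(batch["var_feats"], batch["cons_feats"],
                       batch["edge_index"], batch["hidden"])
    # restrict the softmax to the branching candidates at this node
    logits = logits.masked_fill(~batch["candidate_mask"], float("-inf"))
    loss = F.cross_entropy(logits.unsqueeze(0), batch["expert_choice"].view(1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Looping this step over a dataset of nodes labeled by the expert yields a branching policy that approximates strong branching at a fraction of its cost.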