Neural Improvement Heuristics for Graph Combinatorial Optimization Problems
- URL: http://arxiv.org/abs/2206.00383v3
- Date: Sat, 7 Oct 2023 12:33:05 GMT
- Title: Neural Improvement Heuristics for Graph Combinatorial Optimization Problems
- Authors: Andoni I. Garmendia, Josu Ceberio, Alexander Mendiburu
- Abstract summary: We introduce a novel Neural Improvement (NI) model capable of handling graph-based problems where information is encoded in the nodes, edges, or both.
The presented model serves as a fundamental component for hill-climbing-based algorithms that guide the selection of neighborhood operations for each iteration.
- Score: 49.85111302670361
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recent advances in graph neural network architectures and increased
computation power have revolutionized the field of combinatorial optimization
(CO). Among the proposed models for CO problems, Neural Improvement (NI) models
have been particularly successful. However, existing NI approaches are limited
in their applicability to problems where crucial information is encoded in the
edges, as they only consider node features and node-wise positional encodings.
To overcome this limitation, we introduce a novel NI model capable of handling
graph-based problems where information is encoded in the nodes, edges, or both.
The presented model serves as a fundamental component for hill-climbing-based
algorithms that guide the selection of neighborhood operations for each
iteration. Experiments demonstrate that, for the Preference Ranking Problem, the
proposed model recommends neighborhood operations that outperform their
conventional counterparts, achieving performance in the 99th percentile. We also
extend the proposal to two well-known problems: the Traveling Salesman Problem
and the Graph Partitioning Problem, recommending operations in the 98th and
97th percentile, respectively.
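In other words, the NI model acts as a learned move scorer inside an otherwise standard hill-climbing loop. The following minimal Python sketch shows such a loop for the TSP with 2-opt moves; `ni_hill_climb`, `random_scorer`, and all parameter choices are illustrative stand-ins (the paper's scorer is a trained graph neural network over node and edge features), not the authors' implementation.

```python
# Minimal sketch, not the authors' code: hill climbing in which a neural
# improvement (NI) model scores candidate neighborhood operations and the
# most promising improving move is applied at each iteration.
import random
from typing import Callable, List, Tuple

Move = Tuple[int, int]  # a 2-opt move on a tour, given as an index pair (i, j)

def tour_length(tour: List[int], dist: List[List[float]]) -> float:
    # Total length of the closed tour.
    return sum(dist[tour[k]][tour[(k + 1) % len(tour)]] for k in range(len(tour)))

def apply_2opt(tour: List[int], move: Move) -> List[int]:
    # Reverse the tour segment between positions i and j.
    i, j = move
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

def ni_hill_climb(tour: List[int], dist: List[List[float]],
                  score_move: Callable[[List[int], Move], float],
                  max_iters: int = 100) -> List[int]:
    # Greedy hill climbing: the scorer ranks all 2-opt moves, and the first
    # ranked move that actually improves the tour is applied.
    best = list(tour)
    for _ in range(max_iters):
        moves = [(i, j) for i in range(1, len(best) - 1)
                 for j in range(i + 1, len(best))]
        ranked = sorted(moves, key=lambda m: score_move(best, m), reverse=True)
        for m in ranked:
            cand = apply_2opt(best, m)
            if tour_length(cand, dist) < tour_length(best, dist):
                best = cand
                break
        else:  # no improving move found: local optimum reached
            return best
    return best

def random_scorer(tour: List[int], move: Move) -> float:
    # Hypothetical stand-in for the trained NI model's move score.
    return random.random()

if __name__ == "__main__":
    random.seed(0)
    pts = [(random.random(), random.random()) for _ in range(8)]
    dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
            for ax, ay in pts]
    tour = list(range(8))
    improved = ni_hill_climb(tour, dist, random_scorer)
    print(tour_length(tour, dist), "->", tour_length(improved, dist))
```

With a trained scorer in place of `random_scorer`, the ranking step lets the search try only a handful of promising moves per iteration rather than evaluating the entire O(n^2) neighborhood.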
Related papers
- Assessing and Enhancing Graph Neural Networks for Combinatorial Optimization: Novel Approaches and Application in Maximum Independent Set Problems [0.0]
Graph Neural Networks (GNNs) show promise for solving combinatorial optimization (CO) problems.
This study investigates the effectiveness of GNNs in solving the maximum independent set (MIS) problem.
arXiv Detail & Related papers (2024-11-06T09:12:31Z)
- Haste Makes Waste: A Simple Approach for Scaling Graph Neural Networks [37.41604955004456]
Graph neural networks (GNNs) have demonstrated remarkable success in graph representation learning.
Various sampling approaches have been proposed to scale GNNs to applications with large-scale graphs.
arXiv Detail & Related papers (2024-10-07T18:29:02Z)
- Enhancing GNNs Performance on Combinatorial Optimization by Recurrent Feature Update [0.09986418756990156]
We introduce a novel algorithm, denoted hereafter as QRF-GNN, that leverages the power of GNNs to efficiently solve combinatorial optimization (CO) problems.
It relies on unsupervised learning, minimizing a loss function derived from a QUBO relaxation (a generic sketch of this idea appears after this list).
Experimental results show that QRF-GNN drastically surpasses existing learning-based approaches and is comparable to state-of-the-art conventional solvers.
arXiv Detail & Related papers (2024-07-23T13:34:35Z)
- Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy to tackle optimization problems involves the adoption of graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of GNNs and traditional solvers in the realm of CO, there is limited research on their integrated use and on the correlation between them within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which makes convergence faster, more stable, and more accurate.
Our model's network parameters are reduced to only 37% of the baseline's, and the average solution gap to the expert solutions decreases from 6.8% to 1.3%.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Graph Neural Network based scheduling: Improved throughput under a generalized interference model [3.911413922612859]
We propose a Graph Convolutional Network (GCN)-based scheduling algorithm for ad hoc networks.
A notable feature of this work is that the proposed method does not require a labelled data set (which is NP-hard to compute) for training the neural network.
arXiv Detail & Related papers (2021-10-31T10:36:11Z)
- Towards an Efficient and General Framework of Robust Training for Graph Neural Networks [96.93500886136532]
Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks.
Despite GNNs' impressive performance, it has been observed that carefully crafted perturbations on graph structures lead them to make wrong predictions.
We propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs.
arXiv Detail & Related papers (2020-02-25T15:17:58Z)
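As referenced in the QRF-GNN entry above, a "loss derived from QUBO relaxation" typically means relaxing binary variables x ∈ {0,1}^n to probabilities so that the QUBO energy becomes differentiable and can be minimized by gradient descent without labels. The sketch below illustrates this on a maximum independent set (MIS) instance; for brevity it optimizes per-node logits directly instead of a GNN's outputs, and the graph, penalty weight, and hyperparameters are arbitrary assumptions, not values from the paper.

```python
# Minimal sketch of unsupervised training on a QUBO relaxation (assumed
# setup, not the QRF-GNN implementation). For maximum independent set,
# H(x) = -sum_i x_i + P * sum_{(i,j) in E} x_i * x_j with x in {0,1}^n;
# relaxing x to p = sigmoid(theta) makes H differentiable.
import torch

torch.manual_seed(0)
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]  # a 4-cycle: its MIS has size 2
n, P = 4, 2.0

theta = (0.5 * torch.randn(n)).requires_grad_()  # stand-in for GNN outputs
opt = torch.optim.Adam([theta], lr=0.2)
for _ in range(300):
    p = torch.sigmoid(theta)                      # soft assignments in (0, 1)
    reward = -p.sum()                             # favor selecting nodes
    penalty = sum(p[i] * p[j] for i, j in edges)  # punish selected edges
    loss = reward + P * penalty                   # relaxed QUBO energy H(p)
    opt.zero_grad()
    loss.backward()
    opt.step()

x = (torch.sigmoid(theta) > 0.5).int().tolist()   # round to a binary solution
print("indicator vector:", x)  # expected: [1, 0, 1, 0] or [0, 1, 0, 1]
```

After training, the soft assignments are rounded (here by simple thresholding) to recover a binary solution; the penalty weight P must be large enough that violating an edge constraint never pays off.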