Graph Neural Network Guided Local Search for the Traveling Salesperson Problem
- URL: http://arxiv.org/abs/2110.05291v2
- Date: Tue, 12 Oct 2021 09:47:13 GMT
- Title: Graph Neural Network Guided Local Search for the Traveling Salesperson Problem
- Authors: Benjamin Hudson and Qingbiao Li and Matthew Malencia and Amanda Prorok
- Abstract summary: We present a hybrid data-driven approach for solving the Traveling Salesperson Problem (TSP) based on Graph Neural Networks (GNNs) and Guided Local Search (GLS)
Our approach finds solutions with an average optimality gap of 2.5%, a 10x improvement over the next best learning-based benchmark.
- Score: 5.906031288935515
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solutions to the Traveling Salesperson Problem (TSP) have practical
applications to processes in transportation, logistics, and automation, yet
must be computed with minimal delay to satisfy the real-time nature of the
underlying tasks. However, solving large TSP instances quickly without
sacrificing solution quality remains challenging for current approximate
algorithms. To close this gap, we present a hybrid data-driven approach for
solving the TSP based on Graph Neural Networks (GNNs) and Guided Local Search
(GLS). Our model predicts the regret of including each edge of the problem
graph in the solution; GLS uses these predictions in conjunction with the
original problem graph to find solutions. Our experiments demonstrate that this
approach converges to optimal solutions at a faster rate than state-of-the-art
learning-based approaches and non-learning GLS algorithms for the TSP, notably
finding optimal solutions to 96% of the 50-node problem set, 7% more than the
next best benchmark, and to 20% of the 100-node problem set, 4.5x more than the
next best benchmark. When generalizing from 20-node problems to the 100-node
problem set, our approach finds solutions with an average optimality gap of
2.5%, a 10x improvement over the next best learning-based benchmark.
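The regret-guided GLS loop described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the `regret` matrix stands in for the GNN's per-edge regret predictions, the augmented objective adds `lam`-weighted penalties to the tour length, and the classic GLS utility rule (feature cost divided by one plus its penalty count) decides which used edge to penalize, with predicted regret playing the role of the feature cost.

```python
import math

def tour_length(tour, dist):
    """Length of a closed tour under the distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt_augmented(tour, dist, penalty, lam):
    """2-opt local search on the augmented objective
    g(tour) = length(tour) + lam * sum of penalties on used edges."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue  # reversing the whole tour changes nothing
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = (dist[a][c] + dist[b][d] - dist[a][b] - dist[c][d]
                         + lam * (penalty[a][c] + penalty[b][d]
                                  - penalty[a][b] - penalty[c][d]))
                if delta < -1e-9:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

def guided_local_search(coords, regret, lam=0.1, iters=30):
    """GLS where the penalized feature cost is a predicted edge regret."""
    n = len(coords)
    dist = [[math.dist(coords[i], coords[j]) for j in range(n)] for i in range(n)]
    penalty = [[0.0] * n for _ in range(n)]
    tour = list(range(n))
    best = tour[:]
    for _ in range(iters):
        tour = two_opt_augmented(tour, dist, penalty, lam)
        if tour_length(tour, dist) < tour_length(best, dist):
            best = tour[:]
        # Penalize the used edge with the highest GLS utility:
        # predicted regret divided by (1 + current penalty count).
        utils = []
        for i in range(n):
            a, b = tour[i], tour[(i + 1) % n]
            utils.append((regret[a][b] / (1.0 + penalty[a][b]), a, b))
        _, a, b = max(utils)
        penalty[a][b] += 1.0
        penalty[b][a] += 1.0
    return best, tour_length(best, dist)
```

In the paper the regret values come from a trained GNN; any per-edge cost estimate can be plugged in for `regret` to experiment with the search dynamics.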
Related papers
- BalMCTS: Balancing Objective Function and Search Nodes in MCTS for Constraint Optimization Problems [7.196057722218442]
Constraint Optimization Problems (COP) pose intricate challenges and are usually addressed through Branch and Bound (B&B) methods.
We propose a novel neural-network algorithm based on depth-first search for solving COP.
Our method identifies feasible solutions with a gap of less than 17.63% within the first 5 feasible solutions.
arXiv Detail & Related papers (2023-12-26T03:09:08Z)
- DIFUSCO: Graph-based Diffusion Solvers for Combinatorial Optimization [51.517956081644186]
We introduce a new graph-based diffusion framework, namely DIFUSCO.
Our framework casts NP-complete (NPC) problems as discrete {0, 1}-vector optimization problems.
For the MIS problem, DIFUSCO outperforms the previous state-of-the-art neural solver on the challenging SATLIB benchmark.
arXiv Detail & Related papers (2023-02-16T11:13:36Z)
- Learning to repeatedly solve routing problems [5.08128537391027]
We present a learned heuristic for the reoptimization of a problem after a minor change in its data.
Given the edges of an original solution, the goal is to predict and fix the ones that have a high chance of remaining in an optimal solution.
This partial prediction of the solution reduces the complexity of the problem and speeds up its resolution.
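The fix-and-reoptimize idea above can be sketched as follows. This is an illustrative function, not the paper's method: given the old tour and a per-edge probability of remaining in the optimal solution (which would come from the learned model), edges above a threshold are frozen into fragments and only the fragment connections are rebuilt, here with a simple greedy rule. Fragment orientation and the cycle-closing edge are ignored for brevity.

```python
import math

def fix_and_rebuild(tour, keep_prob, coords, threshold=0.9):
    """Freeze tour edges whose predicted keep-probability exceeds the
    threshold, then greedily reconnect the resulting fragments."""
    n = len(tour)
    # Split the old tour into fragments of consecutively frozen edges
    # (the cycle-closing edge is never frozen, for simplicity).
    fragments, current = [], [tour[0]]
    for i in range(n):
        a, b = tour[i], tour[(i + 1) % n]
        if i < n - 1 and keep_prob[(a, b)] >= threshold:
            current.append(b)
        else:
            fragments.append(current)
            if i < n - 1:
                current = [b]
    # Greedy reconnection: repeatedly attach the fragment whose first
    # node is nearest to the end of the partial tour.
    new_tour = fragments.pop(0)
    while fragments:
        tail = coords[new_tour[-1]]
        k = min(range(len(fragments)),
                key=lambda j: math.dist(tail, coords[fragments[j][0]]))
        new_tour.extend(fragments.pop(k))
    return new_tour
```

Only the free connections are recomputed, which is what reduces the complexity of the reoptimization relative to solving from scratch.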
arXiv Detail & Related papers (2022-12-15T19:33:54Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which makes convergence faster, more stable, and more accurate.
Our model's network parameters are reduced to only 37% of the baseline's, and the solution gap of our model relative to the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Neural Improvement Heuristics for Graph Combinatorial Optimization Problems [49.85111302670361]
We introduce a novel Neural Improvement (NI) model capable of handling graph-based problems where information is encoded in the nodes, edges, or both.
The presented model serves as a fundamental component for hill-climbing-based algorithms that guide the selection of neighborhood operations at each iteration.
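The operator-selection idea can be illustrated with a toy Max-Cut hill climber (Max-Cut is my choice of example, and the NI model is stubbed by an exact gain computation; the paper's contribution is to learn this ranking rather than compute it):

```python
def cut_value(adj, assign):
    """Total weight of edges crossing the partition."""
    return sum(w for (u, v), w in adj.items() if assign[u] != assign[v])

def exact_gain(adj, assign, v):
    # Stand-in scorer: the true change in cut value from flipping node v.
    # A Neural Improvement model would predict this ranking instead.
    g = 0
    for (u, w), wt in adj.items():
        if v in (u, w):
            other = w if u == v else u
            g += wt if assign[other] == assign[v] else -wt
    return g

def hill_climb(adj, n, score=exact_gain, steps=50):
    """Hill climbing where a scoring model ranks the candidate
    neighborhood operations (single-node flips) at each step."""
    assign = [0] * n
    for _ in range(steps):
        best = max(range(n), key=lambda v: score(adj, assign, v))
        if score(adj, assign, best) <= 0:
            break  # no operation the model rates as improving
        assign[best] ^= 1
    return assign
```

Replacing `exact_gain` with a learned scorer turns this into the NI-guided scheme while leaving the hill-climbing loop unchanged.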
arXiv Detail & Related papers (2022-06-01T10:35:29Z)
- Learning to Solve Combinatorial Graph Partitioning Problems via Efficient Exploration [72.15369769265398]
Experimentally, ECORD achieves a new SOTA for RL algorithms on the Maximum Cut problem.
Compared to the nearest competitor, ECORD reduces the optimality gap by up to 73%.
arXiv Detail & Related papers (2022-05-27T17:13:10Z)
- Solve routing problems with a residual edge-graph attention neural network [10.42872026679616]
In this paper, an end-to-end deep reinforcement learning framework is proposed to solve NP-hard optimization problems.
The proposed framework aims to improve on models in the literature in terms of both the neural network architecture and the training algorithm.
Specifically, the average optimality gap is reduced from 4.53% (previous best reported) to 3.67% for TSP with 100 nodes, and from 7.34% (previous best reported) to 6.68% for CVRP with 100 nodes, when using the greedy decoding strategy.
arXiv Detail & Related papers (2021-05-06T14:47:47Z)
- Distributed Multi-agent Meta Learning for Trajectory Design in Wireless Drone Networks [151.27147513363502]
This paper studies the problem of trajectory design for a group of energy-constrained drones operating in dynamic wireless network environments.
A value decomposition based reinforcement learning (VDRL) solution and a meta-training mechanism are proposed.
arXiv Detail & Related papers (2020-12-06T01:30:12Z)
- A Generative Graph Method to Solve the Travelling Salesman Problem [1.552282932199974]
We propose to use the novel Graph Learning Network (GLN), a generative approach, to approximately solve the Travelling Salesman Problem (TSP).
The GLN model learns the pattern of TSP instances directly from the training dataset, encodes the graph properties, and merges the different node embeddings to output an optimal node-to-node tour.
arXiv Detail & Related papers (2020-07-09T17:23:55Z)
- Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested on four types of problems: compliance minimization, fluid-structure optimization, heat transfer enhancement, and truss optimization.
It reduced the computational time by 2-5 orders of magnitude compared with directly using conventional optimization methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.