NeuroLKH: Combining Deep Learning Model with Lin-Kernighan-Helsgaun
Heuristic for Solving the Traveling Salesman Problem
- URL: http://arxiv.org/abs/2110.07983v1
- Date: Fri, 15 Oct 2021 10:14:27 GMT
- Authors: Liang Xin, Wen Song, Zhiguang Cao, Jie Zhang
- Abstract summary: We present NeuroLKH, a novel algorithm that combines deep learning with the strong traditional Lin-Kernighan-Helsgaun (LKH) heuristic.
Specifically, we train a Sparse Graph Network (SGN) with supervised learning for edge scores and unsupervised learning for node penalties.
By training one model on a wide range of problem sizes, NeuroLKH significantly outperforms LKH and generalizes well to much larger sizes.
- Score: 14.605192361813454
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present NeuroLKH, a novel algorithm that combines deep learning with the
strong traditional heuristic Lin-Kernighan-Helsgaun (LKH) for solving the
Traveling Salesman Problem (TSP). Specifically, we train a Sparse Graph Network (SGN) with
supervised learning for edge scores and unsupervised learning for node
penalties, both of which are critical for improving the performance of LKH.
Based on the output of SGN, NeuroLKH creates the edge candidate set and
transforms edge distances to guide the searching process of LKH. Extensive
experiments firmly demonstrate that, by training one model on a wide range of
problem sizes, NeuroLKH significantly outperforms LKH and generalizes well to
much larger sizes. Also, we show that NeuroLKH can be applied to other routing
problems such as Capacitated Vehicle Routing Problem (CVRP), Pickup and
Delivery Problem (PDP), and CVRP with Time Windows (CVRPTW).
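The two SGN outputs play distinct roles: edge scores restrict which edges LKH may consider, while node penalties reshape the distance function the search optimizes. A minimal sketch of these two mechanisms (the function names and the top-k selection rule are illustrative assumptions, not the paper's exact procedure; the penalty transform d'(i, j) = d(i, j) + pi_i + pi_j follows LKH's standard node-penalty scheme):

```python
def build_candidate_set(edge_scores, k=5):
    """For each node, keep the k highest-scoring neighbors as its
    candidate edges (hypothetical use of SGN edge scores to restrict
    the moves LKH is allowed to explore)."""
    n = len(edge_scores)
    candidates = {}
    for i in range(n):
        neighbors = [j for j in range(n) if j != i]   # exclude self-loops
        neighbors.sort(key=lambda j: edge_scores[i][j], reverse=True)
        candidates[i] = neighbors[:k]
    return candidates

def transform_distances(dist, penalties):
    """Apply node penalties pi in the LKH style:
    d'(i, j) = d(i, j) + pi_i + pi_j.
    The search is guided by the transformed distances, while the
    optimal tour of the original problem is unchanged."""
    n = len(dist)
    return [[dist[i][j] + penalties[i] + penalties[j] for j in range(n)]
            for i in range(n)]
```

Because every tour visits each node exactly once, adding pi_i + pi_j to each edge adds the constant 2 * sum(pi) to every tour length, so the transform biases the search without changing which tour is optimal.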
Related papers
- Improving Generalization of Neural Combinatorial Optimization for Vehicle Routing Problems via Test-Time Projection Learning [3.0711362702464684]
We introduce a novel learning framework driven by Large Language Models (LLMs). Unlike prevailing techniques that necessitate joint training with the neural network, our approach operates exclusively during the inference phase. Our method enables a backbone model (trained on 100-node instances) to achieve superior performance on large-scale Traveling Salesman Problem (TSP) and Capacitated Vehicle Routing Problem (CVRP) instances of up to 100K nodes from diverse distributions.
arXiv Detail & Related papers (2025-06-03T03:15:22Z) - L2R: Learning to Reduce Search Space for Generalizable Neural Routing Solver [12.396576646539252]
Constructive neural combinatorial optimization (NCO) has attracted growing research attention due to its ability to solve complex routing problems without relying on handcrafted rules.
Existing NCO methods face challenges in generalizing to large-scale problems due to high computational complexity and inefficient capture of structural patterns.
We propose a novel learning-based search space reduction method that adaptively selects a small set of promising candidate nodes at each step of the constructive NCO process.
arXiv Detail & Related papers (2025-03-05T03:25:09Z) - How Feature Learning Can Improve Neural Scaling Laws [86.9540615081759]
We develop a solvable model of neural scaling laws beyond the kernel limit.
We show how performance scales with model size, training time, and the total amount of available data.
arXiv Detail & Related papers (2024-09-26T14:05:32Z) - Learning to Solve Combinatorial Optimization under Positive Linear Constraints via Non-Autoregressive Neural Networks [103.78912399195005]
Combinatorial optimization (CO) is a fundamental problem at the intersection of computer science and applied mathematics.
In this paper, we design a family of non-autoregressive neural networks to solve CO problems under positive linear constraints.
We validate the effectiveness of this framework in solving representative CO problems including facility location, max-set covering, and traveling salesman problem.
arXiv Detail & Related papers (2024-09-06T14:58:31Z) - Neural Combinatorial Optimization with Heavy Decoder: Toward Large Scale
Generalization [15.189182646851865]
We propose a novel Light Encoder and Heavy Decoder (LEHD) model with a strong generalization ability to address this critical issue.
We develop a data-efficient training scheme and a flexible solution mechanism for the proposed LEHD model.
Our results confirm that the proposed LEHD model can significantly improve the state-of-the-art performance for constructive NCO.
arXiv Detail & Related papers (2023-10-12T02:18:50Z) - Fixing the NTK: From Neural Network Linearizations to Exact Convex
Programs [63.768739279562105]
We show that for a particular choice of mask weights that do not depend on the learning targets, this kernel is equivalent to the NTK of the gated ReLU network on the training data.
A consequence of this lack of dependence on the targets is that the NTK cannot perform better than the optimal MKL kernel on the training set.
arXiv Detail & Related papers (2023-09-26T17:42:52Z) - Label Deconvolution for Node Representation Learning on Large-scale
Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on the Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z) - Learning Lipschitz Functions by GD-trained Shallow Overparameterized
ReLU Neural Networks [12.018422134251384]
We show that neural networks trained to nearly zero training error are inconsistent in this class.
We show that whenever some early stopping rule is guaranteed to give an optimal rate (of excess risk) on the Hilbert space of the kernel induced by the ReLU activation function, the same rule can be used to achieve minimax optimal rate.
arXiv Detail & Related papers (2022-12-28T14:56:27Z) - Learning to Solve Combinatorial Graph Partitioning Problems via
Efficient Exploration [72.15369769265398]
Experimentally, ECORD achieves a new SOTA for RL algorithms on the Maximum Cut problem.
Compared to the nearest competitor, ECORD reduces the optimality gap by up to 73%.
arXiv Detail & Related papers (2022-05-27T17:13:10Z) - Online Limited Memory Neural-Linear Bandits with Likelihood Matching [53.18698496031658]
We study neural-linear bandits for solving problems where both exploration and representation learning play an important role.
We propose a likelihood matching algorithm that is resilient to catastrophic forgetting and is completely online.
arXiv Detail & Related papers (2021-02-07T14:19:07Z) - Combining Reinforcement Learning with Lin-Kernighan-Helsgaun Algorithm
for the Traveling Salesman Problem [12.851149098610096]
We propose a variable strategy reinforced approach, denoted as VSR-LKH.
It combines three reinforcement learning methods (Q-learning, Sarsa, and Monte Carlo) with the well-known TSP algorithm Lin-Kernighan-Helsgaun (LKH).
VSR-LKH replaces the inflexible operation in LKH and lets the program learn to make choices at each search step via reinforcement learning.
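A minimal, hypothetical sketch of the tabular Q-learning ingredient this blurb alludes to, choosing among candidate moves at a search step (the state/action design and reward here are toy assumptions; VSR-LKH's actual formulation follows its paper):

```python
import random

def q_learning_choice(q_table, state, actions, epsilon=0.1):
    # Epsilon-greedy selection among candidate moves: explore with
    # probability epsilon, otherwise pick the highest-valued action.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))

def q_update(q_table, state, action, reward, next_state, next_actions,
             alpha=0.1, gamma=0.9):
    # Standard Q-learning update:
    # Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
    best_next = max((q_table.get((next_state, a), 0.0) for a in next_actions),
                    default=0.0)
    old = q_table.get((state, action), 0.0)
    q_table[(state, action)] = old + alpha * (reward + gamma * best_next - old)
```

In a VSR-LKH-like setting the reward would reflect tour-length improvement, so the table gradually prefers moves that shorten the tour.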
arXiv Detail & Related papers (2020-12-08T14:58:36Z) - Reinforcement Learning via Gaussian Processes with Neural Network Dual
Kernels [0.0]
We show that neural network dual kernels can be efficiently applied to regression and reinforcement learning problems.
We demonstrate, using the well-understood mountain-car problem, that GPs empowered with dual kernels perform at least as well as those using the conventional radial basis function kernel.
arXiv Detail & Related papers (2020-04-10T18:36:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.