Survey on Neural Routing Solvers
- URL: http://arxiv.org/abs/2602.21761v1
- Date: Wed, 25 Feb 2026 10:24:43 GMT
- Title: Survey on Neural Routing Solvers
- Authors: Yunpeng Ba, Xi Lin, Changliang Zhou, Ruihao Zheng, Zhenkun Wang, Xinyan Liang, Zhichao Lu, Jianyong Sun, Yuhua Qian, Qingfu Zhang,
- Abstract summary: Neural routing solvers (NRSs) that leverage deep learning to tackle vehicle routing problems have demonstrated notable potential. By learning implicit heuristic rules from data, NRSs replace the handcrafted counterparts in classic heuristic frameworks.
- Score: 52.835314330473786
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural routing solvers (NRSs) that leverage deep learning to tackle vehicle routing problems have demonstrated notable potential for practical applications. By learning implicit heuristic rules from data, NRSs replace the handcrafted counterparts in classic heuristic frameworks, thereby reducing reliance on costly manual design and trial-and-error adjustments. This survey makes two main contributions: (1) The heuristic nature of NRSs is highlighted, and existing NRSs are reviewed from the perspective of heuristics. A hierarchical taxonomy based on heuristic principles is further introduced. (2) A generalization-focused evaluation pipeline is proposed to address limitations of the conventional pipeline. Comparative benchmarking of representative NRSs across both pipelines uncovers a series of previously unreported gaps in current research.
Related papers
- Toward Robust Non-Transferable Learning: A Survey and Benchmark [51.52542476904985]
Non-transferable learning (NTL) is a task aimed at reshaping the generalization abilities of deep learning models. We present the first comprehensive survey on NTL and introduce NTLBench, the first benchmark to evaluate NTL performance and robustness. We discuss the practical applications of NTL, along with its future directions and associated challenges.
arXiv Detail & Related papers (2025-02-19T10:12:19Z)
- Graph and Sequential Neural Networks in Session-based Recommendation: A Survey [41.59094128068782]
Session-based recommendation (SR) specializes in capturing users' short-term preferences and aims to provide more dynamic and timely recommendations. First, we clarify the definitions of various SR tasks and introduce the characteristics of session-based recommendation. Second, we summarize the existing methods in two categories: sequential neural network based methods and graph neural network (GNN) based methods.
arXiv Detail & Related papers (2024-08-27T08:08:05Z)
- A Neuro-Symbolic Benchmark Suite for Concept Quality and Reasoning Shortcuts [20.860617965394848]
We introduce rsbench, a benchmark suite designed to systematically evaluate the impact of reasoning shortcuts on models.
Using rsbench, we highlight that obtaining high-quality concepts in both purely neural and neuro-symbolic models is a far-from-solved problem.
arXiv Detail & Related papers (2024-06-14T18:52:34Z)
- Domain Generalization Guided by Gradient Signal to Noise Ratio of Parameters [69.24377241408851]
Overfitting to the source domain is a common issue in gradient-based training of deep neural networks.
We propose to base the selection on the gradient-signal-to-noise ratio (GSNR) of the network's parameters.
arXiv Detail & Related papers (2023-10-11T10:21:34Z)
- Generalization Guarantees of Gradient Descent for Multi-Layer Neural Networks [55.86300309474023]
We conduct a comprehensive stability and generalization analysis of gradient descent (GD) for multi-layer NNs.
We derive an excess risk rate of $O(1/\sqrt{n})$ for GD algorithms in both two-layer and three-layer NNs.
arXiv Detail & Related papers (2023-05-26T12:51:38Z)
- Backward Reachability Analysis of Neural Feedback Loops: Techniques for Linear and Nonlinear Systems [59.57462129637796]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present frameworks for calculating backprojection (BP) set over-approximations for both linear and nonlinear systems with control policies represented by feedforward NNs.
arXiv Detail & Related papers (2022-09-28T13:17:28Z)
- Backward Reachability Analysis for Neural Feedback Loops [40.989393438716476]
This paper presents a backward reachability approach for safety verification of closed-loop systems with neural networks (NNs).
The presence of NNs in the feedback loop presents a unique set of problems due to the nonlinearities in their activation functions and because NN models are generally not invertible.
We present an algorithm to iteratively find backprojection (BP) set estimates over a given time horizon and demonstrate the ability to reduce conservativeness by up to 88% with low additional computational cost.
arXiv Detail & Related papers (2022-04-14T01:13:14Z)
- Explicitising The Implicit Intrepretability of Deep Neural Networks Via Duality [5.672223170618133]
Recent work by Lakshminarayanan and Singh provided a dual view for fully connected deep neural networks (DNNs) with rectified linear units (ReLU).
arXiv Detail & Related papers (2022-03-01T03:08:21Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.