Cascaded Large-Scale TSP Solving with Unified Neural Guidance: Bridging Local and Population-based Search
- URL: http://arxiv.org/abs/2501.14285v1
- Date: Fri, 24 Jan 2025 06:56:27 GMT
- Title: Cascaded Large-Scale TSP Solving with Unified Neural Guidance: Bridging Local and Population-based Search
- Authors: Shengcai Liu, Haoze Lv, Zhiyuan Wang, Ke Tang
- Abstract summary: UNiCS is a novel unified neural-guided cascaded solver for the traveling salesman problem.
It consistently outperforms state-of-the-art methods, and its advantage holds across various runtime budgets.
- Score: 23.528483405321975
- Abstract: The traveling salesman problem (TSP) is a fundamental NP-hard optimization problem. This work presents UNiCS, a novel unified neural-guided cascaded solver for solving large-scale TSP instances. UNiCS comprises a local search (LS) phase and a population-based search (PBS) phase, both guided by a learning component called unified neural guidance (UNG). Specifically, UNG guides solution generation across both phases and determines appropriate phase transition timing to effectively combine the complementary strengths of LS and PBS. While trained only on simple distributions with relatively small-scale TSP instances, UNiCS generalizes effectively to challenging TSP benchmarks containing much larger instances (10,000-71,009 nodes) with diverse node distributions entirely unseen during training. Experimental results on the large-scale TSP instances demonstrate that UNiCS consistently outperforms state-of-the-art methods, with its advantage remaining consistent across various runtime budgets.
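The cascaded design can be illustrated with a toy sketch (not the authors' implementation): a 2-opt local-search phase runs until improvement stalls — a crude stand-in for the learned UNG transition signal — and its result then seeds a small population-based phase using order crossover. All function names and parameters here are hypothetical.

```python
import math
import random

def tour_length(tour, pts):
    """Total Euclidean length of a closed tour."""
    return sum(math.dist(pts[tour[i]], pts[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt_step(tour, pts):
    """One full pass of 2-opt: reverse any segment that shortens the tour."""
    best = list(tour)
    n = len(best)
    for i in range(1, n - 1):
        for j in range(i + 1, n):
            cand = best[:i] + best[i:j][::-1] + best[j:]
            if tour_length(cand, pts) < tour_length(best, pts):
                best = cand
    return best

def crossover(a, b):
    """Order crossover (OX): keep a slice of parent a, fill the rest from b."""
    n = len(a)
    i, j = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[i:j] = a[i:j]
    fill = [c for c in b if c not in child]
    k = 0
    for idx in range(n):
        if child[idx] is None:
            child[idx] = fill[k]
            k += 1
    return child

def cascaded_solve(pts, ls_iters=3, pbs_iters=20):
    """LS phase until improvement stalls (a stand-in for the learned
    transition decision), then a small PBS phase seeded with the LS tour."""
    n = len(pts)
    tour = list(range(n))
    # --- LS phase: stop when a 2-opt pass no longer improves ---
    for _ in range(ls_iters):
        new = two_opt_step(tour, pts)
        if tour_length(new, pts) >= tour_length(tour, pts) - 1e-12:
            break  # UNG would infer this transition point from learned features
        tour = new
    # --- PBS phase: tiny steady-state GA seeded with the LS result ---
    pop = [tour] + [random.sample(range(n), n) for _ in range(4)]
    for _ in range(pbs_iters):
        a, b = random.sample(pop, 2)
        child = two_opt_step(crossover(a, b), pts)
        pop.sort(key=lambda t: tour_length(t, pts))
        if tour_length(child, pts) < tour_length(pop[-1], pts):
            pop[-1] = child  # replace the worst member
    return min(pop, key=lambda t: tour_length(t, pts))
```

Seeding the population with the local-search result is what makes the cascade more than two independent solvers: the PBS phase refines an already-good tour rather than starting from scratch.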
Related papers
- Fast T2T: Optimization Consistency Speeds Up Diffusion-Based Training-to-Testing Solving for Combinatorial Optimization [83.65278205301576]
We propose to learn direct mappings from different noise levels to the optimal solution for a given instance, facilitating high-quality generation with minimal shots.
This is achieved through an optimization consistency training protocol, which minimizes the difference among samples.
Experiments on two popular tasks, the Traveling Salesman Problem (TSP) and Maximal Independent Set (MIS), demonstrate the superiority of Fast T2T regarding both solution quality and efficiency.
arXiv Detail & Related papers (2025-02-05T07:13:43Z)
- Distributed Noncoherent Joint Transmission Based on Multi-Agent Reinforcement Learning for Dense Small Cell MISO Systems [8.146481327854545]
We consider a dense small cell (DSC) network where multi-antenna small cell base stations (SBSs) transmit data over a shared band.
arXiv Detail & Related papers (2024-08-22T02:11:14Z)
- Connecting the Dots: Is Mode-Connectedness the Key to Feasible Sample-Based Inference in Bayesian Neural Networks? [9.332937821095653]
A major challenge in sample-based inference (SBI) for Bayesian neural networks is the size and structure of the networks' parameter space.
We show that successful SBI is possible by embracing the characteristic relationship between weight and function space.
We present a deep ensemble approach as an effective solution with competitive performance and uncertainty quantification.
arXiv Detail & Related papers (2024-02-02T15:12:16Z)
- Pointerformer: Deep Reinforced Multi-Pointer Transformer for the Traveling Salesman Problem [67.32731657297377]
Traveling Salesman Problem (TSP) is a classic routing optimization problem originally arising in the domain of transportation and logistics.
Recently, Deep Reinforcement Learning has been increasingly employed to solve TSP due to its high inference efficiency.
We propose a novel end-to-end DRL approach, referred to as Pointerformer, based on multi-pointer Transformer.
arXiv Detail & Related papers (2023-04-19T03:48:32Z)
- H-TSP: Hierarchically Solving the Large-Scale Travelling Salesman Problem [11.310986634385742]
We propose an end-to-end learning framework based on hierarchical reinforcement learning, called H-TSP, for addressing the large-scale Travelling Salesman Problem (TSP).
We show that H-TSP achieves results comparable to SOTA search-based approaches while reducing time consumption by up to two orders of magnitude (3.32s vs. 395.85s).
Although there are still gaps to SOTA results with respect to solution quality, we believe H-TSP will be useful for practical applications, particularly time-sensitive ones such as on-call routing and ride
arXiv Detail & Related papers (2023-04-19T03:10:30Z)
- Coupling Global Context and Local Contents for Weakly-Supervised Semantic Segmentation [54.419401869108846]
We propose a single-stage Weakly-Supervised Semantic Segmentation (WSSS) model with only image-level class label supervision.
A flexible context aggregation module is proposed to capture the global object context in different granular spaces.
A semantically consistent feature fusion module is proposed in a bottom-up parameter-learnable fashion to aggregate the fine-grained local contents.
arXiv Detail & Related papers (2023-04-18T15:29:23Z)
- Learning to Solve Travelling Salesman Problem with Hardness-adaptive Curriculum [42.28343071442175]
We propose a hardness-adaptive generator that produces instances ten times harder than those generated by existing methods.
Our proposed method achieves significant improvement over state-of-the-art models in terms of the optimality gap.
arXiv Detail & Related papers (2022-04-07T05:59:05Z)
- Self-Ensembling GAN for Cross-Domain Semantic Segmentation [107.27377745720243]
This paper proposes a self-ensembling generative adversarial network (SE-GAN) exploiting cross-domain data for semantic segmentation.
In SE-GAN, a teacher network and a student network constitute a self-ensembling model for generating semantic segmentation maps, which, together with a discriminator, forms a GAN.
Despite its simplicity, we find SE-GAN can significantly boost the performance of adversarial training and enhance the stability of the model.
arXiv Detail & Related papers (2021-12-15T09:50:25Z)
- Learning to Optimise General TSP Instances [2.6782615615913348]
The Travelling Salesman Problem (TSP) is a classical optimisation problem.
Deep learning has been successfully extended to meta-learning, where previous solving efforts assist in learning how to optimise future instances.
This paper introduces a new learning-based approach to solving a variety of common TSP variants.
arXiv Detail & Related papers (2020-10-23T07:37:16Z)
- SUNRISE: A Simple Unified Framework for Ensemble Learning in Deep Reinforcement Learning [102.78958681141577]
We present SUNRISE, a simple unified ensemble method, which is compatible with various off-policy deep reinforcement learning algorithms.
SUNRISE integrates two key ingredients: (a) ensemble-based weighted Bellman backups, which re-weight target Q-values based on uncertainty estimates from a Q-ensemble, and (b) an inference method that selects actions using the highest upper-confidence bounds for efficient exploration.
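The two ingredients can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the sigmoid down-weighting form, the temperature `temp`, and the UCB coefficient `lam` are placeholder choices standing in for the paper's exact scheme.

```python
import math

def ensemble_stats(q_ensemble, state, action):
    """Mean and standard deviation of Q(state, action) across the ensemble.
    Each ensemble member is a callable: state -> list of action-values."""
    qs = [q(state)[action] for q in q_ensemble]
    mean = sum(qs) / len(qs)
    std = (sum((v - mean) ** 2 for v in qs) / len(qs)) ** 0.5
    return mean, std

def weighted_target(q_ensemble, next_state, reward, gamma=0.99, temp=10.0):
    """Weighted Bellman target: down-weight the target when the ensemble
    disagrees (high std) on the next state's value."""
    n_actions = len(q_ensemble[0](next_state))
    best = max(range(n_actions),
               key=lambda a: ensemble_stats(q_ensemble, next_state, a)[0])
    mean, std = ensemble_stats(q_ensemble, next_state, best)
    weight = 2.0 / (1.0 + math.exp(temp * std))  # in (0, 1]; 1 when std == 0
    return weight * (reward + gamma * mean)

def ucb_action(q_ensemble, state, lam=1.0):
    """Optimistic exploration: pick the action maximising mean + lam * std."""
    n_actions = len(q_ensemble[0](state))
    def score(a):
        mean, std = ensemble_stats(q_ensemble, state, a)
        return mean + lam * std
    return max(range(n_actions), key=score)
```

The key idea in both pieces is the same statistic used in opposite directions: ensemble disagreement shrinks the learning target (to suppress error propagation) but inflates the action score (to drive exploration toward uncertain actions).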
arXiv Detail & Related papers (2020-07-09T17:08:44Z)
- A Unified Theory of Decentralized SGD with Changing Topology and Local Updates [70.9701218475002]
We introduce a unified convergence analysis of decentralized communication methods.
We derive universal convergence rates for several applications.
Our proofs rely on weak assumptions.
arXiv Detail & Related papers (2020-03-23T17:49:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.