Instance-Conditioned Adaptation for Large-scale Generalization of Neural Routing Solver
- URL: http://arxiv.org/abs/2405.01906v2
- Date: Mon, 19 May 2025 12:36:10 GMT
- Title: Instance-Conditioned Adaptation for Large-scale Generalization of Neural Routing Solver
- Authors: Changliang Zhou, Xi Lin, Zhenkun Wang, Xialiang Tong, Mingxuan Yuan, Qingfu Zhang
- Abstract summary: We propose a novel Instance-Conditioned Adaptation Model (ICAM) for better large-scale generalization of neural routing solvers. We show that our proposed method is capable of obtaining promising results with a very fast inference time.
- Score: 15.842155380912002
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The neural combinatorial optimization (NCO) method has shown great potential for solving routing problems of intelligent transportation systems without requiring expert knowledge. However, existing constructive NCO methods still struggle to solve large-scale instances, which significantly limits their application prospects. To address these crucial shortcomings, this work proposes a novel Instance-Conditioned Adaptation Model (ICAM) for better large-scale generalization of neural routing solvers. In particular, we design a simple yet efficient instance-conditioned adaptation function to significantly improve the generalization performance of existing NCO models with a small time and memory overhead. In addition, with a systematic investigation on the performance of information incorporation between different attention mechanisms, we further propose a powerful yet low-complexity instance-conditioned adaptation module to generate better solutions for instances across different scales. Extensive experimental results on both synthetic and benchmark instances show that our proposed method is capable of obtaining promising results with a very fast inference time in solving large-scale Traveling Salesman Problems (TSPs), Capacitated Vehicle Routing Problems (CVRPs), and Asymmetric Traveling Salesman Problems (ATSPs). Our code is available at https://github.com/CIAM-Group/ICAM.
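To make the abstract's idea more concrete, below is a minimal sketch (in PyTorch) of one way an instance-conditioned bias could be injected into attention logits, conditioning on the instance scale N and the pairwise distance matrix. The class name, the log-scale form of the bias, and the learnable weight `alpha` are illustrative assumptions, not the paper's actual implementation; see the linked repository for the authors' code.

```python
# Illustrative sketch only (not the paper's code): attention whose logits are
# biased by instance-level information (scale N and pairwise distances).
import torch
import torch.nn as nn


class InstanceConditionedAttention(nn.Module):
    def __init__(self, embed_dim: int):
        super().__init__()
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        # Learnable strength of the instance-conditioned bias (assumed form).
        self.alpha = nn.Parameter(torch.tensor(1.0))
        self.scale = embed_dim ** -0.5

    def forward(self, x: torch.Tensor, dist: torch.Tensor) -> torch.Tensor:
        # x:    (batch, N, embed_dim) node embeddings
        # dist: (batch, N, N) pairwise distances of the instance
        n = x.size(1)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        logits = torch.matmul(q, k.transpose(-2, -1)) * self.scale
        # Instance-conditioned adaptation: penalize distant nodes more strongly
        # as the instance scale N grows (one illustrative choice of bias).
        bias = -self.alpha * torch.log2(
            torch.tensor(float(n), device=x.device)
        ) * dist
        attn = torch.softmax(logits + bias, dim=-1)
        return torch.matmul(attn, v)


# Usage sketch: a toy 2D Euclidean instance with 200 nodes.
coords = torch.rand(1, 200, 2)
dist = torch.cdist(coords, coords)      # (1, 200, 200) distance matrix
emb = nn.Linear(2, 128)(coords)         # toy node embeddings
out = InstanceConditionedAttention(128)(emb, dist)
print(out.shape)                        # torch.Size([1, 200, 128])
```

Because the bias is a function of the instance (its scale and distances) rather than a fixed learned matrix, the same attention module can in principle adapt its behavior as the problem size changes, which is the intuition the abstract describes.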
Related papers
- Improving Generalization of Neural Combinatorial Optimization for Vehicle Routing Problems via Test-Time Projection Learning [3.0711362702464684]
We introduce a novel learning framework driven by Large Language Models (LLMs). Unlike prevailing techniques that necessitate joint training with the neural network, our approach operates exclusively during the inference phase. Our method enables a backbone model (trained on 100-node instances) to achieve superior performance on large-scale Traveling Salesman Problem (TSP) and Capacitated Vehicle Routing Problem (CVRP) instances of up to 100K nodes from diverse distributions.
arXiv Detail & Related papers (2025-06-03T03:15:22Z) - Preference Optimization for Combinatorial Optimization Problems [54.87466279363487]
Reinforcement Learning (RL) has emerged as a powerful tool for neural combinatorial optimization, enabling models to solve complex problems without requiring expert knowledge. Despite significant progress, existing RL approaches face challenges such as diminishing reward signals and inefficient exploration in vast action spaces. We propose Preference Optimization, a novel method that transforms quantitative reward signals into qualitative preference signals via statistical comparison modeling.
arXiv Detail & Related papers (2025-05-13T16:47:00Z) - Learning to Reduce Search Space for Generalizable Neural Routing Solver [12.396576646539252]
Constructive neural combinatorial optimization (NCO) has attracted growing research attention due to its ability to solve complex routing problems without relying on handcrafted rules. Existing NCO methods face challenges in generalizing to large-scale problems due to high computational complexity and inefficient capture of structural patterns. We propose a novel learning-based search space reduction method that adaptively selects a small set of promising candidate nodes at each step of the constructive NCO process.
arXiv Detail & Related papers (2025-03-05T03:25:09Z) - Unsupervised Training of Diffusion Models for Feasible Solution Generation in Neural Combinatorial Optimization [7.85458999849461]
We present IC/DC, an unsupervised CO framework that directly trains a diffusion model from scratch. We train our model in a self-supervised way to minimize the cost of the solution while adhering to the problem-specific constraints. IC/DC achieves state-of-the-art performance relative to existing NCO methods on the Parallel Machine Scheduling Problem (PMSP) and the Asymmetric Traveling Salesman Problem (ATSP).
arXiv Detail & Related papers (2024-10-15T06:53:30Z) - Multiobjective Vehicle Routing Optimization with Time Windows: A Hybrid Approach Using Deep Reinforcement Learning and NSGA-II [52.083337333478674]
This paper proposes a weight-aware deep reinforcement learning (WADRL) approach designed to address the multiobjective vehicle routing problem with time windows (MOVRPTW).
The Non-dominated sorting genetic algorithm-II (NSGA-II) method is then employed to optimize the outcomes produced by the WADRL.
arXiv Detail & Related papers (2024-07-18T02:46:06Z) - Leader Reward for POMO-Based Neural Combinatorial Optimization [8.301694061287565]
We propose Leader Reward to enhance the model's ability to generate optimal solutions.
We demonstrate that Leader Reward greatly improves the quality of the optimal solutions generated by the model.
arXiv Detail & Related papers (2024-05-22T19:27:03Z) - Self-Improved Learning for Scalable Neural Combinatorial Optimization [15.842155380912002]
This work proposes a novel Self-Improved Learning (SIL) method for better scalability of neural combinatorial optimization.
We develop an efficient self-improved mechanism that enables direct model training on large-scale problem instances without any labeled data.
In addition, we design a linear-complexity attention mechanism for the computation model to efficiently handle large-scale problem instances with low overhead.
arXiv Detail & Related papers (2024-03-28T16:46:53Z) - An Efficient Learning-based Solver Comparable to Metaheuristics for the Capacitated Arc Routing Problem [67.92544792239086]
We introduce an NN-based solver to significantly narrow the gap with advanced metaheuristics.
First, we propose direction-aware facilitating attention model (DaAM) to incorporate directionality into the embedding process.
Second, we design a supervised reinforcement learning scheme that involves supervised pre-training to establish a robust initial policy.
arXiv Detail & Related papers (2024-03-11T02:17:42Z) - Neural Combinatorial Optimization with Heavy Decoder: Toward Large Scale Generalization [15.189182646851865]
We propose a novel Light Encoder and Heavy Decoder (LEHD) model with a strong generalization ability to address this critical issue.
We develop a data-efficient training scheme and a flexible solution mechanism for the proposed LEHD model.
Our results confirm our proposed LEHD model can significantly improve the state-of-the-art performance for constructive NCO.
arXiv Detail & Related papers (2023-10-12T02:18:50Z) - A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z) - Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence more stably and accurately.
Our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model relative to the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z) - On the Generalization of Neural Combinatorial Optimization Heuristics [0.7049738935364298]
We show that our proposed meta-learning approach significantly improves the generalization of two state-of-the-art models.
We formalize solving a CO problem over a given instance distribution as a separate learning task.
We investigate meta-learning techniques to learn a model on a variety of tasks, in order to optimize its capacity to adapt to new tasks.
arXiv Detail & Related papers (2022-06-01T22:39:35Z) - Sample-Efficient, Exploration-Based Policy Optimisation for Routing Problems [2.6782615615913348]
This paper presents a new reinforcement learning approach that is based on entropy.
In addition, we design an off-policy-based reinforcement learning technique that maximises the expected return.
We show that our model can generalise to various routing problems.
arXiv Detail & Related papers (2022-05-31T09:51:48Z) - Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis on the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost, and generalization to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Optimization-driven Machine Learning for Intelligent Reflecting Surfaces Assisted Wireless Networks [82.33619654835348]
Intelligent reflecting surface (IRS) has been employed to reshape wireless channels by controlling the phase shifts of individual scattering elements.
Due to the large number of scattering elements, passive beamforming is typically challenged by high computational complexity.
In this article, we focus on machine learning (ML) approaches for performance optimization in IRS-assisted wireless networks.
arXiv Detail & Related papers (2020-08-29T08:39:43Z) - Automatically Learning Compact Quality-aware Surrogates for Optimization Problems [55.94450542785096]
Solving optimization problems with unknown parameters requires learning a model to predict those parameters and then solving the problem using the predicted values.
Recent work has shown that including the optimization problem as a layer in the model training pipeline yields predictions that lead to better decision quality.
We show that we can improve solution quality by learning a low-dimensional surrogate model of a large optimization problem.
arXiv Detail & Related papers (2020-06-18T19:11:54Z)