Learning to Solve Travelling Salesman Problem with Hardness-adaptive
Curriculum
- URL: http://arxiv.org/abs/2204.03236v1
- Date: Thu, 7 Apr 2022 05:59:05 GMT
- Title: Learning to Solve Travelling Salesman Problem with Hardness-adaptive
Curriculum
- Authors: Zeyang Zhang, Ziwei Zhang, Xin Wang, Wenwu Zhu
- Abstract summary: We propose a hardness-adaptive generator that generates instances ten times harder than those produced by existing methods.
Our proposed method achieves significant improvement over state-of-the-art models in terms of the optimality gap.
- Score: 42.28343071442175
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Various neural network models have been proposed to tackle combinatorial
optimization problems such as the travelling salesman problem (TSP). Existing
learning-based TSP methods assume a simple setting in which the training and
testing data are independent and identically distributed. However, the existing
literature fails to solve TSP instances when training and testing data have
different distributions. Concretely, we find that a mismatch between the training
and testing distributions results in more difficult TSP instances, i.e., the
solutions obtained by the model have a large gap from the optimal solutions. To
tackle this problem, in this work, we study learning-based TSP methods when
training and testing data have different distributions, using the notion of
hardness, i.e., how difficult a TSP instance is for a given solver. This problem is
challenging because it is non-trivial to (1) define hardness measurement
quantitatively; (2) efficiently and continuously generate sufficiently hard TSP
instances upon model training; (3) fully utilize instances with different
levels of hardness to learn a more powerful TSP solver. To solve these
challenges, we first propose a principled hardness measurement to quantify the
hardness of TSP instances. Then, we propose a hardness-adaptive generator to
generate instances with different hardness. We further propose a curriculum
learner fully utilizing these instances to train the TSP solver. Experiments
show that our hardness-adaptive generator can generate instances ten times
harder than those produced by existing methods, and that our proposed method achieves
a significant improvement over state-of-the-art models in terms of the optimality gap.
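The hardness notion above can be read as the optimality gap of a solver on an instance, and the curriculum as sampling instances in proportion to that hardness. A minimal sketch of this idea follows; the function names, the brute-force optimum, the nearest-neighbour stand-in for a learned solver, and the softmax weighting are illustrative assumptions, not the paper's actual implementation:

```python
import itertools
import math
import random

def tour_length(points, tour):
    """Total Euclidean length of a closed tour visiting points in the given order."""
    return sum(
        math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def optimal_length(points):
    """Exact optimum by brute force -- feasible only for small instances."""
    n = len(points)
    # Fix city 0 as the start so rotations of the same tour are not recounted.
    return min(
        tour_length(points, (0,) + perm)
        for perm in itertools.permutations(range(1, n))
    )

def solver_length(points):
    """Nearest-neighbour heuristic, standing in for a learned solver."""
    n = len(points)
    unvisited, tour = set(range(1, n)), [0]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[tour[-1]], points[j]))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour_length(points, tour)

def hardness(points):
    """Optimality gap: relative excess of the solver's tour over the optimum."""
    opt = optimal_length(points)
    return (solver_length(points) - opt) / opt

def curriculum_weights(instances, temperature=1.0):
    """Softmax over hardness: harder instances are sampled more often."""
    h = [hardness(p) for p in instances]
    e = [math.exp(x / temperature) for x in h]
    total = sum(e)
    return [x / total for x in e]

random.seed(0)
instances = [
    [(random.random(), random.random()) for _ in range(7)] for _ in range(4)
]
weights = curriculum_weights(instances)
print(weights)  # sums to 1; the hardest instance gets the largest weight
```

In a full training loop, these weights would drive instance sampling for the solver, and the generator would be updated to produce instances with higher hardness under the current model.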
Related papers
- Looking Ahead to Avoid Being Late: Solving Hard-Constrained Traveling
Salesman Problem [36.88003015541172]
We propose a novel learning-based method that uses look-ahead information as a feature to improve the feasibility of solutions to the TSP with Time Windows (TSPTW).
In comprehensive experiments on diverse datasets, MUSLA outperforms existing baselines and shows potential for generalization.
arXiv Detail & Related papers (2024-03-08T13:49:21Z)
- On Pitfalls of Test-Time Adaptation [82.8392232222119]
Test-Time Adaptation (TTA) has emerged as a promising approach for tackling the robustness challenge under distribution shifts.
We present TTAB, a test-time adaptation benchmark that encompasses ten state-of-the-art algorithms, a diverse array of distribution shifts, and two evaluation protocols.
arXiv Detail & Related papers (2023-06-06T09:35:29Z)
- Unsupervised Training for Neural TSP Solver [2.9398911304923447]
We introduce a novel unsupervised learning approach for solving the travelling salesman problem.
We use a relaxation of an integer linear program for TSP to construct a loss function that does not require correct instance labels.
Our approach is also more stable and easier to train than reinforcement learning.
arXiv Detail & Related papers (2022-07-27T17:33:29Z)
- Dynamic Contrastive Distillation for Image-Text Retrieval [90.05345397400144]
We present a novel plug-in dynamic contrastive distillation (DCD) framework to compress image-text retrieval models.
We successfully apply our proposed DCD strategy to two state-of-the-art vision-language pretrained models, i.e., ViLT and METER.
Experiments on MS-COCO and Flickr30K benchmarks show the effectiveness and efficiency of our DCD framework.
arXiv Detail & Related papers (2022-07-04T14:08:59Z)
- Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
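A Fisher regularizer of this kind is commonly implemented as a quadratic penalty that anchors each parameter to its pre-adaptation value, weighted by an estimate of that parameter's Fisher information. A minimal sketch under that assumption (the names and the toy values are illustrative; the paper's exact estimator may differ):

```python
def fisher_penalty(params, anchor_params, fisher_diag, strength=1.0):
    """Fisher-weighted quadratic penalty: parameters with high Fisher
    information (important to the source model) are pulled back toward
    their pre-adaptation values; unimportant ones may drift freely."""
    return 0.5 * strength * sum(
        f * (p - a) ** 2
        for p, a, f in zip(params, anchor_params, fisher_diag)
    )

# Toy usage: two parameters, the first deemed important (high Fisher value).
params = [1.5, 0.2]
anchor = [1.0, 0.0]
fisher = [10.0, 0.1]
print(fisher_penalty(params, anchor, fisher))  # 0.5 * (10*0.25 + 0.1*0.04) = 1.252
```

During test-time adaptation, this penalty would be added to the adaptation loss so that important parameters change only when the evidence strongly warrants it.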
arXiv Detail & Related papers (2022-04-06T06:39:40Z)
- Learning Mixtures of Linear Dynamical Systems [94.49754087817931]
We develop a two-stage meta-algorithm to efficiently recover each ground-truth LDS model up to error $\tilde{O}(\sqrt{d/T})$.
We validate our theoretical studies with numerical experiments, confirming the efficacy of the proposed algorithm.
arXiv Detail & Related papers (2022-01-26T22:26:01Z)
- Learning to Optimise General TSP Instances [2.6782615615913348]
The Travelling Salesman Problem (TSP) is a classical optimisation problem.
Deep learning has been successfully extended to meta-learning, where previous solving efforts assist in learning how to optimise future instances.
This paper introduces a new learning-based approach to solve a variety of different and common TSP problems.
arXiv Detail & Related papers (2020-10-23T07:37:16Z)
- Robust Optimal Transport with Applications in Generative Modeling and
Domain Adaptation [120.69747175899421]
Optimal Transport (OT) distances such as Wasserstein have been used in several areas such as GANs and domain adaptation.
We propose a computationally-efficient dual form of the robust OT optimization that is amenable to modern deep learning applications.
Our approach can train state-of-the-art GAN models on noisy datasets corrupted with outlier distributions.
arXiv Detail & Related papers (2020-10-12T17:13:40Z)
- Towards Feature-free TSP Solver Selection: A Deep Learning Approach [35.05032046810422]
The Travelling Salesman Problem (TSP) is a classical NP-hard problem and has broad applications in many disciplines and industries.
In this paper, we propose a deep learning framework, CTAS, for TSP solver selection in an end-to-end manner.
arXiv Detail & Related papers (2020-06-01T04:48:36Z)
- Generalization of Machine Learning for Problem Reduction: A Case Study
on Travelling Salesman Problems [5.370742369840518]
We show that a machine learning model can greedily remove decision variables from an optimization problem that are predicted not to be part of an optimal solution.
We consider three scenarios where training and test instances are different in terms of: 1) problem characteristics; 2) problem sizes; and 3) problem types.
While the accuracy of predicting unused variables naturally deteriorates as a test instance is further away from the training set, we observe that even when tested on a different TSP problem, the machine learning model still makes useful predictions.
arXiv Detail & Related papers (2020-05-12T15:09:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.