A New Constructive Heuristic driven by Machine Learning for the
Traveling Salesman Problem
- URL: http://arxiv.org/abs/2108.10224v1
- Date: Tue, 17 Aug 2021 21:37:23 GMT
- Title: A New Constructive Heuristic driven by Machine Learning for the
Traveling Salesman Problem
- Authors: Umberto Junior Mele, Luca Maria Gambardella and Roberto Montemanni
- Abstract summary: Recent systems applying Machine Learning (ML) to solve the Traveling Salesman Problem (TSP) exhibit issues when they try to scale up to real-case scenarios.
The use of Candidate Lists (CLs) has been proposed to cope with these issues.
In this work we instead use a machine learning model to confirm insertion into the solution only for highly probable edges.
- Score: 8.604882842499212
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent systems applying Machine Learning (ML) to solve the Traveling Salesman
Problem (TSP) exhibit issues when they try to scale up to real-case scenarios
with several hundred vertices. The use of Candidate Lists (CLs) has been
proposed to cope with these issues. The procedure allows the search space to be
restricted during solution creation, consequently reducing the solver's
computational burden. So far, ML has been engaged to create CLs and to place
values on the edges of these CLs expressing the ML's preferences at solution
insertion. Although promising, these systems do not clearly restrict what the
ML learns and does to create solutions, bringing with them some generalization
issues. Therefore, motivated by exploratory and statistical studies, in this
work we instead use a machine learning model to confirm insertion into the
solution only for highly probable edges. The CLs of the highly probable edges
are employed as input, and the ML is in charge of distinguishing cases where
such edges are in the optimal solution from those where they are not. This
strategy enables better generalization and creates an efficient balance between
machine learning and searching techniques. Our ML-Constructive heuristic is
trained on small instances. It is then able to produce solutions to large
problems as well, without losing quality. We compare our results with classic
constructive heuristics, showing good performance on TSPLIB instances with up
to 1748 cities. Although our heuristic exhibits an expensive constant-time
operation, we prove that the worst-case computational complexity of solution
construction after training is $O(n^2 \log n^2)$, where $n$ is the number of
vertices in the TSP instance.
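The construction scheme the abstract describes, a greedy insertion restricted to candidate-list edges in which the ML model confirms only high-probability insertions, can be sketched in Python. This is an illustrative reading, not the authors' implementation: `ml_gate` stands in for the trained classifier with a plain length threshold, and the second phase that completes the partial tour is omitted.

```python
import math

def candidate_edges(coords, k=5):
    """Candidate list (CL): for each vertex, its k nearest neighbours.
    Returns the induced edge set sorted by increasing length."""
    n = len(coords)
    edges = set()
    for i in range(n):
        nearest = sorted(range(n), key=lambda j: math.dist(coords[i], coords[j]))
        for j in nearest[1:k + 1]:          # nearest[0] is i itself
            edges.add((min(i, j), max(i, j)))
    return sorted(edges, key=lambda e: math.dist(coords[e[0]], coords[e[1]]))

def ml_gate(i, j, coords, threshold=0.35):
    """Hypothetical stand-in for the trained model: accept an edge only
    when it looks 'probably optimal'. Here a crude length threshold
    plays that role; the real system uses a learned classifier."""
    return math.dist(coords[i], coords[j]) <= threshold

def ml_constructive(coords, k=5):
    """Greedily add CL edges that the ML gate confirms, keeping the
    partial solution a set of simple paths (degree <= 2, no subtours).
    Returns only the high-confidence edges; closing the tour is a
    separate completion phase not shown here."""
    n = len(coords)
    parent = list(range(n))                  # union-find to forbid subtours

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    degree = [0] * n
    chosen = []
    for i, j in candidate_edges(coords, k):
        if degree[i] < 2 and degree[j] < 2 and find(i) != find(j) \
                and ml_gate(i, j, coords):
            parent[find(i)] = find(j)
            degree[i] += 1
            degree[j] += 1
            chosen.append((i, j))
    return chosen
```

Because every accepted edge must pass both the degree check and the subtour check, the output is always a set of vertex-disjoint paths, regardless of how permissive the gate is.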
Related papers
- Skipping Computations in Multimodal LLMs [63.29737699997859]
This study investigates redundancy in Multimodal Large Language Models (MLLMs) during inference.
We propose different methods to skip computations, such as skipping entire blocks or individual FFN and self-attention layers.
Our findings validate that a significant amount of computation can be avoided at inference time.
arXiv Detail & Related papers (2024-10-12T09:21:45Z)
- Can Large Language Models Play Games? A Case Study of A Self-Play Approach [61.15761840203145]
Large Language Models (LLMs) harness extensive data from the Internet, storing a broad spectrum of prior knowledge.
Monte-Carlo Tree Search (MCTS) is a search algorithm that provides reliable decision-making solutions.
This work introduces an innovative approach that bolsters LLMs with MCTS self-play to efficiently resolve turn-based zero-sum games.
arXiv Detail & Related papers (2024-03-08T19:16:29Z)
- Leaving the Nest: Going Beyond Local Loss Functions for Predict-Then-Optimize [57.22851616806617]
We show that our method achieves state-of-the-art results in four domains from the literature.
Our approach outperforms the best existing method by nearly 200% when the localness assumption is broken.
arXiv Detail & Related papers (2023-05-26T11:17:45Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence more stably and accurately.
Our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Fast Continuous and Integer L-shaped Heuristics Through Supervised Learning [4.521119623956821]
We propose a methodology to accelerate the solution of mixed-integer linear two-stage programs.
We aim at solving problems where the second stage is highly demanding.
Our core idea is to gain large reductions in online solution time while incurring small reductions in first-stage solution accuracy.
arXiv Detail & Related papers (2022-05-02T13:15:32Z)
- The Machine Learning for Combinatorial Optimization Competition (ML4CO): Results and Insights [59.93939636422896]
The ML4CO aims at improving state-of-the-art optimization solvers by replacing key components.
The competition featured three challenging tasks: finding the best feasible solution, producing the tightest optimality certificate, and giving an appropriate routing configuration.
arXiv Detail & Related papers (2022-03-04T17:06:00Z)
- Machine Learning Methods in Solving the Boolean Satisfiability Problem [72.21206588430645]
The paper reviews the recent literature on solving the Boolean satisfiability problem (SAT) with machine learning techniques.
We examine the evolving ML-SAT solvers from naive classifiers with handcrafted features to the emerging end-to-end SAT solvers such as NeuroSAT.
arXiv Detail & Related papers (2022-03-02T05:14:12Z)
- Minimizing Entropy to Discover Good Solutions to Recurrent Mixed Integer Programs [0.0]
Current solvers for mixed-integer programming (MIP) problems are designed to perform well on a wide range of problems.
Recent works have shown that machine learning (ML) can be integrated with an MIP solver to inject domain knowledge and efficiently close the optimality gap.
This paper proposes an online solver that uses the notion of entropy to efficiently build a model with minimal training data and tuning.
arXiv Detail & Related papers (2022-02-07T18:52:56Z)
- Learning to Sparsify Travelling Salesman Problem Instances [0.5985204759362747]
We use machine learning for pruning as a pre-processing step, followed by an exact programming approach, to sparsify travelling salesman problem instances.
Our learning approach requires very little training data and is amenable to mathematical analysis.
arXiv Detail & Related papers (2021-04-19T14:35:14Z)
- Efficient time stepping for numerical integration using reinforcement learning [0.15393457051344295]
We propose a data-driven time stepping scheme based on machine learning and meta-learning.
First, one or several (in the case of non-smooth or hybrid systems) base learners are trained using RL.
Then, a meta-learner is trained which (depending on the system state) selects the base learner that appears to be optimal for the current situation.
arXiv Detail & Related papers (2021-04-08T07:24:54Z)
- Graph Minors Meet Machine Learning: the Power of Obstructions [0.90238471756546]
We show the utility of using obstructions for training neural networks.
Experiments show that training with obstructions results in a substantial reduction in the number of iterations needed for convergence.
arXiv Detail & Related papers (2020-06-08T15:40:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences.