Learning to Sparsify Travelling Salesman Problem Instances
- URL: http://arxiv.org/abs/2104.09345v1
- Date: Mon, 19 Apr 2021 14:35:14 GMT
- Title: Learning to Sparsify Travelling Salesman Problem Instances
- Authors: James Fitzpatrick, Deepak Ajwani and Paula Carroll
- Abstract summary: We use a pruning heuristic leveraging machine learning as a pre-processing step, followed by an exact Integer Programming approach, to sparsify travelling salesman problem instances.
Our learning approach requires very little training data and is amenable to mathematical analysis.
- Score: 0.5985204759362747
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In order to deal with the high development time of exact and approximation
algorithms for NP-hard combinatorial optimisation problems and the high running
time of exact solvers, deep learning techniques have been used in recent years
as an end-to-end approach to find solutions. However, deep learning techniques
raise issues of representation, generalisation, complex architectures and
interpretability of models for mathematical analysis. As a
compromise, machine learning can be used to improve the run time performance of
exact algorithms in a matheuristics framework. In this paper, we use a pruning
heuristic leveraging machine learning as a pre-processing step followed by an
exact Integer Programming approach. We apply this approach to sparsify
instances of the classical travelling salesman problem. Our approach learns
which edges in the underlying graph are unlikely to belong to an optimal
solution and removes them, thus sparsifying the graph and significantly
reducing the number of decision variables. We use carefully selected features
derived from linear programming relaxation, cutting planes exploration,
minimum-weight spanning tree heuristics and various other local and statistical
analysis of the graph. Our learning approach requires very little training data
and is amenable to mathematical analysis. We demonstrate that our approach can
reliably prune a large fraction of the variables in TSP instances from
TSPLIB/MATILDA (>85%) while preserving most of the optimal tour edges. Our
approach can successfully prune problem instances even if they lie outside the
training distribution, resulting in small optimality gaps between the pruned
and original problems in most cases. Using our learning technique, we discover
novel heuristics for sparsifying TSP instances, which may be of independent
interest for variants of the vehicle routing problem.
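
To make the pruning pipeline concrete, here is a minimal sketch under stated assumptions: it uses networkx and scikit-learn, and the feature set (edge weight, minimum-spanning-tree membership, weight rank among incident edges) is a simplified stand-in for the paper's LP-relaxation and cutting-plane features; all function names are illustrative.

```python
# Minimal, illustrative sketch of ML-based edge pruning for TSP instances.
# The features below simplify the paper's LP-relaxation / cutting-plane
# features; this is not the authors' exact pipeline.
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def edge_features(G):
    """Per-edge features: weight, MST membership, and weight rank among
    the edges incident to each endpoint."""
    mst = {frozenset(e) for e in nx.minimum_spanning_tree(G, weight="weight").edges()}
    feats, edges = [], []
    for u, v, d in G.edges(data=True):
        w = d["weight"]
        rank_u = sum(G[u][x]["weight"] < w for x in G[u])  # cheaper incident edges at u
        rank_v = sum(G[v][x]["weight"] < w for x in G[v])
        feats.append([w, float(frozenset((u, v)) in mst), rank_u, rank_v])
        edges.append((u, v))
    return np.array(feats), edges

def train_pruner(train_instances):
    """train_instances: list of (graph, set of frozenset edges of an optimal tour).
    Each edge is labelled 1 if it belongs to the known optimal tour, else 0."""
    X, y = [], []
    for G, opt_edges in train_instances:
        f, edges = edge_features(G)
        X.append(f)
        y.extend(int(frozenset(e) in opt_edges) for e in edges)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(np.vstack(X), y)
    return clf

def sparsify(G, clf, threshold=0.05):
    """Remove edges the classifier deems unlikely to be in an optimal tour."""
    f, edges = edge_features(G)
    p = clf.predict_proba(f)[:, 1]
    H = G.copy()
    H.remove_edges_from(e for e, pi in zip(edges, p) if pi < threshold)
    return H
```

The sparsified graph is then handed to the exact integer-programming solver; the paper's finding is that most optimal-tour edges survive this step even when over 85% of the decision variables are pruned.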
Related papers
- Training Artificial Neural Networks by Coordinate Search Algorithm [0.20971479389679332]
We propose an efficient version of the gradient-free Coordinate Search (CS) algorithm for training neural networks.
The proposed algorithm can be used with non-differentiable activation functions and tailored to multi-objective/multi-loss problems.
Finding the optimal values for weights of ANNs is a large-scale optimization problem.
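
As a rough illustration of the gradient-free idea (a generic coordinate search, not necessarily the authors' exact CS variant), the sketch below perturbs one weight at a time and keeps a move only if the loss improves; since no gradients are used, the loss may contain non-differentiable activation functions.

```python
# Generic gradient-free coordinate search over a weight vector (illustrative).
import numpy as np

def coordinate_search(loss, w, step=0.1, iters=100, decay=0.9):
    """Try +/- step along each coordinate of w; keep a move only if it
    lowers the loss, then anneal the step size."""
    best = loss(w)
    for _ in range(iters):
        for i in range(len(w)):
            for delta in (step, -step):
                w[i] += delta
                cand = loss(w)
                if cand < best:
                    best = cand       # keep the improving move
                else:
                    w[i] -= delta     # revert
        step *= decay
    return w, best

# Usage: flatten the network weights into a vector and define loss(w)
# to rebuild the network and evaluate the training loss on a batch.
```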
arXiv Detail & Related papers (2024-02-20T01:47:25Z) - Learning to Optimize Permutation Flow Shop Scheduling via Graph-based
Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence and makes training more stable and accurate.
Our model's network parameters are reduced to only 37% of the state-of-the-art baseline's, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
arXiv Detail & Related papers (2022-10-31T09:46:26Z) - Simple Stochastic and Online Gradient DescentAlgorithms for Pairwise
Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
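
The sketch below is one common instantiation, assumed rather than taken from the paper: online gradient descent on a pairwise hinge loss, where each streaming example is paired with the previous one.

```python
# Illustrative pairwise OGD sketch (an assumed instantiation, not the
# paper's exact method).
import numpy as np

def pairwise_ogd(stream, dim, eta=0.1, margin=1.0):
    """stream yields (x, y) with x a feature vector and y a real label.
    Under the linear model w, the higher-labelled point of each
    consecutive pair should score higher by the given margin."""
    w = np.zeros(dim)
    prev = None
    for x, y in stream:
        if prev is not None and y != prev[1]:
            xp, yp = prev
            pos, neg = (x, xp) if y > yp else (xp, x)  # 'pos' has the larger label
            if w @ (pos - neg) < margin:               # hinge loss is active
                w += eta * (pos - neg)                 # gradient step on the pair
        prev = (x, y)
    return w
```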
arXiv Detail & Related papers (2021-11-23T18:10:48Z) - Neural Weighted A*: Learning Graph Costs and Heuristics with
Differentiable Anytime A* [12.117737635879037]
Recent works related to data-driven planning aim at learning either cost functions or planner functions, but not both.
We propose Neural Weighted A*, a differentiable anytime planner able to produce improved representations of planar maps as graph costs and heuristics.
We experimentally show the validity of our claims by testing Neural Weighted A* against several baselines, introducing a novel, tile-based navigation dataset.
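
For context, the classical (non-neural) weighted A* that such planners build on can be sketched as follows; the interface (neighbors, cost, h) is illustrative.

```python
# Classical weighted A* for reference (not the paper's differentiable version).
import heapq
import itertools

def weighted_astar(start, goal, neighbors, cost, h, w=1.5):
    """Expand nodes by f(n) = g(n) + w * h(n). With w > 1 the search is
    greedier and usually faster; for admissible h, the returned cost is
    at most w times optimal."""
    tie = itertools.count()  # tie-breaker so the heap never compares nodes
    heap = [(w * h(start), next(tie), 0.0, start, [start])]
    g_best = {start: 0.0}
    while heap:
        _, _, g, node, path = heapq.heappop(heap)
        if node == goal:
            return path, g
        for nb in neighbors(node):
            g_nb = g + cost(node, nb)
            if g_nb < g_best.get(nb, float("inf")):
                g_best[nb] = g_nb
                heapq.heappush(heap, (g_nb + w * h(nb), next(tie), g_nb, nb, path + [nb]))
    return None, float("inf")
```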
arXiv Detail & Related papers (2021-05-04T13:17:30Z) - Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm by employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Online Model Selection for Reinforcement Learning with Function
Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z) - Physarum Powered Differentiable Linear Programming Layers and
Applications [48.77235931652611]
We propose an efficient and differentiable solver for general linear programming problems.
We show the use of our solver in a video segmentation task and meta-learning for few-shot learning.
arXiv Detail & Related papers (2020-04-30T01:50:37Z) - Learning with Differentiable Perturbed Optimizers [54.351317101356614]
We propose a systematic method to transform optimizers into operations that are differentiable and never locally constant.
Our approach relies on stochastic perturbations, and can be used readily together with existing solvers.
We show how this framework can be connected to a family of losses developed in structured prediction, and give theoretical guarantees for their use in learning tasks.
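
A minimal Monte Carlo reading of the perturbation idea, as a sketch rather than the paper's exact construction: a hard argmax becomes smooth in expectation once its input is perturbed with noise.

```python
# Monte Carlo sketch of a perturbed (smoothed) argmax; illustrative only.
import numpy as np

def perturbed_argmax(theta, sigma=1.0, n_samples=1000, rng=None):
    """Estimate E[one_hot(argmax(theta + sigma * Z))] with Z ~ N(0, I).
    The expectation varies smoothly in theta, unlike the hard argmax."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = np.zeros(len(theta))
    for _ in range(n_samples):
        z = rng.standard_normal(len(theta))
        out[np.argmax(theta + sigma * z)] += 1.0
    return out / n_samples  # a point in the probability simplex

# Near-ties get split smoothly instead of jumping to a one-hot vector:
print(perturbed_argmax(np.array([1.0, 1.1])))
```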
arXiv Detail & Related papers (2020-02-20T11:11:32Z) - Learning fine-grained search space pruning and heuristics for
combinatorial optimization [5.72274610208488]
We propose a framework for leveraging machine learning techniques to scale-up exact optimization algorithms.
Our framework learns the relatively simpler task of pruning the elements in order to reduce the size of the problem instances.
We show that our framework can prune a large fraction of the input graph and still detect almost all of the maximum cliques.
arXiv Detail & Related papers (2020-01-05T13:10:39Z)