Unsupervised Training for Neural TSP Solver
- URL: http://arxiv.org/abs/2207.13667v1
- Date: Wed, 27 Jul 2022 17:33:29 GMT
- Title: Unsupervised Training for Neural TSP Solver
- Authors: Elīza Gaile, Andis Draguns, Emīls Ozoliņš, and Kārlis Freivalds
- Abstract summary: We introduce a novel unsupervised learning approach for solving the travelling salesman problem.
We use a relaxation of an integer linear program for TSP to construct a loss function that does not require correct instance labels.
Our approach is also more stable and easier to train than reinforcement learning.
- Score: 2.9398911304923447
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There has been a growing number of machine learning methods for approximately
solving the travelling salesman problem. However, these methods often require
solved instances for training or use complex reinforcement learning approaches
that need a large amount of tuning. To avoid these problems, we introduce a
novel unsupervised learning approach. We use a relaxation of an integer linear
program for TSP to construct a loss function that does not require correct
instance labels. With variable discretization, its minimum coincides with the
optimal or near-optimal solution. Furthermore, this loss function is
differentiable and thus can be used to train neural networks directly. We use
our loss function with a Graph Neural Network and design controlled experiments
on both Euclidean and asymmetric TSP. Our approach has the advantage over
supervised learning of not requiring large labelled datasets. In addition, the
performance of our approach surpasses reinforcement learning for asymmetric TSP
and is comparable to reinforcement learning for Euclidean instances. Our
approach is also more stable and easier to train than reinforcement learning.
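To make the relaxation idea concrete, here is a minimal sketch of what such a label-free loss can look like, assuming an assignment-style ILP view of TSP: the binary edge variables are relaxed to probabilities emitted by the network, the tour length becomes the linear objective, and the degree constraints become differentiable penalties. The function name and penalty weight are illustrative assumptions, and the paper's actual loss additionally handles subtour elimination and variable discretization, which this sketch omits.

```python
import torch

def relaxed_tsp_loss(edge_probs, dist, penalty=10.0):
    """Illustrative label-free TSP loss from a relaxed ILP (an assumption,
    not the paper's exact formulation).

    edge_probs: (n, n) tensor with entries in [0, 1], e.g. GNN outputs
                passed through a sigmoid; relaxes the binary x_ij.
    dist:       (n, n) tensor of (possibly asymmetric) edge costs.
    """
    # Relaxed ILP objective: the expected tour length.
    tour_length = (edge_probs * dist).sum()
    # Degree constraints relaxed to quadratic penalties: each city should
    # have exactly one outgoing and one incoming edge.
    out_deg = ((edge_probs.sum(dim=1) - 1.0) ** 2).sum()
    in_deg = ((edge_probs.sum(dim=0) - 1.0) ** 2).sum()
    return tour_length + penalty * (out_deg + in_deg)
```

Because every term is differentiable in edge_probs, the loss can be backpropagated through a graph neural network directly, with no solved instances required.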
Related papers
- Optimization Proxies using Limited Labeled Data and Training Time -- A Semi-Supervised Bayesian Neural Network Approach [2.943640991628177]
Constrained optimization problems arise in various engineering system operations such as inventory management and electric power grids.
This work introduces a learning scheme using Bayesian Neural Networks (BNNs) to solve constrained optimization problems under limited labeled data and restricted training times.
We show that the proposed learning method outperforms conventional BNN and deep neural network (DNN) architectures.
arXiv Detail & Related papers (2024-10-04T02:10:20Z)
- Training Artificial Neural Networks by Coordinate Search Algorithm [0.20971479389679332]
We propose an efficient version of the gradient-free Coordinate Search (CS) algorithm for training neural networks.
The proposed algorithm can be used with non-differentiable activation functions and tailored to multi-objective/multi-loss problems.
Finding the optimal values for weights of ANNs is a large-scale optimization problem.
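For orientation, plain coordinate search perturbs one weight at a time and keeps any improving move, so it needs no gradients and tolerates non-differentiable activations. The sketch below shows only this textbook scheme under assumed names and step sizes, not the efficient variant the paper proposes.

```python
import numpy as np

def coordinate_search(loss, w, step=0.1, iters=100):
    """Textbook gradient-free coordinate search (illustrative only)."""
    best = loss(w)
    for _ in range(iters):
        for i in range(len(w)):            # one coordinate at a time
            for delta in (step, -step):    # try moving up, then down
                w[i] += delta
                trial = loss(w)
                if trial < best:           # keep an improving move
                    best = trial
                    break
                w[i] -= delta              # revert a non-improving move
        step *= 0.9                        # gradually shrink the step
    return w, best
```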
arXiv Detail & Related papers (2024-02-20T01:47:25Z)
- Towards a Better Theoretical Understanding of Independent Subnetwork Training [56.24689348875711]
We take a closer theoretical look at Independent Subnetwork Training (IST).
IST is a recently proposed and highly effective technique for solving the aforementioned problems.
We identify fundamental differences between IST and alternative approaches, such as distributed methods with compressed communication.
arXiv Detail & Related papers (2023-06-28T18:14:22Z)
- Decouple Graph Neural Networks: Train Multiple Simple GNNs Simultaneously Instead of One [60.5818387068983]
Graph neural networks (GNNs) suffer from severe inefficiency.
We propose to decouple a multi-layer GNN as multiple simple modules for more efficient training.
We show that the proposed framework is highly efficient with reasonable performance.
arXiv Detail & Related papers (2023-04-20T07:21:32Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which accelerates convergence and makes it more stable and accurate.
Our model's network parameters are reduced to only 37% of theirs, and the solution gap of our model towards the expert solutions decreases from 6.8% to 1.3% on average.
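As a hedged sketch of what expert-driven imitation can look like here: at each scheduling decision the model scores the remaining jobs and is trained with cross-entropy to reproduce the expert solver's choice. The architecture and names below are illustrative assumptions, not the paper's network.

```python
import torch
import torch.nn as nn

class JobScorer(nn.Module):
    """Toy scorer: one score per remaining job (illustrative)."""
    def __init__(self, feat_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 1))

    def forward(self, job_feats):               # (num_jobs, feat_dim)
        return self.net(job_feats).squeeze(-1)  # (num_jobs,)

def imitation_step(model, opt, job_feats, expert_choice):
    # Cross-entropy against the job index the expert solver picked.
    opt.zero_grad()
    logits = model(job_feats).unsqueeze(0)      # (1, num_jobs)
    loss = nn.functional.cross_entropy(logits, torch.tensor([expert_choice]))
    loss.backward()
    opt.step()
    return loss.item()
```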
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Unsupervised Optimal Power Flow Using Graph Neural Networks [172.33624307594158]
We use a graph neural network to learn a nonlinear parametrization between the power demanded and the corresponding allocation.
We show through simulations that the use of GNNs in this unsupervised learning context leads to solutions comparable to standard solvers.
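The training signal can be read the same way as in the main paper, as a hedged sketch: the GNN maps demands to an allocation and is penalized for generation cost plus violation of the supply/demand balance, so no solved OPF instances are needed. `gnn`, `cost_coeff`, and the penalty weight are illustrative stand-ins.

```python
import torch

def unsupervised_opf_loss(gnn, demand, graph, cost_coeff, penalty=100.0):
    """Illustrative unsupervised OPF objective (an assumption, not the
    paper's exact loss)."""
    allocation = gnn(demand, graph)                   # per-node generation
    generation_cost = (cost_coeff * allocation ** 2).sum()
    balance = (allocation.sum() - demand.sum()) ** 2  # supply must meet demand
    return generation_cost + penalty * balance
```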
arXiv Detail & Related papers (2022-10-17T17:30:09Z)
- Travel the Same Path: A Novel TSP Solving Strategy [0.0]
We consider the imitation learning framework, which helps a deterministic algorithm make good choices whenever it needs to.
We demonstrate a strong generalization ability of a graph neural network trained under the imitation learning framework.
arXiv Detail & Related papers (2022-10-12T03:56:37Z)
- TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels [141.29156234353133]
State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions.
We show this disparity can largely be attributed to optimization challenges presented by non-convexity.
We propose a Train-Convexify-Train (TCT) procedure to sidestep this issue.
arXiv Detail & Related papers (2022-07-13T16:58:22Z)
- Simple Stochastic and Online Gradient Descent Algorithms for Pairwise Learning [65.54757265434465]
Pairwise learning refers to learning tasks where the loss function depends on a pair of instances.
Online gradient descent (OGD) is a popular approach to handle streaming data in pairwise learning.
In this paper, we propose simple stochastic and online gradient descent methods for pairwise learning.
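As a rough illustration of the online setting: each incoming instance is paired with previously seen ones, and one gradient step is taken on the resulting pairwise losses. The buffer and sliding-window choice below are assumptions for the sketch, not the algorithms analyzed in the paper.

```python
import numpy as np

def ogd_pairwise(stream, grad_pair, w, lr=0.01, buffer_size=100):
    """Illustrative online gradient descent for pairwise learning.

    grad_pair(w, z, z_prev) returns the gradient of a pairwise loss
    (e.g. an AUC surrogate) on a single pair of instances.
    """
    buffer = []
    for z in stream:
        if buffer:
            # Average the pairwise gradients of the new instance against
            # the stored ones, then take one descent step.
            g = np.mean([grad_pair(w, z, zp) for zp in buffer], axis=0)
            w = w - lr * g
        buffer.append(z)
        if len(buffer) > buffer_size:   # bound memory with a sliding window
            buffer.pop(0)
    return w
```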
arXiv Detail & Related papers (2021-11-23T18:10:48Z)
- Deep learning for inverse problems with unknown operator [0.0]
In inverse problems where the forward operator $T$ is unknown, we have access to training data consisting of functions $f_i$ and their noisy images $Tf_i$.
We propose a new method that requires minimal assumptions on the data, and prove reconstruction rates that depend on the number of training points and the noise level.
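One hedged reading of the setup: since $T$ is unknown, a network is fit directly on the training pairs, mapping each noisy image $Tf_i$ back to $f_i$, so reconstruction is learned from data alone. The toy architecture and loop below are assumptions, not the estimator whose rates the paper proves.

```python
import torch
import torch.nn as nn

def train_reconstructor(noisy_images, signals, epochs=50, lr=1e-3):
    """Fit a toy reconstructor on pairs (T f_i + noise, f_i); illustrative."""
    net = nn.Sequential(nn.Linear(noisy_images.shape[1], 128), nn.ReLU(),
                        nn.Linear(128, signals.shape[1]))
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = ((net(noisy_images) - signals) ** 2).mean()  # MSE reconstruction
        loss.backward()
        opt.step()
    return net
```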
arXiv Detail & Related papers (2021-08-05T17:21:12Z)
- Learning to Sparsify Travelling Salesman Problem Instances [0.5985204759362747]
We use a machine learning based pruning approach as a pre-processing step, followed by an exact integer programming approach, to sparsify travelling salesman problem instances.
Our learning approach requires very little training data and is amenable to mathematical analysis.
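As a hedged sketch of the pruning idea: a learned scorer ranks edges by how likely they are to appear in an optimal tour, only the top-scoring edges are kept, and the sparsified instance is handed to the exact solver. `edge_score` and the keep fraction are hypothetical stand-ins for the learned model.

```python
import heapq

def sparsify_instance(dist, edge_score, keep_frac=0.2):
    """Keep only the highest-scoring edges (illustrative pre-processing)."""
    n = len(dist)
    edges = [(i, j) for i in range(n) for j in range(n) if i != j]
    # Keep at least n edges; a real sparsifier must also preserve feasibility.
    k = max(n, int(keep_frac * len(edges)))
    return set(heapq.nlargest(k, edges, key=lambda e: edge_score(dist, *e)))
```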
arXiv Detail & Related papers (2021-04-19T14:35:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.