Revisit the Algorithm Selection Problem for TSP with Spatial Information Enhanced Graph Neural Networks
- URL: http://arxiv.org/abs/2302.04035v1
- Date: Wed, 8 Feb 2023 13:14:20 GMT
- Title: Revisit the Algorithm Selection Problem for TSP with Spatial Information Enhanced Graph Neural Networks
- Authors: Ya Song, Laurens Bliek, Yingqian Zhang
- Abstract summary: This paper revisits the algorithm selection problem for the Euclidean Traveling Salesman Problem (TSP).
We propose a novel Graph Neural Network (GNN), called GINES.
GINES takes the coordinates of cities and distances between cities as input.
It also outperforms the traditional handcrafted feature-based approach on one dataset.
- Score: 4.084365114504618
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Algorithm selection is a well-known problem where researchers investigate how
to construct useful features representing the problem instances and then apply
feature-based machine learning models to predict which algorithm works best
with the given instance. However, even for simple optimization problems such as
the Euclidean Traveling Salesman Problem (TSP), a general and effective feature
representation for problem instances is still lacking. The important features of TSP are
relatively well understood in the literature, based on extensive domain
knowledge and post-analysis of the solutions. In recent years, Convolutional
Neural Networks (CNNs) have become a popular approach to selecting algorithms for
TSP. Compared to traditional feature-based machine learning models, CNNs have an
automatic feature-learning ability and demand less domain expertise. However,
they still require generating intermediate representations first, i.e., multiple
images that represent each TSP instance. In this paper, we revisit the
algorithm selection problem for TSP, and propose a novel Graph Neural Network
(GNN), called GINES. GINES takes the coordinates of cities and distances
between cities as input. It is composed of a new message-passing mechanism and
a local neighborhood feature extractor to learn spatial information of TSP
instances. We evaluate GINES on two benchmark datasets. The results show that
GINES outperforms CNN and the original GINE models. It also surpasses the
traditional handcrafted feature-based approach on one dataset. The code and
dataset will be released in the final version of this paper.
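The architecture sketched in the abstract (city coordinates as input, distance-aware message passing, and a local neighborhood feature extractor) can be illustrated roughly as follows. The update rule, neighborhood size, and feature initialization here are illustrative assumptions, not the paper's exact GINES formulation:

```python
import numpy as np

def message_passing_layer(coords, h, k=5):
    """One hypothetical distance-aware message-passing step over a
    Euclidean TSP instance (illustrative, not the published GINES layer).

    coords : (n, 2) city coordinates
    h      : (n, d) current node features
    k      : neighborhood size for the local feature extractor
    """
    n = coords.shape[0]
    # pairwise Euclidean distances between cities serve as edge attributes
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)

    h_new = np.empty_like(h)
    for i in range(n):
        # local neighborhood: the k nearest cities (index 0 is i itself)
        nbrs = np.argsort(dist[i])[1:k + 1]
        # messages weighted by inverse distance, a simple spatial prior
        w = 1.0 / (dist[i, nbrs] + 1e-8)
        msg = (w[:, None] * h[nbrs]).sum(axis=0) / w.sum()
        # GIN-style update: combine self features with aggregated messages
        h_new[i] = np.maximum(h[i] + msg, 0.0)  # ReLU nonlinearity
    return h_new

coords = np.random.rand(20, 2)                  # 20 random cities
h = np.hstack([coords, np.zeros((20, 2))])      # init features from coordinates
h = message_passing_layer(coords, h)
print(h.shape)  # (20, 4)
```

Stacking a few such layers and pooling the node features would yield an instance-level embedding that a classifier head could map to an algorithm choice.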
Related papers
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
We show that feature selection with SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction.
arXiv Detail & Related papers (2024-08-08T16:48:33Z) - Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved the graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z) - Sample Complexity of Algorithm Selection Using Neural Networks and Its Applications to Branch-and-Cut [1.4624458429745086]
We build upon recent work in this line of research by considering the setup where, instead of selecting a single algorithm that has the best performance, we allow the possibility of selecting an algorithm based on the instance to be solved.
In particular, given a representative sample of instances, we learn a neural network that maps an instance of the problem to the most appropriate algorithm for that instance.
In other words, the neural network will take as input a mixed-integer optimization instance and output a decision that will result in a small branch-and-cut tree for that instance.
arXiv Detail & Related papers (2024-02-04T03:03:27Z) - Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z) - Automated Algorithm Selection: from Feature-Based to Feature-Free Approaches [0.5801044612920815]
We propose a novel technique for algorithm-selection, applicable to optimisation in which there is implicit sequential information encapsulated in the data.
We train two types of recurrent neural networks to predict a packing in online bin-packing, selecting from four well-known domains.
arXiv Detail & Related papers (2022-03-24T23:59:50Z) - Learning to Sparsify Travelling Salesman Problem Instances [0.5985204759362747]
We use machine-learning-based pruning as a pre-processing step, followed by an exact integer programming approach, to sparsify travelling salesman problem instances.
Our learning approach requires very little training data and is amenable to mathematical analysis.
arXiv Detail & Related papers (2021-04-19T14:35:14Z) - Variational models for signal processing with Graph Neural Networks [3.5939555573102853]
This paper is devoted to signal processing on point-clouds by means of neural networks.
In this work, we investigate the use of variational models for such Graph Neural Networks to process signals on graphs for unsupervised learning.
arXiv Detail & Related papers (2021-03-30T13:31:11Z) - Solving Mixed Integer Programs Using Neural Networks [57.683491412480635]
This paper applies learning to the two key sub-tasks of a Mixed Integer Program (MIP) solver: generating a high-quality joint variable assignment, and bounding the gap in objective value between that assignment and an optimal one.
Our approach constructs two corresponding neural network-based components, Neural Diving and Neural Branching, to use in a base MIP solver such as SCIP.
We evaluate our approach on six diverse real-world datasets, including two Google production datasets and MIPLIB, by training separate neural networks on each.
arXiv Detail & Related papers (2020-12-23T09:33:11Z) - Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency that a neural network "forgets" the previous learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z) - Deep Learning as a Competitive Feature-Free Approach for Automated Algorithm Selection on the Traveling Salesperson Problem [0.0]
We focus on the well-known Euclidean Traveling Salesperson Problem (TSP).
We evolve instances with 1,000 nodes where the solvers show strongly different performance profiles.
We show that a feature-free deep neural network approach based solely on visual representations of the instances already matches classical AS model results.
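A visual representation of a TSP instance, of the kind a feature-free CNN-based selector could consume, might be produced along these lines; the resolution and density normalization are illustrative choices, not the paper's actual pipeline:

```python
import numpy as np

def instance_to_image(coords, res=32):
    """Rasterize a Euclidean TSP instance into a city-density image
    (an illustrative visual encoding for a CNN-based selector)."""
    img = np.zeros((res, res))
    # normalize coordinates into the unit square
    lo, hi = coords.min(axis=0), coords.max(axis=0)
    norm = (coords - lo) / np.maximum(hi - lo, 1e-8)
    # count cities falling into each pixel
    idx = np.minimum((norm * res).astype(int), res - 1)
    for x, y in idx:
        img[y, x] += 1.0
    return img / img.max()  # scale densities to [0, 1]

coords = np.random.rand(1000, 2)  # a 1,000-node instance, as in the paper
img = instance_to_image(coords)
print(img.shape)  # (32, 32)
```

Such images can be fed to an off-the-shelf CNN classifier, sidestepping handcrafted instance features entirely.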
arXiv Detail & Related papers (2020-06-29T12:15:35Z) - FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
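The "computation then aggregation" (CTA) pattern mentioned in the FedPD entry can be sketched with a toy FedAvg loop on a linear model; the data, learning rate, and step counts below are illustrative, not from either paper:

```python
import numpy as np

# Toy FedAvg illustrating "computation then aggregation": each device
# runs a few local gradient steps, then the server averages the models.
rng = np.random.default_rng(1)
n_devices, dim = 5, 3
local_data = [rng.normal(size=(20, dim)) for _ in range(n_devices)]
# all devices share the ground-truth weights [1, 1, 1]
local_targets = [X @ np.ones(dim) for X in local_data]

w = np.zeros(dim)                    # global model
for _ in range(20):                  # communication rounds
    updates = []
    for X, y in zip(local_data, local_targets):
        w_local = w.copy()
        for _ in range(5):           # local computation: gradient steps
            grad = X.T @ (X @ w_local - y) / len(y)
            w_local -= 0.1 * grad
        updates.append(w_local)
    w = np.mean(updates, axis=0)     # aggregation: average local models

print(np.round(w, 2))  # approaches the shared weights [1, 1, 1]
```

Averaging after several local steps is what keeps communication cheap, and it is exactly this interleaving that FedPD revisits for non-IID data.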
This list is automatically generated from the titles and abstracts of the papers on this site.