How Good Is Neural Combinatorial Optimization? A Systematic Evaluation
on the Traveling Salesman Problem
- URL: http://arxiv.org/abs/2209.10913v2
- Date: Wed, 12 Apr 2023 09:12:26 GMT
- Title: How Good Is Neural Combinatorial Optimization? A Systematic Evaluation
on the Traveling Salesman Problem
- Authors: Shengcai Liu, Yu Zhang, Ke Tang, Xin Yao
- Abstract summary: This work presents a comprehensive comparative study of neural combinatorial optimization (NCO) solvers and alternative solvers.
Our results show that the solvers learned by NCO approaches, in general, still fall short of traditional solvers in nearly all these aspects.
- Score: 31.451338706630583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional solvers for tackling combinatorial optimization (CO) problems are
usually designed by human experts. Recently, there has been a surge of interest
in utilizing deep learning, especially deep reinforcement learning, to
automatically learn effective solvers for CO. The resultant new paradigm is
termed neural combinatorial optimization (NCO). However, the advantages and
disadvantages of NCO relative to other approaches have not been empirically or
theoretically well studied. This work presents a comprehensive comparative
study of NCO solvers and alternative solvers. Specifically, taking the
traveling salesman problem as the testbed problem, the performance of the
solvers is assessed in five aspects, i.e., effectiveness, efficiency,
stability, scalability, and generalization ability. Our results show that the
solvers learned by NCO approaches, in general, still fall short of traditional
solvers in nearly all these aspects. A potential benefit of NCO solvers would
be their superior time and energy efficiency for small-size problem instances
when sufficient training instances are available. Hopefully, this work would
help with a better understanding of the strengths and weaknesses of NCO and
provide a comprehensive evaluation protocol for further benchmarking NCO
approaches in comparison to other approaches.
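To make the five-aspect protocol concrete, here is a minimal, illustrative harness (not the paper's actual code; the function names `tour_length`, `nearest_neighbour_solver`, and `evaluate` are placeholders introduced here) that measures effectiveness as mean tour length, efficiency as wall-clock time, and stability as run-to-run variation for any TSP solver exposed as a Python callable. A greedy nearest-neighbour heuristic stands in for a learned or traditional solver.
```python
import math
import random
import statistics
import time

def tour_length(points, tour):
    """Total Euclidean length of a closed tour over 2-D points."""
    return sum(
        math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def nearest_neighbour_solver(points, seed=0):
    """Stand-in solver: greedy nearest-neighbour construction from a random start node."""
    rng = random.Random(seed)
    unvisited = set(range(len(points)))
    current = rng.choice(sorted(unvisited))
    tour = [current]
    unvisited.remove(current)
    while unvisited:
        current = min(unvisited, key=lambda j: math.dist(points[current], points[j]))
        tour.append(current)
        unvisited.remove(current)
    return tour

def evaluate(solver, points, runs=10):
    """Effectiveness = mean tour length, efficiency = mean wall-clock time,
    stability = standard deviation of tour length across repeated runs."""
    lengths, seconds = [], []
    for run in range(runs):
        start = time.perf_counter()
        tour = solver(points, seed=run)
        seconds.append(time.perf_counter() - start)
        lengths.append(tour_length(points, tour))
    return {
        "mean_length": statistics.mean(lengths),
        "length_std": statistics.stdev(lengths),
        "mean_seconds": statistics.mean(seconds),
    }

if __name__ == "__main__":
    rng = random.Random(42)
    instance = [(rng.random(), rng.random()) for _ in range(100)]  # uniform TSP-100
    print(evaluate(nearest_neighbour_solver, instance))
```
Scalability and generalization would be probed with the same harness by re-running it on larger instances and on instances drawn from a distribution different from the training one, and effectiveness is typically reported as the optimality gap relative to a strong traditional solver.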
Related papers
- Learning to Solve Combinatorial Optimization under Positive Linear Constraints via Non-Autoregressive Neural Networks [103.78912399195005]
Combinatorial optimization (CO) is a fundamental problem at the intersection of computer science, applied mathematics, and related fields.
In this paper, we design a family of non-autoregressive neural networks to solve CO problems under positive linear constraints.
We validate the effectiveness of this framework in solving representative CO problems including facility location, max-set covering, and traveling salesman problem.
arXiv Detail & Related papers (2024-09-06T14:58:31Z)
- UDC: A Unified Neural Divide-and-Conquer Framework for Large-Scale Combinatorial Optimization Problems [8.871356150316224]
Two-stage neural methods motivated by divide-and-conquer strategies have shown efficiency in addressing large-scale CO problems.
This article develops a unified neural divide-and-conquer framework (i.e., UDC) for solving general large-scale CO problems.
arXiv Detail & Related papers (2024-06-29T04:29:03Z)
- Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy to tackle optimization problems involves the adoption of graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of GNNs and traditional algorithm solvers in the realm of CO, there is limited research on their integrated use and the correlation between them within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z)
- Neural Combinatorial Optimization Algorithms for Solving Vehicle Routing Problems: A Comprehensive Survey with Perspectives [14.47130974868925]
This survey aims to provide a comprehensive taxonomy of NCO solvers for VRPs.
We divide all NCO solvers into four distinct categories, namely Learning to Construct, Learning to Improve, Learning to Predict-Once, and Learning to Predict-Multiplicity solvers.
We discuss the inadequacies of the SOTA solvers, including poor generalization, inability to solve large-scale VRPs, and difficulty in comparing these NCO solvers with conventional Operations Research algorithms.
arXiv Detail & Related papers (2024-06-01T12:18:39Z)
- An Efficient Learning-based Solver Comparable to Metaheuristics for the Capacitated Arc Routing Problem [67.92544792239086]
We introduce an NN-based solver to significantly narrow the gap with advanced metaheuristics.
First, we propose direction-aware facilitating attention model (DaAM) to incorporate directionality into the embedding process.
Second, we design a supervised reinforcement learning scheme that involves supervised pre-training to establish a robust initial policy.
arXiv Detail & Related papers (2024-03-11T02:17:42Z)
- Neural Algorithmic Reasoning for Combinatorial Optimisation [20.36694807847833]
We propose leveraging recent advancements in neural reasoning to improve the learning of CO problems.
We suggest pre-training our neural model on relevant algorithms before training it on CO instances.
Our results demonstrate that by using this learning setup, we achieve superior performance compared to non-algorithmically informed deep learning models.
arXiv Detail & Related papers (2023-05-18T13:59:02Z)
- Learning to Optimize for Reinforcement Learning [58.01132862590378]
Reinforcement learning (RL) is essentially different from supervised learning, and in practice, these learned optimizers do not work well even in simple RL tasks.
The agent-gradient distribution is non-independent and identically distributed, leading to inefficient meta-training.
We show that, although only trained in toy tasks, our learned optimizer can generalize to unseen complex tasks in Brax.
arXiv Detail & Related papers (2023-02-03T00:11:02Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, based on minimizing the population loss, that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Sym-NCO: Leveraging Symmetricity for Neural Combinatorial Optimization [16.127824824652077]
Deep reinforcement learning (DRL)-based combinatorial optimization (CO) methods have shown significant merit over conventional CO solvers.
This paper presents a novel training scheme, Sym-NCO, that achieves significant performance increments to existing DRL-NCO methods.
arXiv Detail & Related papers (2022-05-26T07:55:43Z)
- Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis on the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost, and generalization to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z)
- POMO: Policy Optimization with Multiple Optima for Reinforcement Learning [8.819672165548477]
We introduce Policy Optimization with Multiple Optima (POMO), an end-to-end approach for building such a solver.
POMO is applicable to a wide range of CO problems. It is designed to exploit the symmetries in the representation of a CO solution; a minimal sketch of this multi-start, shared-baseline idea appears after this list.
We demonstrate the effectiveness of POMO by solving three popular NP-hard problems, namely traveling salesman (TSP), capacitated vehicle routing (CVRP), and 0-1 knapsack (KP).
arXiv Detail & Related papers (2020-10-30T00:57:50Z)
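As referenced in the POMO entry above, the sketch below illustrates the solution symmetry POMO exploits: a TSP tour can be constructed starting from any of its nodes, so rolling out one trajectory per start node yields a set of rewards whose mean serves as a shared REINFORCE baseline, removing the need for a learned critic. This is an illustration under stated assumptions, not the authors' implementation: `greedy_policy` is a placeholder for the learned attention-based policy, and only the rollout and advantage computation are shown.
```python
import math
import random

def tour_length(points, tour):
    """Euclidean length of a closed tour over 2-D points."""
    return sum(
        math.dist(points[tour[i]], points[tour[(i + 1) % len(tour)]])
        for i in range(len(tour))
    )

def greedy_policy(points, start):
    """Stand-in for a learned constructive policy: nearest-neighbour from a fixed start node."""
    unvisited = set(range(len(points))) - {start}
    tour, current = [start], start
    while unvisited:
        current = min(unvisited, key=lambda j: math.dist(points[current], points[j]))
        tour.append(current)
        unvisited.remove(current)
    return tour

def pomo_style_advantages(points, policy):
    """One rollout per start node; each rollout's advantage is its reward minus the
    mean reward over all rollouts on the same instance (the shared baseline)."""
    rewards = [-tour_length(points, policy(points, start=s)) for s in range(len(points))]
    baseline = sum(rewards) / len(rewards)
    return [reward - baseline for reward in rewards]

if __name__ == "__main__":
    rng = random.Random(0)
    instance = [(rng.random(), rng.random()) for _ in range(20)]
    advantages = pomo_style_advantages(instance, greedy_policy)
    print(round(max(advantages), 4), round(min(advantages), 4))
```
In POMO proper, these advantages weight the log-probabilities of the sampled actions in a policy-gradient update; the code here stops at forming the multi-start rollouts and the shared baseline.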