What's Wrong with Deep Learning in Tree Search for Combinatorial
Optimization
- URL: http://arxiv.org/abs/2201.10494v1
- Date: Tue, 25 Jan 2022 17:37:34 GMT
- Title: What's Wrong with Deep Learning in Tree Search for Combinatorial
Optimization
- Authors: Maximilian Böther, Otto Kißig, Martin Taraz, Sarel Cohen, Karen Seidel, Tobias Friedrich
- Abstract summary: We present an open-source benchmark suite for the NP-hard Maximum Independent Set problem, in both its weighted and unweighted variants.
We also conduct an in-depth analysis of the popular guided tree search algorithm by Li et al. [NeurIPS 2018], testing various configurations on small and large synthetic and real-world graphs.
We show that the graph convolution network used in the tree search does not learn a meaningful representation of the solution structure, and can in fact be replaced by random values.
- Score: 8.879790406465556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combinatorial optimization lies at the core of many real-world problems.
Especially since the rise of graph neural networks (GNNs), the deep learning
community has been developing solvers that derive solutions to NP-hard problems
by learning the problem-specific solution structure. However, reproducing the
results of these publications proves to be difficult. We make three
contributions. First, we present an open-source benchmark suite for the NP-hard
Maximum Independent Set problem, in both its weighted and unweighted variants.
The suite offers a unified interface to various state-of-the-art traditional
and machine learning-based solvers. Second, using our benchmark suite, we
conduct an in-depth analysis of the popular guided tree search algorithm by Li
et al. [NeurIPS 2018], testing various configurations on small and large
synthetic and real-world graphs. By re-implementing their algorithm with a
focus on code quality and extensibility, we show that the graph convolution
network used in the tree search does not learn a meaningful representation of
the solution structure, and can in fact be replaced by random values. Instead,
the tree search relies on algorithmic techniques like graph kernelization to
find good solutions. Thus, the results from the original publication are not
reproducible. Third, we extend the analysis to compare the tree search
implementations to other solvers, showing that the classical algorithmic
solvers are often faster while providing solutions of similar quality.
Additionally, we analyze a recent solver based on reinforcement learning and
observe that for this solver, the GNN is responsible for the competitive
solution quality.
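To make the headline ablation concrete, here is a minimal sketch in Python (our own illustration, not the authors' implementation; the use of networkx and all function names such as kernelize, random_scores, and guided_greedy are assumptions). It pairs the two ingredients the abstract identifies: classical kernelization rules that shrink the graph, and a per-vertex scorer that guides construction. Plugging random_scores in where a trained GCN would go mirrors the paper's finding that random values suffice; the greedy rollout stands in for the full guided tree search, which would branch rather than commit.

```python
import random

import networkx as nx


def kernelize(graph):
    """Two classical MIS reduction rules, applied to a fixed point:
    an isolated vertex always joins the solution, and a degree-1
    vertex can be taken greedily, deleting its sole neighbor."""
    g = graph.copy()
    forced = set()
    changed = True
    while changed:
        changed = False
        for v in list(g.nodes):
            if v not in g:
                continue  # already deleted by an earlier rule
            if g.degree(v) == 0:
                forced.add(v)
                g.remove_node(v)
                changed = True
            elif g.degree(v) == 1:
                u = next(iter(g.neighbors(v)))
                forced.add(v)
                g.remove_nodes_from([v, u])
                changed = True
    return g, forced


def random_scores(graph):
    """Stand-in for the GCN: a random 'probability' per vertex that
    it belongs to a large independent set (the paper's ablation)."""
    return {v: random.random() for v in graph.nodes}


def guided_greedy(graph, scorer):
    """One greedy rollout of score-guided construction: kernelize,
    fix the best-scoring vertex, delete its closed neighborhood,
    and repeat until the graph is empty."""
    g, solution = kernelize(graph)
    while g.number_of_nodes() > 0:
        scores = scorer(g)
        v = max(g.nodes, key=scores.get)
        neighborhood = [v] + list(g.neighbors(v))
        solution.add(v)
        g.remove_nodes_from(neighborhood)
        g, forced = kernelize(g)
        solution |= forced
    return solution


g = nx.erdos_renyi_graph(200, 0.05, seed=0)
mis = guided_greedy(g, random_scores)
assert all(not g.has_edge(u, v) for u in mis for v in mis)
print(f"found an independent set of {len(mis)} vertices")
```

In the paper's framing, the point is that swapping the learned scorer for random values (as above) leaves solution quality essentially unchanged, which isolates the algorithmic machinery, kernelization in particular, rather than the learned representation, as the source of the good solutions.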
Related papers
- Assessing and Enhancing Graph Neural Networks for Combinatorial Optimization: Novel Approaches and Application in Maximum Independent Set Problems [0.0]
Graph Neural Networks (GNNs) show promise for solving combinatorial optimization (CO) problems.
This study investigates the effectiveness of GNNs in solving the maximum independent set (MIS) problem.
arXiv Detail & Related papers (2024-11-06T09:12:31Z)
- A Benchmark for Maximum Cut: Towards Standardization of the Evaluation of Learned Heuristics for Combinatorial Optimization [12.016449555335976]
We propose an open-source benchmark suite MaxCut-Bench dedicated to the NP-hard Maximum Cut problem in both its weighted and unweighted variants.
We use the benchmark in an attempt to systematically corroborate or reproduce the results of several popular learning-based approaches.
Our results show that several of the learned heuristics fail to outperform a naive greedy algorithm (a sketch of such a baseline appears after this list), and that only one of them consistently outperforms Tabu Search.
arXiv Detail & Related papers (2024-06-14T19:44:23Z)
- Decision-focused Graph Neural Networks for Combinatorial Optimization [62.34623670845006]
An emerging strategy to tackle optimization problems involves the adoption of graph neural networks (GNNs) as an alternative to traditional algorithms.
Despite the growing popularity of GNNs and traditional algorithm solvers in the realm of CO, there is limited research on their integrated use and the correlation between them within an end-to-end framework.
We introduce a decision-focused framework that utilizes GNNs to address CO problems with auxiliary support.
arXiv Detail & Related papers (2024-06-05T22:52:27Z)
- Learning To Dive In Branch And Bound [95.13209326119153]
We propose L2Dive to learn specific diving heuristics with graph neural networks.
We train generative models to predict variable assignments and leverage the duality of linear programs to make diving decisions.
arXiv Detail & Related papers (2023-01-24T12:01:45Z)
- Learning Branching Heuristics from Graph Neural Networks [1.4660170768702356]
We first propose a new graph neural network (GNN) model designed using a probabilistic method.
Our approach introduces a new way of applying GNNs to enhance the classical backtracking algorithm used in AI.
arXiv Detail & Related papers (2022-11-26T00:01:01Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Learning to Prune Instances of Steiner Tree Problem in Graphs [0.47138177023764655]
We consider the Steiner tree problem on graphs where we are given a set of nodes.
The goal is to find a tree sub-graph that contains all nodes in the given set, potentially including additional nodes.
arXiv Detail & Related papers (2022-08-25T10:31:00Z)
- Towards Optimally Efficient Tree Search with Deep Learning [76.64632985696237]
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm, employing a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z)
- Learning to Accelerate Heuristic Searching for Large-Scale Maximum Weighted b-Matching Problems in Online Advertising [51.97494906131859]
Bipartite b-matching is fundamental in algorithm design and has been widely applied in economic markets, labor markets, etc.
Existing exact and approximate algorithms usually fail in such settings, either requiring intolerable running time or consuming too many computational resources.
We propose NeuSearcher, which leverages the knowledge learned from previously solved instances to solve new problem instances.
arXiv Detail & Related papers (2020-05-09T02:48:23Z)
- Physarum Powered Differentiable Linear Programming Layers and Applications [48.77235931652611]
We propose an efficient and differentiable solver for general linear programming problems.
We show the use of our solver in a video segmentation task and meta-learning for few-shot learning.
arXiv Detail & Related papers (2020-04-30T01:50:37Z)
- GLSearch: Maximum Common Subgraph Detection via Learning to Search [33.9052190473029]
We propose GLSearch, a Graph Neural Network (GNN) based learning to search model.
Our model is built upon the branch and bound algorithm, which selects one pair of nodes from the two input graphs to expand at a time.
GLSearch can potentially be extended to solve many other problems with constraints on graphs.
arXiv Detail & Related papers (2020-02-08T10:03:40Z)
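As a concrete aside on the MaxCut-Bench entry above: the "naive greedy algorithm" that several learned heuristics fail to beat can be as simple as a 1-flip local search. The sketch below is our own illustration under that assumption, not the benchmark's implementation.

```python
import networkx as nx


def greedy_maxcut(graph):
    """1-flip local search for unweighted MaxCut: start from an
    arbitrary bipartition and flip any vertex whose switch increases
    the cut, until no single flip helps (a local optimum)."""
    side = {v: i % 2 == 0 for i, v in enumerate(graph.nodes)}
    improved = True
    while improved:
        improved = False
        for v in graph.nodes:
            same = sum(side[u] == side[v] for u in graph.neighbors(v))
            # flipping v gains (same-side edges) - (cut edges at v),
            # i.e. 2 * same - degree(v); flip whenever that is positive
            if 2 * same > graph.degree(v):
                side[v] = not side[v]
                improved = True
    return sum(side[u] != side[v] for u, v in graph.edges)


print(greedy_maxcut(nx.erdos_renyi_graph(100, 0.1, seed=0)), "edges cut")
```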