Cracking nuts with a sledgehammer: when modern graph neural networks do
worse than classical greedy algorithms
- URL: http://arxiv.org/abs/2206.13211v1
- Date: Mon, 27 Jun 2022 11:54:01 GMT
- Title: Cracking nuts with a sledgehammer: when modern graph neural networks do
worse than classical greedy algorithms
- Authors: Maria Chiara Angelini, Federico Ricci-Tersenghi
- Abstract summary: We show that a simple greedy algorithm, running in almost linear time, can find solutions for the MIS problem of much better quality than the GNN.
We see no good reason to solve the MIS with these GNNs, just as there is no good reason to use a sledgehammer to crack nuts.
- Score: 2.436681150766912
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The recent work "Combinatorial Optimization with Physics-Inspired Graph Neural Networks" [Nat Mach Intell 4 (2022) 367] introduces a physics-inspired
unsupervised Graph Neural Network (GNN) to solve combinatorial optimization
problems on sparse graphs. To test the performance of these GNNs, the authors
of the work show numerical results for two fundamental problems: maximum cut
and maximum independent set (MIS). They conclude that "the graph neural network
optimizer performs on par or outperforms existing solvers, with the ability to
scale beyond the state of the art to problems with millions of variables."
In this comment, we show that a simple greedy algorithm, running in almost
linear time, can find solutions for the MIS problem of much better quality than
the GNN. The greedy algorithm is faster than the GNN by a factor of $10^4$ for problems with a million variables. We see no good reason to solve the MIS with these GNNs, just as there is no good reason to use a sledgehammer to crack nuts.
In general, many claims of superiority of neural networks in solving
combinatorial problems risk not being solid enough, since we lack standard benchmarks based on really hard problems. We propose one such hard benchmark, and we hope to see future neural network optimizers tested on these
problems before any claim of superiority is made.
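For context, the greedy baseline the comment refers to follows a standard recipe: repeatedly pick a vertex of minimum degree in the residual graph, add it to the independent set, and delete it together with its neighbours. The sketch below is a minimal Python illustration of that recipe (using networkx; the function name and details are assumptions for illustration, not the authors' actual implementation):

```python
import networkx as nx

def greedy_mis(graph: nx.Graph) -> set:
    """Greedy heuristic for the maximum independent set (MIS):
    repeatedly take a minimum-degree vertex of the residual graph
    and remove it together with its neighbours."""
    g = graph.copy()                 # work on a copy; the input graph is untouched
    independent_set = set()
    while g.number_of_nodes() > 0:
        v = min(g.nodes, key=g.degree)                     # vertex of minimum residual degree
        independent_set.add(v)
        g.remove_nodes_from(list(g.neighbors(v)) + [v])    # delete v and its neighbourhood
    return independent_set

# Example: a random 3-regular graph on 10^4 nodes
G = nx.random_regular_graph(d=3, n=10_000, seed=0)
print(len(greedy_mis(G)), "vertices in the greedy independent set")
```

This direct version rescans all vertices at every step; keeping vertices in buckets indexed by residual degree makes each selection cheap, which is what brings the total running time close to linear in the size of the graph, as the abstract states.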
Related papers
- LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability to neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
arXiv Detail & Related papers (2024-07-18T22:05:21Z)
- Combinatorial Optimization with Automated Graph Neural Networks [28.19349828026972]
We present a new class of AUTOmated GNNs for solving NP-hard CO problems, namely AutoGNP.
The idea of AutoGNP is to use graph neural architecture search algorithms to automatically find the best GNNs for a given NP-hard optimization problem.
arXiv Detail & Related papers (2024-06-05T02:43:41Z)
- Towards a General GNN Framework for Combinatorial Optimization [14.257210124854863]
We introduce a novel GNN architecture which leverages a complex filter bank and localized attention mechanisms designed to solve CO problems on graphs.
We show how our method differentiates itself from prior GNN-based CO solvers and how it can be effectively applied to the maximum clique, minimum dominating set, and maximum cut problems.
arXiv Detail & Related papers (2024-05-31T00:02:07Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- Inability of a graph neural network heuristic to outperform greedy algorithms in solving combinatorial optimization problems like Max-Cut [0.0]
In Nature Machine Intelligence 4, 367 (2022), Schuetz et al. provide a scheme to employ graph neural networks (GNNs) to solve a variety of classical, NP-hard optimization problems.
The paper describes how the network is trained on sample instances and how the resulting GNN is evaluated, applying widely used techniques to determine its ability to succeed.
However, closer inspection shows that the reported results for this GNN are only minutely better than those for gradient descent and get outperformed by a greedy algorithm.
arXiv Detail & Related papers (2022-10-02T20:50:33Z)
- Graph Neural Network Bandits [89.31889875864599]
We consider the bandit optimization problem with the reward function defined over graph-structured data.
Key challenges in this setting are scaling to large domains, and to graphs with many nodes.
We show that graph neural networks (GNNs) can be used to estimate the reward function.
arXiv Detail & Related papers (2022-07-13T18:12:36Z)
- Can Graph Neural Networks Learn to Solve MaxSAT Problem? [22.528432249712637]
We study the capability of GNNs in learning to solve the MaxSAT problem, from both theoretical and practical perspectives.
We build two kinds of GNN models to learn the solutions of MaxSAT instances from benchmarks, and show through experimental evaluation that GNNs have attractive potential for solving the MaxSAT problem.
We also present, for the first time, a theoretical explanation of why GNNs can learn to solve the MaxSAT problem to some extent, based on algorithmic alignment theory.
arXiv Detail & Related papers (2021-11-15T07:33:33Z)
- Very Deep Graph Neural Networks Via Noise Regularisation [57.450532911995516]
Graph Neural Networks (GNNs) perform learned message passing over an input graph.
We train a deep GNN with up to 100 message passing steps and achieve several state-of-the-art results.
arXiv Detail & Related papers (2021-06-15T08:50:10Z)
- The Surprising Power of Graph Neural Networks with Random Node Initialization [54.4101931234922]
Graph neural networks (GNNs) are effective models for representation learning on relational data.
Standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman graph isomorphism test.
In this work, we analyze the expressive power of GNNs with random node initialization (RNI).
We prove that these models are universal, a first such result for GNNs not relying on computationally demanding higher-order properties.
arXiv Detail & Related papers (2020-10-02T19:53:05Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
- Random Features Strengthen Graph Neural Networks [40.60905158071766]
Graph neural networks (GNNs) are powerful machine learning models for various graph learning tasks.
In this paper, we demonstrate that GNNs become powerful just by adding a random feature to each node.
We show that the addition of random features enables GNNs to solve various problems that normal GNNs, including graph convolutional networks (GCNs) and graph isomorphism networks (GINs), cannot solve.
arXiv Detail & Related papers (2020-02-08T12:47:29Z)
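The "random feature" idea in the last entry above is simple enough to sketch: before message passing, every node's feature vector is extended with an independently drawn random value, so that otherwise identical nodes become distinguishable. A minimal illustration with NumPy follows (the shapes and the single extra feature are assumptions for illustration, not the paper's exact setup):

```python
import numpy as np

def add_random_node_feature(node_features: np.ndarray, seed: int = 0) -> np.ndarray:
    """Append one uniformly random scalar feature to every node.

    node_features: array of shape (num_nodes, num_features).
    Returns an array of shape (num_nodes, num_features + 1).
    """
    rng = np.random.default_rng(seed)
    random_column = rng.uniform(size=(node_features.shape[0], 1))
    return np.concatenate([node_features, random_column], axis=1)

# Example: 5 nodes that all start with the same 3 features
x = np.ones((5, 3))
print(add_random_node_feature(x).shape)  # (5, 4): nodes now carry distinct values
```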
This list is automatically generated from the titles and abstracts of the papers on this site.