Graph Neural Networks are Dynamic Programmers
- URL: http://arxiv.org/abs/2203.15544v1
- Date: Tue, 29 Mar 2022 13:27:28 GMT
- Title: Graph Neural Networks are Dynamic Programmers
- Authors: Andrew Dudzik, Petar Veličković
- Abstract summary: Graph neural networks (GNNs) are claimed to align with dynamic programming (DP).
Here we show, using methods from category theory and abstract algebra, that there exists an intricate connection between GNNs and DP.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in neural algorithmic reasoning with graph neural networks
(GNNs) are propped up by the notion of algorithmic alignment. Broadly, a neural
network will be better at learning to execute a reasoning task (in terms of
sample complexity) if its individual components align well with the target
algorithm. Specifically, GNNs are claimed to align with dynamic programming
(DP), a general problem-solving strategy which expresses many polynomial-time
algorithms. However, has this alignment truly been demonstrated and
theoretically quantified? Here we show, using methods from category theory and
abstract algebra, that there exists an intricate connection between GNNs and
DP, going well beyond the initial observations over individual algorithms such
as Bellman-Ford. Exposing this connection, we easily verify several prior
findings in the literature, and hope it will serve as a foundation for building
stronger algorithmically aligned GNNs.
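To make the claimed alignment concrete, consider the abstract's example of Bellman-Ford. One relaxation round has exactly the shape of a message-passing layer: each edge computes a message with addition, and each node aggregates incoming messages with min, i.e. message passing over the (min, +) structure. The following is a minimal Python sketch of this correspondence; it is our illustration, not code from the paper, and the function names and example graph are invented for exposition.

```python
# A minimal sketch (not from the paper) of the alignment the abstract
# describes: one Bellman-Ford relaxation round is a message-passing step
# where messages are computed with + and aggregated with min.
import math

def bellman_ford_step(dist, edges):
    """One round of edge relaxation / message passing.

    dist:  dict mapping node -> current distance estimate
    edges: list of (u, v, w) directed edges with weight w
    """
    new_dist = dict(dist)
    for u, v, w in edges:
        # "message" from u to v along the edge: dist[u] + w
        # "aggregation" at v: min over incoming messages and the old value
        new_dist[v] = min(new_dist[v], dist[u] + w)
    return new_dist

def bellman_ford(source, nodes, edges):
    dist = {v: math.inf for v in nodes}
    dist[source] = 0.0
    # |V| - 1 rounds suffice on graphs without negative cycles
    for _ in range(len(nodes) - 1):
        dist = bellman_ford_step(dist, edges)
    return dist

if __name__ == "__main__":
    nodes = ["s", "a", "b", "t"]
    edges = [("s", "a", 1.0), ("s", "b", 4.0),
             ("a", "b", 2.0), ("a", "t", 6.0), ("b", "t", 1.0)]
    print(bellman_ford("s", nodes, edges))
    # -> {'s': 0.0, 'a': 1.0, 'b': 3.0, 't': 4.0}
```

Note how `bellman_ford_step` splits into a per-edge message and a per-node aggregation: replacing + and min with learned functions recovers a standard message-passing GNN layer, which is the intuition the paper formalises.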
Related papers
- Deep Equilibrium Algorithmic Reasoning [18.651333116786084]
We study the neural execution of algorithms from a different perspective.
Since the algorithm's solution is often an equilibrium, it is possible to find the solution directly by solving an equilibrium equation.
Our approach requires no information on the ground-truth number of steps of the algorithm, at either training or test time; a rough sketch of the equilibrium idea follows below.
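As a rough illustration (a naive sketch under our own simplifications; the paper uses deep equilibrium models with proper solvers, not plain iteration): since one Bellman-Ford round leaves the true shortest-path distances unchanged, the solution is a fixed point of the update, and can be found by iterating until nothing changes, with no step count known in advance.

```python
# A naive sketch (ours, not the paper's method) of solving for an
# equilibrium directly: iterate an update operator until its output
# stops changing, instead of unrolling a fixed number of steps.
import math

def solve_equilibrium(step, state, max_iters=10_000):
    """Return a fixed point state* with state* == step(state*)."""
    for _ in range(max_iters):
        new_state = step(state)
        if new_state == state:  # the update changes nothing: equilibrium
            return new_state
        state = new_state
    raise RuntimeError("no equilibrium reached within max_iters")

if __name__ == "__main__":
    # The Bellman-Ford relaxation from the sketch above is a natural
    # `step`: shortest-path distances are exactly its fixed point.
    edges = [("s", "a", 1.0), ("a", "b", 2.0), ("b", "t", 1.0)]

    def relax(dist):
        new = dict(dist)
        for u, v, w in edges:
            new[v] = min(new[v], dist[u] + w)
        return new

    dist0 = {"s": 0.0, "a": math.inf, "b": math.inf, "t": math.inf}
    print(solve_equilibrium(relax, dist0))
    # -> {'s': 0.0, 'a': 1.0, 'b': 3.0, 't': 4.0}
```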
arXiv Detail & Related papers (2024-10-19T10:40:55Z)
- The Deep Equilibrium Algorithmic Reasoner [20.375241527453447]
We show that graph neural networks (GNNs) can learn to execute classical algorithms.
We conjecture and empirically validate that one can train a network to solve algorithmic problems by directly finding the equilibrium.
arXiv Detail & Related papers (2024-02-09T14:46:50Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks [59.26515696183751]
We show that algorithm discovery in neural networks is sometimes more complex than expected.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- Towards Better Out-of-Distribution Generalization of Neural Algorithmic Reasoning Tasks [51.8723187709964]
We study the OOD generalization of neural algorithmic reasoning tasks.
The goal is to learn an algorithm from input-output pairs using deep neural networks.
arXiv Detail & Related papers (2022-11-01T18:33:20Z)
- The Logic of Graph Neural Networks [0.9355115132408681]
Graph neural networks (GNNs) are deep learning architectures for machine learning problems on graphs.
It has been shown that the expressiveness of GNNs can be characterised precisely by the Weisfeiler-Leman algorithms and by variable counting logics.
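For intuition, 1-dimensional Weisfeiler-Leman colour refinement itself looks like message passing: in each round, a node's colour is refined by the multiset of its neighbours' colours. A minimal sketch of this standard procedure (our illustration, with invented names, not code from the paper):

```python
# A minimal sketch (ours) of 1-WL colour refinement, the procedure that
# upper-bounds the expressive power of standard message-passing GNNs.

def wl_refine(adj, rounds):
    """adj: dict node -> list of neighbours. Returns the final colouring."""
    colour = {v: 0 for v in adj}  # start with a uniform colouring
    for _ in range(rounds):
        # a node's signature: its own colour plus the multiset of its
        # neighbours' colours -- the analogue of one GNN layer
        sig = {v: (colour[v], tuple(sorted(colour[u] for u in adj[v])))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        colour = {v: palette[sig[v]] for v in adj}
    return colour

if __name__ == "__main__":
    path = {0: [1], 1: [0, 2], 2: [1]}  # path graph a - b - c
    print(wl_refine(path, rounds=2))    # endpoints share a colour: {0: 0, 1: 1, 2: 0}
```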
arXiv Detail & Related papers (2021-04-29T19:23:26Z)
- Learning Graph Neural Networks with Approximate Gradient Descent [24.49427608361397]
Two types of graph neural networks (GNNs) are investigated, depending on whether labels are attached to nodes or graphs.
A comprehensive framework for designing and analyzing convergence of GNN training algorithms is developed.
The proposed algorithm guarantees a linear convergence rate to the underlying true parameters of GNNs.
arXiv Detail & Related papers (2020-12-07T02:54:48Z)
- Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Network (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
arXiv Detail & Related papers (2020-10-23T19:12:30Z)
- Fast Learning of Graph Neural Networks with Guaranteed Generalizability: One-hidden-layer Case [93.37576644429578]
Graph neural networks (GNNs) have made great progress recently on learning from graph-structured data in practice.
We provide a theoretically-grounded generalizability analysis of GNNs with one hidden layer for both regression and binary classification problems.
arXiv Detail & Related papers (2020-06-25T00:45:52Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
Current graph neural networks (GNNs) are known to be difficult to make deep due to the problem of over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive optimization and generalization guarantees for transductive learning algorithms that include multi-scale GNNs; a toy illustration of over-smoothing follows below.
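For intuition about over-smoothing (a toy sketch of our own, not from the paper): stacking many neighbour-averaging layers drives all node features toward a common value, washing out the information a deeper network was supposed to exploit.

```python
# A toy demonstration (ours) of over-smoothing: repeated neighbour
# averaging makes node features converge to a near-constant vector.
import numpy as np

A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])        # path graph a - b - c
P = A + np.eye(3)                   # add self-loops
P /= P.sum(axis=1, keepdims=True)   # mean over each node and its neighbours
x = np.array([1.0, 0.0, 0.0])       # initial scalar node features
for _ in range(50):
    x = P @ x                       # one "layer" of neighbour averaging
print(x)                            # entries nearly equal: over-smoothed
```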
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
- Neural Bipartite Matching [19.600193617583955]
This report describes how neural execution is applied to a complex algorithm.
This is achieved via neural execution based only on features generated by a single GNN.
The evaluation shows strongly generalising results with the network achieving optimal matching almost 100% of the time.
arXiv Detail & Related papers (2020-05-22T17:50:38Z)
- Evaluating Logical Generalization in Graph Neural Networks [59.70452462833374]
We study the task of logical generalization using graph neural networks (GNNs).
Our benchmark suite, GraphLog, requires that learning algorithms perform rule induction in different synthetic logics.
We find that the ability of models to generalize and adapt is strongly determined by the diversity of the logical rules they encounter during training.
arXiv Detail & Related papers (2020-03-14T05:45:55Z)