Recurrent Graph Neural Networks and Arithmetic Circuits
- URL: http://arxiv.org/abs/2603.05140v1
- Date: Thu, 05 Mar 2026 13:10:27 GMT
- Title: Recurrent Graph Neural Networks and Arithmetic Circuits
- Authors: Timon Barlag, Vivian Holzapfel, Laura Strieker, Jonni Virtema, Heribert Vollmer
- Abstract summary: We characterise the computational power of recurrent graph neural networks (GNNs) in terms of arithmetic circuits over the real numbers. We introduce the model of recurrent arithmetic circuits, which can be seen as arithmetic analogues of sequential or logical circuits.
- Score: 3.996231905922229
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We characterise the computational power of recurrent graph neural networks (GNNs) in terms of arithmetic circuits over the real numbers. Our networks are not restricted to aggregate-combine GNNs or other particular types. Generalising similar notions from the literature, we introduce the model of recurrent arithmetic circuits, which can be seen as arithmetic analogues of sequential or logical circuits. These circuits utilise so-called memory gates, which store data between iterations of the recurrent circuit. While (recurrent) GNNs work on labelled graphs, we construct arithmetic circuits that receive encoded labelled graphs as real-valued tuples and then compute the same function. For the other direction, we construct recurrent GNNs that are able to simulate the computations of recurrent circuits. These GNNs are given the circuit input as initial feature vectors and, after the GNN computation, have the circuit output among the feature vectors of their nodes. In this way we establish an exact correspondence between the expressivity of recurrent GNNs and recurrent arithmetic circuits operating over the real numbers.
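To make the recurrent circuit model concrete, here is a minimal Python sketch (not taken from the paper; the class and gate names are illustrative) of an arithmetic circuit with addition and multiplication gates whose memory gates carry real values from one iteration to the next:

```python
# Minimal sketch of a recurrent arithmetic circuit over the reals.
# The class and gate names are illustrative; the paper's formal model
# is more general (arbitrary + and * gates, constants, memory gates).

class RecurrentArithmeticCircuit:
    """An arithmetic circuit iterated over time; memory gates store
    real values between iterations."""

    def __init__(self):
        # Memory gates with their initial values.
        self.memory = {"sum": 0.0, "prod": 1.0}

    def step(self, x):
        """One iteration of the circuit on a fresh real input x."""
        s = self.memory["sum"] + x           # addition gate reading memory
        p = self.memory["prod"] * x          # multiplication gate reading memory
        self.memory = {"sum": s, "prod": p}  # write results back to memory
        return s, p

    def run(self, inputs):
        """Iterate the circuit over a real-valued input sequence."""
        out = None
        for x in inputs:
            out = self.step(x)
        return out

circuit = RecurrentArithmeticCircuit()
print(circuit.run([2.0, 3.0, 4.0]))  # (9.0, 24.0): running sum and product
```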
Related papers
- MINAR: Mechanistic Interpretability for Neural Algorithmic Reasoning [17.920835259832238]
We introduce Mechanistic Interpretability for Neural Algorithmic Reasoning (MINAR). MINAR adapts attribution patching methods from mechanistic interpretability to the GNN setting. We show through two case studies that MINAR recovers faithful neuron-level circuits from GNNs trained on algorithmic tasks.
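For intuition, the following is a self-contained Python sketch of activation patching, the exact intervention that attribution patching linearly approximates. The toy two-layer network stands in for a trained GNN readout; all weights and inputs are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network standing in for a trained GNN readout; the
# random weights and the input pair are invented for illustration.
W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 1))

def forward(x, patch=None):
    """Forward pass; patch=(idx, value) overwrites one hidden
    neuron's activation with a value taken from another run."""
    h = np.maximum(x @ W1, 0.0)           # ReLU hidden activations
    if patch is not None:
        idx, value = patch
        h = h.copy()
        h[idx] = value
    return (h @ W2).item()

clean_x, corrupt_x = rng.normal(size=4), rng.normal(size=4)
clean_h = np.maximum(clean_x @ W1, 0.0)   # cache the clean activations
base = forward(corrupt_x)

# Patch each neuron's clean activation into the corrupted run; a large
# change in output marks the neuron as part of the recovered circuit.
effects = [forward(corrupt_x, patch=(i, clean_h[i])) - base for i in range(8)]
print(np.argsort(np.abs(effects))[::-1])  # neurons ranked by influence
```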
arXiv Detail & Related papers (2026-02-24T23:38:06Z)
- Quantifying The Limits of AI Reasoning: Systematic Neural Network Representations of Algorithms [10.292476979020522]
We present a systematic meta-algorithm that converts essentially any circuit into a feedforward neural network (NN). We show that, on any digital computer, our construction emulates the circuit exactly (no approximation, no rounding, modular overflow included), demonstrating that no reasoning task lies beyond the reach of neural networks.
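The flavour of such a gate-for-gate emulation can be seen in a small sketch: on {0,1} inputs, standard ReLU encodings realise Boolean gates exactly, so any circuit unrolls into a network computing the same function with no approximation. The encodings below are textbook choices, not necessarily the paper's construction:

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Exact ReLU realisations of Boolean gates on {0,1} inputs; these are
# standard encodings, not necessarily the paper's meta-algorithm.
def AND(a, b):
    return relu(a + b - 1.0)                # 1 iff a = b = 1

def OR(a, b):
    return relu(a + b) - relu(a + b - 1.0)  # saturates at 1

def NOT(a):
    return relu(1.0 - a)

# Emulate the circuit XOR(a, b) = (a OR b) AND NOT(a AND b) gate for gate.
def xor_network(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0.0, 1.0):
    for b in (0.0, 1.0):
        print(int(a), int(b), "->", int(xor_network(a, b)))  # exact XOR table
```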
arXiv Detail & Related papers (2025-08-25T21:55:37Z)
- Graph Neural Networks and Arithmetic Circuits [0.0]
We characterize the computational power of neural networks that follow the graph neural network architecture.
We establish an exact correspondence between the expressivity of GNNs using diverse activation functions and arithmetic circuits over the real numbers.
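As a concrete instance of real-number GNN arithmetic, here is a minimal aggregate-combine layer, one canonical GNN type covered by such correspondences; the graph, weights, and sizes below are arbitrary illustration:

```python
import numpy as np

def gnn_layer(feats, adj, W_self, W_nbr):
    """One aggregate-combine GNN layer over real-valued features:
    sum the neighbours' features (aggregate), then apply an affine
    map and a nonlinearity (combine)."""
    agg = adj @ feats                     # neighbourhood sum per node
    return np.tanh(feats @ W_self + agg @ W_nbr)

# Tiny labelled graph: a path on 3 nodes with 2-dimensional labels.
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
feats = np.eye(3, 2)                      # initial feature vectors
rng = np.random.default_rng(1)
W_self, W_nbr = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
print(gnn_layer(feats, adj, W_self, W_nbr))
```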
arXiv Detail & Related papers (2024-02-27T11:04:06Z)
- CktGNN: Circuit Graph Neural Network for Electronic Design Automation [67.29634073660239]
This paper presents a Circuit Graph Neural Network (CktGNN) that simultaneously automates circuit topology generation and device sizing.
We introduce Open Circuit Benchmark (OCB), an open-sourced dataset that contains 10K distinct operational amplifiers.
Our work paves the way toward learning-based, open-sourced design automation for analog circuits.
arXiv Detail & Related papers (2023-08-31T02:20:25Z)
- The Descriptive Complexity of Graph Neural Networks [2.6728900378310514]
We show that graph queries computable by a bounded-size family of graph neural networks (GNNs) are definable in the guarded fragment of first-order logic with counting (GFO+C) with built-in relations. We also show that queries computable by a single GNN with piecewise linear activations and rational weights are definable in GFO+C without built-in relations.
arXiv Detail & Related papers (2023-03-08T14:32:59Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields iterations similar to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
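A rough sketch of the idea follows, with fixed stage weights standing in for the trained parameters of the superstructure; the linear system and step sizes are invented for illustration:

```python
import numpy as np

def r2n2_like_step(x, residual, alphas, betas):
    """One outer iteration in the spirit of the R2N2 superstructure:
    evaluate the residual at a few inner stages and combine the stage
    values with weights (fixed here; trained in the paper)."""
    stages, y = [], x
    for a in alphas:
        r = residual(y)                   # inner residual evaluation
        stages.append(r)
        y = x + a * r                     # inner stage update
    return x + sum(b * s for b, s in zip(betas, stages))

# Example: solve A x = b by iterating on a damped residual; with one
# stage and unit weights this collapses to a Richardson iteration.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 0.0])
residual = lambda v: 0.2 * (b - A @ v)
x = np.zeros(2)
for _ in range(200):
    x = r2n2_like_step(x, residual, alphas=[0.5], betas=[1.0])
print(x, "vs", np.linalg.solve(A, b))     # the iterates approach the solution
```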
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
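To illustrate the circuit-as-graph encoding, here is a toy sketch (component types, netlist, and weights are all made up) in which a few rounds of message passing produce node embeddings that a pretrained readout head could map to node voltages:

```python
import numpy as np

# Toy circuit-as-graph encoding: nodes are devices/nets with one-hot
# type features, edges are electrical connections. The component types
# and the netlist are made up for this sketch.
TYPES = ["resistor", "capacitor", "nmos", "net"]

def one_hot(t):
    v = np.zeros(len(TYPES))
    v[TYPES.index(t)] = 1.0
    return v

nodes = ["resistor", "net", "nmos", "net", "capacitor"]
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]  # undirected connections

feats = np.stack([one_hot(t) for t in nodes])
adj = np.zeros((len(nodes), len(nodes)))
for i, j in edges:
    adj[i, j] = adj[j, i] = 1.0

# A few rounds of mean-aggregation message passing yield node
# embeddings; a pretrained readout head would map these to voltages.
rng = np.random.default_rng(2)
W = rng.normal(size=(len(TYPES), len(TYPES)))
h = feats
for _ in range(3):
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    h = np.tanh((adj @ h) / deg @ W + h)
print(h.round(2))
```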
arXiv Detail & Related papers (2022-03-29T21:18:47Z)
- VersaGNN: a Versatile accelerator for Graph neural networks [81.1667080640009]
We propose VersaGNN, an ultra-efficient, systolic-array-based versatile hardware accelerator.
VersaGNN achieves on average a 3712× speedup with 1301.25× energy reduction over CPU, and a 35.4× speedup with 17.66× energy reduction over GPU.
arXiv Detail & Related papers (2021-05-04T04:10:48Z)
- Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Network (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
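The core soft-instruction-pointer mechanism can be sketched in a few lines: a probability distribution over program statements is pushed along control-flow edges at each step. Branch probabilities are hard-coded constants here; in IPA-GNN they come from learned functions of per-statement hidden states:

```python
import numpy as np

# Sketch of the soft instruction pointer behind IPA-GNN. The tiny
# control-flow graph and its branch probabilities are invented:
# statement 1 branches to 2 (p=0.7) or 3 (p=0.3); statement 3 is the
# exit and loops on itself.
n_stmts = 4
succ = {0: [(1, 1.0)], 1: [(2, 0.7), (3, 0.3)], 2: [(3, 1.0)], 3: [(3, 1.0)]}

ip = np.zeros(n_stmts)
ip[0] = 1.0                               # pointer starts at statement 0
for step in range(3):
    new_ip = np.zeros(n_stmts)
    for s, outs in succ.items():
        for t, p in outs:
            new_ip[t] += ip[s] * p        # push probability mass along edges
    ip = new_ip
    print(step, ip.round(2))              # mass accumulates at the exit
```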
arXiv Detail & Related papers (2020-10-23T19:12:30Z)
- Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
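A minimal sketch of the two-phase equilibrium propagation recipe on a toy Hopfield-style energy (the network size, energy function, and learning rates are arbitrary sketch choices, not the paper's analog networks):

```python
import numpy as np

# Toy equilibrium propagation with one input x, one hidden unit h,
# and one output y, on the energy
#   E(h, y) = h**2/2 + y**2/2 - W1*x*h - W2*h*y
W1, W2 = 0.5, 0.5
x, target = 1.0, 0.2

def relax(beta, steps=200, lr=0.05):
    """Gradient descent on E, plus a beta-weighted nudge of the
    output toward the target (beta=0 gives the free phase)."""
    h = y = 0.0
    for _ in range(steps):
        dh = h - W1 * x - W2 * y               # dE/dh
        dy = y - W2 * h + beta * (y - target)  # dE/dy + nudging force
        h, y = h - lr * dh, y - lr * dy
    return h, y

beta, eta = 0.1, 0.5
for _ in range(50):
    h0, y0 = relax(0.0)                   # free-phase equilibrium
    h1, y1 = relax(beta)                  # weakly clamped equilibrium
    # Contrastive updates: difference of dE/dW at the two equilibria.
    W1 += eta / beta * (h1 * x - h0 * x)
    W2 += eta / beta * (h1 * y1 - h0 * y0)

print(relax(0.0)[1], "~", target)         # free output moves toward target
```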
arXiv Detail & Related papers (2020-06-02T23:38:35Z)