Learning on Arbitrary Graph Topologies via Predictive Coding
- URL: http://arxiv.org/abs/2201.13180v1
- Date: Mon, 31 Jan 2022 12:43:22 GMT
- Title: Learning on Arbitrary Graph Topologies via Predictive Coding
- Authors: Tommaso Salvatori, Luca Pinchetti, Beren Millidge, Yuhang Song, Rafal
Bogacz, Thomas Lukasiewicz
- Abstract summary: We show how predictive coding can be used to perform inference and learning on arbitrary graph topologies.
We experimentally show how this formulation, called PC graphs, can be used to flexibly perform different tasks with the same network.
- Score: 38.761663028090204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training with backpropagation (BP) in standard deep learning consists of two
main steps: a forward pass that maps a data point to its prediction, and a
backward pass that propagates the error of this prediction back through the
network. This process is highly effective when the goal is to minimize a
specific objective function. However, it does not allow training on networks
with cyclic or backward connections. This is an obstacle to reaching brain-like
capabilities, as the highly complex heterarchical structure of the neural
connections in the neocortex is potentially fundamental to its effectiveness.
In this paper, we show how predictive coding (PC), a theory of information
processing in the cortex, can be used to perform inference and learning on
arbitrary graph topologies. We experimentally show how this formulation, called
PC graphs, can be used to flexibly perform different tasks with the same
network by simply stimulating specific neurons, and investigate how the
topology of the graph influences the final performance. We conclude by
comparing against simple baselines trained with BP.
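The inference and learning scheme described in the abstract can be illustrated with a minimal NumPy sketch of a PC graph: every neuron holds a value node, errors are the mismatch between each value and the prediction made from the other nodes, inference relaxes the unclamped values down the energy gradient, and weights are then updated with a purely local rule. All names, sizes, and learning rates below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def f(x):                            # activation (assumed tanh)
    return np.tanh(x)

def df(x):                           # its derivative
    return 1.0 - np.tanh(x) ** 2

rng = np.random.default_rng(0)
n = 8                                # neurons in an arbitrary (possibly cyclic) graph
W = rng.normal(0.0, 0.1, (n, n))     # W[i, j]: weight from node j to node i
np.fill_diagonal(W, 0.0)             # no self-connections

x = rng.normal(0.0, 1.0, n)          # value nodes
x[:2] = [1.0, -1.0]                  # "stimulate" two neurons by clamping them to data

energies = []
for _ in range(50):                  # inference: relax the free value nodes
    e = x - W @ f(x)                 # prediction errors
    energies.append(0.5 * float(e @ e))
    dx = -e + df(x) * (W.T @ e)      # negative gradient of the energy wrt x
    dx[:2] = 0.0                     # clamped neurons do not move
    x += 0.1 * dx

e = x - W @ f(x)
W += 0.01 * np.outer(e, f(x))        # local, Hebbian-like weight update
np.fill_diagonal(W, 0.0)
```

Note that nothing here requires the graph to be feedforward: the same relaxation runs whether `W` encodes a hierarchy, a cycle, or a fully connected network, which is the flexibility the paper exploits.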
Related papers
- Graph Neural Networks Go Forward-Forward [0.0]
We present the Graph Forward-Forward (GFF) algorithm, an extension of the Forward-Forward procedure to graphs.
Our method builds on the message-passing scheme, and provides a more biologically plausible learning scheme than backpropagation.
We run experiments on 11 standard graph property prediction tasks, showing how GFF provides an effective alternative to backpropagation.
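The core of the Forward-Forward procedure that GFF extends can be sketched as a single layer trained in isolation: push the "goodness" (sum of squared activations) above a threshold for positive samples and below it for negative ones, using only local gradients. The toy data, threshold, and layer sizes below are assumptions for illustration, not the GFF setup.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.1, (8, 4))     # one layer, trained without backprop through others
theta, lr = 2.0, 0.05                # goodness threshold and step size (assumed values)

def goodness(x, W):
    h = np.maximum(x @ W.T, 0.0)     # ReLU activations
    return (h ** 2).sum(axis=1)      # per-sample goodness

pos = rng.normal(+1.0, 0.5, (32, 4)) # positive (real) samples
neg = rng.normal(-1.0, 0.5, (32, 4)) # negative (corrupted) samples

for _ in range(100):
    for x, sign in ((pos, 1.0), (neg, -1.0)):
        h = np.maximum(x @ W.T, 0.0)
        g = (h ** 2).sum(axis=1)
        # gradient of -log(sigmoid(sign * (g - theta))) wrt W, purely local
        coef = -sign / (1.0 + np.exp(sign * (g - theta)))
        W -= lr * (2.0 * coef[:, None] * h).T @ x / len(x)
```

In the graph variant, the same layer-local objective is applied on top of a message-passing aggregation instead of a dense matrix product.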
arXiv Detail & Related papers (2023-02-10T14:45:36Z)
- Machine learning of percolation models using graph convolutional neural networks [1.0499611180329804]
Prediction of percolation thresholds with machine learning methods remains challenging.
We build a powerful graph convolutional neural network to study the percolation in both supervised and unsupervised ways.
arXiv Detail & Related papers (2022-07-07T15:17:40Z)
- Invertible Neural Networks for Graph Prediction [22.140275054568985]
In this work, we address conditional generation using deep invertible neural networks.
We adopt an end-to-end training approach since our objective is to address prediction and generation in the forward and backward processes at once.
arXiv Detail & Related papers (2022-06-02T17:28:33Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Predictive Coding: Towards a Future of Deep Learning beyond Backpropagation? [41.58529335439799]
The backpropagation of error algorithm used to train deep neural networks has been fundamental to the successes of deep learning.
Recent work has developed the idea into a general-purpose algorithm able to train neural networks using only local computations.
We show the substantially greater flexibility of predictive coding networks compared to equivalent deep neural networks.
arXiv Detail & Related papers (2022-02-18T22:57:03Z)
- Training Graph Neural Networks by Graphon Estimation [2.5997274006052544]
We propose to train a graph neural network via resampling from a graphon estimate obtained from the underlying network data.
We show that our approach is competitive with, and in many cases outperforms, other GNN training methods that reduce over-smoothing.
arXiv Detail & Related papers (2021-09-04T19:21:48Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
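The idea of learnable edge magnitudes on a complete graph can be sketched in a few lines: each potential edge carries a logit, a sigmoid turns it into a differentiable gate, and every node aggregates its gated predecessors. The logit array `alpha`, node count, and input features below are hypothetical names chosen for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

k = 4                                    # computational nodes in a complete DAG
alpha = np.zeros((k, k))                 # learnable edge logits (hypothetical)
feats = [np.array([1.0, 2.0])]           # node 0 holds the input features

for i in range(1, k):
    # aggregate all predecessors, weighted by a differentiable gate per edge
    agg = sum(sigmoid(alpha[i, j]) * feats[j] for j in range(i))
    feats.append(np.maximum(agg, 0.0))   # ReLU node transform

output = feats[-1]
```

Because the gates are smooth functions of `alpha`, the connectivity itself can be trained by gradient descent alongside the ordinary weights, which is what makes the search over topologies differentiable.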
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
However, stacking many such layers degrades performance; several recent studies attribute this deterioration to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
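The adaptive-receptive-field idea can be sketched as propagating features over several hops and letting a learned score decide, per node, how much each hop contributes. The toy adjacency, one-hot features, and the scoring vector `w` below are illustrative assumptions, not the DAGNN architecture itself.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy 4-node graph: adjacency with self-loops, row-normalised
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [0, 0, 1, 1]], dtype=float)
A /= A.sum(axis=1, keepdims=True)

X = np.eye(4)                        # node features (one-hot, for illustration)
K = 3                                # propagation depth (receptive field)

hops = [X]
for _ in range(K):
    hops.append(A @ hops[-1])        # k-hop propagated features

w = np.full(4, 0.25)                 # hypothetical learnable scoring vector
gates = [sigmoid(H @ w) for H in hops]               # per-node gate per hop
out = sum(g[:, None] * H for g, H in zip(gates, hops))
```

Separating propagation from the learned per-hop gating is what lets each node draw on a large receptive field without every node being forced to the same depth.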
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Belief Propagation Reloaded: Learning BP-Layers for Labeling Problems [83.98774574197613]
We take one of the simplest inference methods, a truncated max-product Belief propagation, and add what is necessary to make it a proper component of a deep learning model.
This BP-Layer can be used as the final or an intermediate block in convolutional neural networks (CNNs).
The model is applicable to a range of dense prediction problems, is well-trainable and provides parameter-efficient and robust solutions in stereo, optical flow and semantic segmentation.
arXiv Detail & Related papers (2020-03-13T13:11:35Z)
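The truncated max-product inference underlying such a BP-Layer can be sketched in log space as a single min-sum message sweep on a toy chain: unary costs score each label per site, a pairwise term penalises label changes, and a fixed (truncated) number of sweeps replaces full convergence so the layer stays cheap and differentiable. The chain length, label count, and cost values below are placeholders for illustration.

```python
import numpy as np

# One truncated forward sweep of min-sum belief propagation (the log-space
# counterpart of max-product) on a toy chain with T sites and L labels.
T, L, lam = 5, 3, 1.0
rng = np.random.default_rng(1)
unary = rng.random((T, L))                     # data costs per site and label
pairwise = lam * (np.arange(L)[:, None] != np.arange(L)[None, :]).astype(float)

msg = np.zeros((T, L))                         # msg[t]: message into site t
for t in range(1, T):
    cand = unary[t - 1] + msg[t - 1]           # belief leaving site t-1
    msg[t] = (cand[:, None] + pairwise).min(axis=0)

labels = np.argmin(unary + msg, axis=1)        # approximate MAP labeling
```

In the paper's setting, the costs come from CNN features and the sweep is unrolled a fixed number of times, so gradients flow through the messages during training.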
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed (including all information) and is not responsible for any consequences.