Graph Convolutional Neural Networks as Parametric CoKleisli morphisms
- URL: http://arxiv.org/abs/2212.00542v1
- Date: Thu, 1 Dec 2022 14:49:58 GMT
- Title: Graph Convolutional Neural Networks as Parametric CoKleisli morphisms
- Authors: Bruno Gavranović, Mattia Villani
- Abstract summary: We define the bicategory of Graph Convolutional Neural Networks $\mathbf{GCNN}_n$ for an arbitrary graph with $n$ nodes.
We show it can be factored through the already existing categorical constructions for deep learning called $\mathbf{Para}$ and $\mathbf{Lens}$ with the base category set to the CoKleisli category of the product comonad.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We define the bicategory of Graph Convolutional Neural Networks
$\mathbf{GCNN}_n$ for an arbitrary graph with $n$ nodes. We show it can be
factored through the already existing categorical constructions for deep
learning called $\mathbf{Para}$ and $\mathbf{Lens}$ with the base category set
to the CoKleisli category of the product comonad. We prove that there exists an
injective-on-objects, faithful 2-functor $\mathbf{GCNN}_n \to
\mathbf{Para}(\mathsf{CoKl}(\mathbb{R}^{n \times n} \times -))$. We show that
this construction allows us to treat the adjacency matrix of a GCNN as a global
parameter instead of a local, layer-wise one. This gives us a high-level
categorical characterisation of a particular kind of inductive bias GCNNs
possess. Lastly, we hypothesize about possible generalisations of GCNNs to
general message-passing graph neural networks, connections to equivariant
learning, and the (lack of) functoriality of activation functions.
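As a concrete illustration of the adjacency-matrix-as-global-parameter idea, the sketch below is a minimal programmer's reading, not the paper's formal 2-categorical construction: each GCNN layer is treated as a CoKleisli-style arrow $(A, X) \mapsto \sigma(AXW)$ for the product comonad $\mathbb{R}^{n \times n} \times -$, and composing arrows hands the same globally supplied $A$ to every layer while the weight matrices $W$ stay local. The function names, shapes, and activation choice are illustrative assumptions.

```python
import numpy as np

# A minimal sketch, assuming the CoKleisli reading above: an arrow for the
# product comonad (R^{n x n} x -) is a function (A, X) -> Y that may read the
# globally supplied adjacency matrix A. CoKleisli composition feeds the *same*
# A to both arrows, which is what makes A a global parameter, while the weight
# matrices remain local, layer-wise parameters.

def cokleisli_compose(g, f):
    """Compose arrows f : (A, X) -> Y and g : (A, Y) -> Z in CoKl(A x -)."""
    return lambda A, X: g(A, f(A, X))

def gcnn_layer(W, sigma=np.tanh):
    """One graph-convolution layer X |-> sigma(A @ X @ W); W local, A global."""
    return lambda A, X: sigma(A @ X @ W)

# Hypothetical usage on a 4-node graph with 3-dimensional node features.
rng = np.random.default_rng(0)
n, d_in, d_hid, d_out = 4, 3, 5, 2
A = rng.random((n, n))                        # adjacency matrix (global parameter)
X = rng.random((n, d_in))                     # node features
layer1 = gcnn_layer(rng.random((d_in, d_hid)))
layer2 = gcnn_layer(rng.random((d_hid, d_out)))
two_layer_gcnn = cokleisli_compose(layer2, layer1)   # both layers read the same A
print(two_layer_gcnn(A, X).shape)             # -> (4, 2)
```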
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z) - Learning Hierarchical Polynomials with Three-Layer Neural Networks [56.71223169861528]
We study the problem of learning hierarchical functions over the standard Gaussian distribution with three-layer neural networks.
For a large subclass of degree $k$ polynomials $p$, a three-layer neural network trained via layer-wise gradient descent on the square loss learns the target $h$ up to vanishing test error.
This work demonstrates the ability of three-layer neural networks to learn complex features and as a result, learn a broad class of hierarchical functions.
arXiv Detail & Related papers (2023-11-23T02:19:32Z) - Calibrate and Boost Logical Expressiveness of GNN Over Multi-Relational
and Temporal Graphs [8.095679736030146]
We investigate $\mathcal{FOC}_2$, a fragment of first-order logic with two variables and counting quantifiers.
We propose a simple graph transformation technique, akin to a preprocessing step, which can be executed in linear time.
Our results consistently demonstrate that R$^2$-GNN with the graph transformation outperforms the baseline methods on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-11-03T00:33:24Z) - GC-Flow: A Graph-Based Flow Network for Effective Clustering [10.354035049272095]
Graph convolutional networks (GCNs) are discriminative models that directly model the class posterior $p(y|\mathbf{x})$ for semi-supervised classification of graph data.
In this work, we design normalizing flows that replace GCN layers, leading to a generative model that models both the class-conditional likelihood $p(\mathbf{x}|y)$ and the class prior $p(y)$ (a toy Bayes-rule sketch of this discriminative/generative distinction follows the Related papers list).
The resulting neural network, GC-Flow, retains the graph convolution operations while being equipped with
arXiv Detail & Related papers (2023-05-26T22:11:38Z) - Exponentially Improving the Complexity of Simulating the
Weisfeiler-Lehman Test with Graph Neural Networks [22.36443060418036]
We show that the expressive power of Graph Neural Networks (GNNs) in distinguishing non-isomorphic graphs is exactly the same as that of the Weisfeiler-Lehman (WL) graph isomorphism test.
We present an improved simulation of the WL test on GNNs with exponentially lower complexity.
arXiv Detail & Related papers (2022-11-06T22:38:49Z) - Graph Neural Network Bandits [89.31889875864599]
We consider the bandit optimization problem with the reward function defined over graph-structured data.
Key challenges in this setting are scaling to large domains, and to graphs with many nodes.
We show that graph neural networks (GNNs) can be used to estimate the reward function.
arXiv Detail & Related papers (2022-07-13T18:12:36Z) - AnchorGAE: General Data Clustering via $O(n)$ Bipartite Graph
Convolution [79.44066256794187]
We show how to convert a non-graph dataset into a graph by introducing the generative graph model, which is used to build graph convolution networks (GCNs).
A bipartite graph constructed by anchors is updated dynamically to exploit the high-level information behind data.
We theoretically prove that the simple update will lead to degeneration and a specific strategy is accordingly designed.
arXiv Detail & Related papers (2021-11-12T07:08:13Z) - Graph Neural Networks with Local Graph Parameters [1.8600631687568656]
Local graph parameters can be added to any Graph Neural Networks (GNNs) architecture.
Our results connect GNNs with deep results in finite model theory and finite variable logics.
arXiv Detail & Related papers (2021-06-12T07:43:51Z) - Homogeneous vector bundles and $G$-equivariant convolutional neural
networks [0.0]
$G$-equivariant convolutional neural networks (GCNNs) are a geometric deep learning model for data defined on a homogeneous $G$-space $\mathcal{M}$.
In this paper, we analyze GCNNs on homogeneous spaces $\mathcal{M} = G/K$ in the case of unimodular Lie groups $G$ and compact subgroups $K \leq G$.
arXiv Detail & Related papers (2021-05-12T02:06:04Z) - Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK [58.5766737343951]
We consider the dynamics of gradient descent for learning a two-layer neural network.
We show that an over-parametrized two-layer neural network trained with gradient descent can provably learn the ground-truth network, beyond what Neural Tangent Kernel (NTK) analysis guarantees.
arXiv Detail & Related papers (2020-07-09T07:09:28Z) - Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
arXiv Detail & Related papers (2020-06-08T02:47:38Z)