Local Permutation Equivariance For Graph Neural Networks
- URL: http://arxiv.org/abs/2111.11840v1
- Date: Tue, 23 Nov 2021 13:10:34 GMT
- Title: Local Permutation Equivariance For Graph Neural Networks
- Authors: Joshua Mitton, Roderick Murray-Smith
- Abstract summary: We develop a new method, named locally permutation-equivariant graph neural networks.
It provides a framework for building graph neural networks that operate on local node neighbourhoods.
We experimentally validate the method on a range of graph benchmark classification tasks.
- Score: 2.208242292882514
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we develop a new method, named locally permutation-equivariant
graph neural networks, which provides a framework for building graph neural
networks that operate on local node neighbourhoods, through sub-graphs, while
using permutation equivariant update functions. Message passing neural networks
have been shown to be limited in their expressive power, and recent approaches
to overcome this either lack scalability or require structural information to
be encoded into the feature space. The general framework presented here
overcomes the scalability issues associated with global permutation
equivariance by operating on sub-graphs through restricted representations. In
addition, we prove that there is no loss of expressivity by using restricted
representations. Furthermore, the proposed framework only requires a choice of
$k$-hops for creating sub-graphs and a choice of representation space to be
used for each layer, which makes the method easily applicable across a range of
graph based domains. We experimentally validate the method on a range of graph
benchmark classification tasks, demonstrating either state-of-the-art results
or very competitive results on all benchmarks. Further, we demonstrate that the
use of local update functions offers a significant improvement in GPU memory
over global methods.
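The abstract notes that the only user-facing choices are the number of $k$-hops used to create each sub-graph and a representation space per layer. As a minimal illustration of the first ingredient, the sketch below extracts the $k$-hop neighbourhood sub-graph around a node with a breadth-first search; `k_hop_subgraph` and the adjacency-list representation are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

def k_hop_subgraph(adj, root, k):
    """Return the nodes within k hops of `root` and the induced edge set.
    `adj` maps each node to an iterable of its neighbours (undirected).
    Illustrative sketch only -- not the paper's implementation."""
    depth = {root: 0}          # node -> hop distance from root
    queue = deque([root])
    while queue:
        u = queue.popleft()
        if depth[u] == k:      # stop expanding at the k-hop frontier
            continue
        for v in adj[u]:
            if v not in depth:
                depth[v] = depth[u] + 1
                queue.append(v)
    nodes = set(depth)
    # Keep only edges whose endpoints both lie inside the sub-graph.
    edges = {(u, v) for u in nodes for v in adj[u] if v in nodes}
    return nodes, edges

# Toy path graph 0-1-2-3-4: the 1-hop sub-graph around node 2.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
nodes, edges = k_hop_subgraph(adj, 2, 1)
print(sorted(nodes))  # [1, 2, 3]
```

In the paper's framework, a permutation-equivariant update function would then be applied to each such sub-graph independently, which is what keeps memory usage local rather than global.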
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based
Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
arXiv Detail & Related papers (2024-01-17T13:04:23Z) - Graph Ordering Attention Networks [22.468776559433614]
Graph Neural Networks (GNNs) have been successfully used in many problems involving graph-structured data.
We introduce the Graph Ordering Attention (GOAT) layer, a novel GNN component that captures interactions between nodes in a neighborhood.
The GOAT layer demonstrates improved performance in modeling graph metrics that capture complex information.
arXiv Detail & Related papers (2022-04-11T18:13:19Z) - VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using
Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z) - Towards Efficient Scene Understanding via Squeeze Reasoning [71.1139549949694]
We propose a novel framework called Squeeze Reasoning.
Instead of propagating information on the spatial map, we first learn to squeeze the input feature into a channel-wise global vector.
We show that our approach can be modularized as an end-to-end trained block and can be easily plugged into existing networks.
arXiv Detail & Related papers (2020-11-06T12:17:01Z) - Locality Preserving Dense Graph Convolutional Networks with Graph
Context-Aware Node Representations [19.623379678611744]
Graph convolutional networks (GCNs) have been widely used for representation learning on graph data.
In many graph classification applications, GCN-based approaches have outperformed traditional methods.
We propose a locality-preserving dense GCN with graph context-aware node representations.
arXiv Detail & Related papers (2020-10-12T02:12:27Z) - Adversarial Graph Representation Adaptation for Cross-Domain Facial
Expression Recognition [86.25926461936412]
We propose a novel Adversarial Graph Representation Adaptation (AGRA) framework that unifies graph representation propagation with adversarial learning for cross-domain holistic-local feature co-adaptation.
We conduct extensive and fair experiments on several popular benchmarks and show that the proposed AGRA framework achieves superior performance over previous state-of-the-art methods.
arXiv Detail & Related papers (2020-08-03T13:27:24Z) - Building powerful and equivariant graph neural networks with structural
message-passing [74.93169425144755]
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
arXiv Detail & Related papers (2020-06-26T17:15:16Z)
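The structural message-passing item above propagates a one-hot encoding of the nodes alongside their features. The sketch below shows that augmentation step only; `one_hot_augment` is a hypothetical helper for illustration, not the paper's code.

```python
import numpy as np

def one_hot_augment(features):
    """Concatenate a one-hot node identifier onto each node's feature
    vector, so message passing can build a local context matrix around
    each node. Hypothetical sketch of the augmentation step only."""
    n = features.shape[0]
    # eye(n) gives each of the n nodes a distinct one-hot identifier.
    return np.concatenate([features, np.eye(n)], axis=1)

x = np.ones((3, 2))        # 3 nodes, 2 original features each
aug = one_hot_augment(x)
print(aug.shape)           # (3, 5): 2 features + 3 one-hot columns
```

A permutation-equivariant parametrization of the message and update functions, as the item describes, would then operate on these augmented features.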
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.