Building powerful and equivariant graph neural networks with structural
message-passing
- URL: http://arxiv.org/abs/2006.15107v3
- Date: Fri, 23 Oct 2020 12:03:42 GMT
- Title: Building powerful and equivariant graph neural networks with structural
message-passing
- Authors: Clement Vignac, Andreas Loukas, Pascal Frossard
- Abstract summary: We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
- Score: 74.93169425144755
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Message-passing has proved to be an effective way to design graph neural
networks, as it is able to leverage both permutation equivariance and an
inductive bias towards learning local structures in order to achieve good
generalization. However, current message-passing architectures have a limited
representation power and fail to learn basic topological properties of graphs.
We address this problem and propose a powerful and equivariant message-passing
framework based on two ideas: first, we propagate a one-hot encoding of the
nodes, in addition to the features, in order to learn a local context matrix
around each node. This matrix contains rich local information about both
features and topology and can eventually be pooled to build node
representations. Second, we propose methods for the parametrization of the
message and update functions that ensure permutation equivariance. Having a
representation that is independent of the specific choice of the one-hot
encoding permits inductive reasoning and leads to better generalization
properties. Experimentally, our model can predict various graph topological
properties on synthetic data more accurately than previous methods and achieves
state-of-the-art results on molecular graph regression on the ZINC dataset.
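The two ideas in the abstract can be illustrated with a minimal sketch. This is a hypothetical simplification, not the authors' actual model: each node starts from its own one-hot encoding, and a fixed neighbourhood average stands in for the learned permutation-equivariant message and update functions (averaging commutes with any relabelling of the nodes). The resulting local context is concatenated with the input features to build node representations.

```python
import numpy as np

def structural_message_passing(adj, features, num_layers=2):
    """Hedged sketch of structural message-passing: propagate a one-hot
    encoding of the nodes alongside the features, then pool the local
    context into node representations."""
    n = adj.shape[0]
    context = np.eye(n)                         # node i starts with one-hot e_i
    deg = adj.sum(axis=1, keepdims=True) + 1.0  # +1 accounts for the self-loop
    for _ in range(num_layers):
        # average each node's encoding with its neighbours'; the mean commutes
        # with node relabelling, so this step is permutation equivariant (the
        # real model uses learned equivariant message and update functions)
        context = (context + adj @ context) / deg
    # context rows now summarise each node's local neighbourhood; concatenate
    # with the input features to form the node representations
    return np.concatenate([context, features], axis=1)
```

On a path graph, symmetric nodes (the two endpoints) receive mirror-image context rows, which reflects the encoding-independence the paper argues for.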
Related papers
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose generates data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first phase, it models the distribution of features associated with the nodes of the given graph; in the second, it completes the edge features conditionally on the node features.
arXiv Detail & Related papers (2022-12-01T11:49:07Z)
- Local Permutation Equivariance For Graph Neural Networks [2.208242292882514]
We develop a new method, locally permutation-equivariant graph neural networks, which provides a framework for building graph neural networks that operate on local node neighbourhoods.
We experimentally validate the method on a range of graph benchmark classification tasks.
arXiv Detail & Related papers (2021-11-23T13:10:34Z)
- Learning to Learn Graph Topologies [27.782971146122218]
We learn a mapping from node data to the graph structure based on the idea of learning to optimise (L2O).
The model is trained in an end-to-end fashion with pairs of node data and graph samples.
Experiments on both synthetic and real-world data demonstrate that our model is more efficient than classic iterative algorithms in learning a graph with specific topological properties.
arXiv Detail & Related papers (2021-10-19T08:42:38Z)
- GraphiT: Encoding Graph Structure in Transformers [37.33808493548781]
We show that viewing graphs as sets of node features enriched with structural and positional information can outperform representations learned with classical graph neural networks (GNNs).
Our model, GraphiT, encodes such information by (i) leveraging relative positional encoding strategies in self-attention scores based on positive definite kernels on graphs, and (ii) enumerating and encoding local sub-structures such as paths of short length.
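Idea (i) can be sketched as follows. This is a hedged illustration, not GraphiT itself: as the positive definite kernel on the graph we assume the heat kernel of the graph Laplacian, and use it to bias plain dot-product self-attention so that structurally close nodes attend to each other more strongly.

```python
import numpy as np

def kernel_self_attention(adj, X, beta=1.0):
    """Illustrative kernel-modulated self-attention on a graph (assumes the
    heat kernel exp(-beta * L) as the relative positional encoding)."""
    L = np.diag(adj.sum(axis=1)) - adj        # combinatorial graph Laplacian
    w, V = np.linalg.eigh(L)                  # L is symmetric: eigendecompose
    K = V @ np.diag(np.exp(-beta * w)) @ V.T  # heat kernel, positive definite
    scores = X @ X.T / np.sqrt(X.shape[1])    # plain dot-product attention
    weights = K * np.exp(scores)              # kernel-modulated, entrywise
    weights = weights / weights.sum(axis=1, keepdims=True)
    return weights @ X                        # attended node features
```

The heat kernel is entrywise non-negative and decays with graph distance, so the normalisation is well defined and the attention mass concentrates on each node's structural neighbourhood.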
arXiv Detail & Related papers (2021-06-10T11:36:22Z)
- Symmetry-driven graph neural networks [1.713291434132985]
We introduce two graph network architectures that are equivariant to several types of transformations affecting the node coordinates.
We demonstrate these capabilities on a synthetic dataset composed of $n$-dimensional geometric objects.
arXiv Detail & Related papers (2021-05-28T18:54:12Z)
- Self-Supervised Graph Representation Learning via Topology Transformations [61.870882736758624]
We present the Topology Transformation Equivariant Representation learning, a general paradigm of self-supervised learning for node representations of graph data.
In experiments, we apply the proposed model to the downstream node and graph classification tasks, and results show that the proposed method outperforms the state-of-the-art unsupervised approaches.
arXiv Detail & Related papers (2021-05-25T06:11:03Z)
- GraphFormers: GNN-nested Transformers for Representation Learning on Textual Graph [53.70520466556453]
We propose GraphFormers, where layerwise GNN components are nested alongside the transformer blocks of language models.
With the proposed architecture, the text encoding and the graph aggregation are fused into an iterative workflow.
In addition, a progressive learning strategy is introduced, where the model is successively trained on manipulated data and original data to reinforce its capability of integrating information on graphs.
arXiv Detail & Related papers (2021-05-06T12:20:41Z)
- Topological Regularization for Graph Neural Networks Augmentation [12.190045459064413]
We propose a feature augmentation method for graph nodes based on topological regularization.
We have carried out extensive experiments on a large number of datasets to demonstrate the effectiveness of our model.
arXiv Detail & Related papers (2021-04-03T01:37:44Z)
- Permutation-equivariant and Proximity-aware Graph Neural Networks with Stochastic Message Passing [88.30867628592112]
Graph neural networks (GNNs) are emerging machine learning models on graphs.
Permutation-equivariance and proximity-awareness are two important properties highly desirable for GNNs.
We show that existing GNNs, mostly based on the message-passing mechanism, cannot simultaneously preserve the two properties.
In order to preserve node proximities, we augment existing GNNs with stochastic node representations.
arXiv Detail & Related papers (2020-09-05T16:46:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.