R-GCN: The R Could Stand for Random
- URL: http://arxiv.org/abs/2203.02424v1
- Date: Fri, 4 Mar 2022 16:55:25 GMT
- Title: R-GCN: The R Could Stand for Random
- Authors: Vic Degraeve, Gilles Vandewiele, Femke Ongenae, Sofie Van Hoecke
- Abstract summary: The "Random Relational Graph Convolutional Network" (RR-GCN) constructs embeddings for nodes in Knowledge Graphs.
We show that RR-GCNs can compete with fully trained R-GCNs in both node classification and link prediction settings.
- Score: 2.221251076371994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The inception of Relational Graph Convolutional Networks (R-GCNs) marked a
milestone in the Semantic Web domain as it allows for end-to-end training of
machine learning models that operate on Knowledge Graphs (KGs). R-GCNs generate
a representation for a node of interest by repeatedly aggregating parametrised,
relation-specific transformations of its neighbours. However, in this paper, we
argue that the R-GCN's main contribution lies in this "message passing"
paradigm, rather than the learned parameters. To this end, we introduce the
"Random Relational Graph Convolutional Network" (RR-GCN), which constructs
embeddings for nodes in the KG by aggregating randomly transformed random
information from neighbours, i.e., with no learned parameters. We empirically
show that RR-GCNs can compete with fully trained R-GCNs in both node
classification and link prediction settings. The implications of these results
are two-fold: on the one hand, our technique can be used as a quick baseline
that novel KG embedding methods should be able to beat. On the other hand, it
demonstrates that further research might reveal more parameter-efficient
inductive biases for KGs.
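For intuition, the standard R-GCN update is h_i' = sigma( sum_r sum_{j in N_i^r} (1/c_{i,r}) W_r h_j + W_0 h_i ); the RR-GCN idea is to leave the W's random and untrained. Below is a minimal numpy sketch of one such random relational message-passing layer. All names, the per-node (rather than per-relation) normalisation, and the tanh non-linearity are illustrative assumptions, not the authors' exact implementation.
```python
import numpy as np

def random_rgcn_layer(h, triples, num_relations, out_dim, seed=0):
    """One R-GCN-style message-passing step with frozen random weights.

    h:        (num_nodes, in_dim) current node representations
    triples:  list of (subject, relation, object) index triples
    Returns:  (num_nodes, out_dim) updated representations
    """
    rng = np.random.default_rng(seed)
    num_nodes, in_dim = h.shape
    # One random, untrained projection per relation, plus a self-loop weight.
    w_rel = rng.normal(0, 1 / np.sqrt(in_dim), (num_relations, in_dim, out_dim))
    w_self = rng.normal(0, 1 / np.sqrt(in_dim), (in_dim, out_dim))

    out = h @ w_self                      # self-loop term
    degree = np.zeros(num_nodes)          # per-node normalisation (a simplification
                                          # of R-GCN's per-relation constant c_{i,r})
    msg = np.zeros((num_nodes, out_dim))
    for s, r, o in triples:
        msg[o] += h[s] @ w_rel[r]         # relation-specific transform of the neighbour
        degree[o] += 1
    out += msg / np.maximum(degree, 1)[:, None]
    return np.tanh(out)                   # elementwise non-linearity

# Tiny usage example on a toy KG with 4 nodes and 2 relations.
h0 = np.eye(4)                            # one-hot initial features
triples = [(0, 0, 1), (1, 1, 2), (2, 0, 3)]
h1 = random_rgcn_layer(h0, triples, num_relations=2, out_dim=8)
print(h1.shape)  # (4, 8)
```
Stacking a few such layers and training only a lightweight classifier on top of the resulting embeddings is the kind of quick baseline the abstract has in mind.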
Related papers
- Relational Graph Convolutional Networks Do Not Learn Sound Rules [13.66949379381985]
Graph neural networks (GNNs) are frequently used to predict missing facts in knowledge graphs (KGs).
Recent work has aimed to explain their predictions using Datalog, a widely used logic-based formalism.
We consider one of the most popular GNN architectures for KGs, R-GCN, and we provide two methods to extract rules that explain its predictions and are sound.
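For a sense of what "sound" means here: a rule is sound for a model if every fact the rule derives is also among the model's predictions. A hedged toy check in Python (the rule and predicate names are invented for illustration; this is not the paper's extraction method):
```python
# Toy soundness check for a Datalog-style rule over a set of predicted triples.
# Rule (illustrative, not from the paper):
#   worksFor(X, Y) :- memberOf(X, Z), partOf(Z, Y).
predicted = {
    ("alice", "memberOf", "lab1"),
    ("lab1", "partOf", "uni1"),
    ("alice", "worksFor", "uni1"),
    ("bob", "memberOf", "lab1"),
    # Missing ("bob", "worksFor", "uni1") makes the rule unsound here.
}

def rule_is_sound(triples):
    """The rule is sound iff every grounding of its body yields a head
    fact that is also present in the model's predictions."""
    member = [(s, o) for s, p, o in triples if p == "memberOf"]
    part = {(s, o) for s, p, o in triples if p == "partOf"}
    for x, z in member:
        for z2, y in part:
            if z == z2 and (x, "worksFor", y) not in triples:
                return False
    return True

print(rule_is_sound(predicted))  # False: bob's derived fact is missing
```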
arXiv Detail & Related papers (2024-08-14T15:46:42Z)
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
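A hedged sketch of the general idea, with a hard degree-based permutation standing in for the paper's learned differentiable one:
```python
import numpy as np

def calibrate_and_compress(adj, feats, kernel_size=2):
    """Toy stand-in for permutation-based graph calibration: order nodes
    by degree (a hard permutation; the paper learns a differentiable one),
    then compress consecutive node blocks, 1D-convolution style."""
    order = np.argsort(-adj.sum(axis=1))        # permutation by degree
    p = np.eye(len(order))[order]               # permutation matrix
    feats_sorted = p @ feats                    # calibrated feature matrix
    # Average consecutive blocks of `kernel_size` nodes (stride = kernel_size).
    n = (len(feats_sorted) // kernel_size) * kernel_size
    blocks = feats_sorted[:n].reshape(-1, kernel_size, feats.shape[1])
    return blocks.mean(axis=1)                  # compressed representation

adj = np.array([[0,1,1,1],[1,0,0,0],[1,0,0,1],[1,0,1,0]], dtype=float)
feats = np.arange(8, dtype=float).reshape(4, 2)
print(calibrate_and_compress(adj, feats).shape)  # (2, 2)
```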
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- A Variational Edge Partition Model for Supervised Graph Representation Learning [51.30365677476971]
This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities.
We partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs.
A variational inference framework is proposed to jointly learn a GNN based inference network that partitions the edges into different communities, these community-specific GNNs, and a GNN based predictor that combines community-specific GNNs for the end classification task.
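A hedged numpy sketch of the forward pass implied by this setup, with the edge-to-community assignment taken as given rather than variationally inferred as in the paper:
```python
import numpy as np

def community_gcn_forward(adj, feats, edge_logits, w_comms):
    """Toy edge-partition forward pass (illustrative): each edge's unit
    weight is split across K communities, each community gets its own
    propagation + projection, and the community-specific outputs are
    summed for the final representation."""
    k = edge_logits.shape[-1]
    # Softmax over communities: per-edge shares sum to the edge weight.
    e = np.exp(edge_logits - edge_logits.max(axis=-1, keepdims=True))
    shares = e / e.sum(axis=-1, keepdims=True)       # (n, n, k)
    out = 0.0
    for c in range(k):
        adj_c = adj * shares[:, :, c]                # community-specific weighted edges
        out = out + adj_c @ feats @ w_comms[c]       # community-specific GNN layer
    return out

rng = np.random.default_rng(0)
n, d, k = 5, 3, 2
adj = (rng.random((n, n)) < 0.4).astype(float)
feats = rng.normal(size=(n, d))
print(community_gcn_forward(adj, feats, rng.normal(size=(n, n, k)),
                            rng.normal(size=(k, d, d))).shape)  # (5, 3)
```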
arXiv Detail & Related papers (2022-02-07T14:37:50Z)
- Relational Graph Convolutional Networks: A Closer Look [1.8428580623654864]
We describe a reproduction of the Relational Graph Convolutional Network (RGCN).
Using our reproduction, we explain the intuition behind the model.
Our results empirically validate the correctness of our implementations.
arXiv Detail & Related papers (2021-07-21T11:25:11Z)
- Action Recognition with Kernel-based Graph Convolutional Networks [14.924672048447338]
Learning graph convolutional networks (GCNs) aims at generalizing deep learning to arbitrary non-regular domains.
We introduce a novel GCN framework that achieves spatial graph convolution in a reproducing kernel Hilbert space (RKHS).
The particularity of our GCN model also resides in its ability to achieve convolutions without explicitly realigning nodes in the receptive fields of the learned graph filters with those of the input graphs.
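A hedged toy version of alignment-free kernel convolution, comparing node features to a set of anchor points (stand-ins for learned filter parameters; not the paper's exact RKHS construction):
```python
import numpy as np

def kernel_graph_conv(adj, feats, anchors, gamma=1.0):
    """Toy kernel-based spatial graph convolution: each node is compared
    to a set of anchor points through an RBF kernel, so no explicit
    alignment between receptive-field nodes and filter nodes is needed."""
    # RBF kernel between every node feature and every anchor: k(x, a).
    d2 = ((feats[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    k = np.exp(-gamma * d2)                       # (num_nodes, num_anchors)
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    return (adj @ k) / deg                        # average kernel responses over neighbours

rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) < 0.5).astype(float)
feats = rng.normal(size=(6, 4))
anchors = rng.normal(size=(3, 4))                 # stand-ins for learned filter points
print(kernel_graph_conv(adj, feats, anchors).shape)  # (6, 3)
```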
arXiv Detail & Related papers (2020-12-28T11:02:51Z)
- On the Equivalence of Decoupled Graph Convolution Network and Label Propagation [60.34028546202372]
Some work shows that coupling the feature transformation with the propagation is inferior to decoupling them, as decoupling better supports deep graph propagation.
Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood.
We propose a new label propagation method named propagation then training Adaptively (PTA), which overcomes the flaws of the decoupled GCN.
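For context, a minimal sketch of a decoupled scheme in the APPNP style (prediction first, propagation after), which is the kind of model PTA builds on; this is not PTA itself:
```python
import numpy as np

def decoupled_forward(adj, feats, w, steps=10, alpha=0.1):
    """Toy decoupled GCN forward pass: prediction and propagation are
    separated, so the number of propagation steps is independent of the
    depth of the predictor."""
    # Symmetrically normalised adjacency with self-loops.
    a = adj + np.eye(len(adj))
    d = 1 / np.sqrt(a.sum(axis=1))
    a_norm = a * d[:, None] * d[None, :]
    z = feats @ w                      # "training" part: a simple linear predictor
    h = z
    for _ in range(steps):             # "propagation" part: logit smoothing
        h = (1 - alpha) * (a_norm @ h) + alpha * z
    return h

rng = np.random.default_rng(0)
adj = (rng.random((5, 5)) < 0.4).astype(float)
adj = np.maximum(adj, adj.T)           # make the toy graph undirected
feats = rng.normal(size=(5, 8))
print(decoupled_forward(adj, feats, rng.normal(size=(8, 3))).shape)  # (5, 3)
```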
arXiv Detail & Related papers (2020-10-23T13:57:39Z)
- Bi-GCN: Binary Graph Convolutional Network [57.733849700089955]
We propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and input node features.
Our Bi-GCN can reduce the memory consumption by an average of 30x for both the network parameters and input data, and accelerate the inference speed by an average of 47x.
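A hedged sketch of the binarisation idea: replace each real tensor by a {+1, -1} matrix plus one scaling scalar (XNOR-Net-style; not the exact Bi-GCN formulation):
```python
import numpy as np

def binarize(x):
    """Keep a real scaling factor per tensor so that alpha * sign(x)
    approximates x; only signs and one scalar need to be stored."""
    alpha = np.abs(x).mean()
    return alpha, np.sign(x)

def bi_gcn_layer(adj_norm, feats, w):
    """Toy binarised graph convolution: both the node features and the
    layer weights become {+1, -1} matrices plus one scalar each, which
    is where the memory savings come from."""
    a_f, b_f = binarize(feats)
    a_w, b_w = binarize(w)
    # The dense product now only involves +/-1 matrices and two scalars.
    return adj_norm @ (a_f * a_w * (b_f @ b_w))

rng = np.random.default_rng(0)
adj_norm = np.full((4, 4), 0.25)       # toy normalised adjacency
feats = rng.normal(size=(4, 6))
print(bi_gcn_layer(adj_norm, feats, rng.normal(size=(6, 2))).shape)  # (4, 2)
```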
arXiv Detail & Related papers (2020-10-15T07:26:23Z)
- Lightweight, Dynamic Graph Convolutional Networks for AMR-to-Text Generation [56.73834525802723]
Lightweight Dynamic Graph Convolutional Networks (LDGCNs) are proposed.
LDGCNs capture richer non-local interactions by synthesizing higher order information from the input graphs.
We develop two novel parameter saving strategies based on the group graph convolutions and weight tied convolutions to reduce memory usage and model complexity.
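A hedged sketch of the weight-tying idea: reuse one projection across hops so higher-order information costs no extra parameters (illustrative, not the exact LDGCN block):
```python
import numpy as np

def weight_tied_conv(adj_norm, feats, w, hops=3):
    """Toy weight-tied graph convolution: the same projection `w` is
    reused at every hop, so higher-order neighbourhood information is
    synthesised without adding parameters per layer."""
    h = feats
    out = 0.0
    for _ in range(hops):
        h = adj_norm @ h               # move one hop further out
        out = out + np.tanh(h @ w)     # same tied weights at every order
    return out

rng = np.random.default_rng(0)
adj_norm = np.full((5, 5), 0.2)        # toy normalised adjacency
feats = rng.normal(size=(5, 4))
print(weight_tied_conv(adj_norm, feats, rng.normal(size=(4, 4))).shape)  # (5, 4)
```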
arXiv Detail & Related papers (2020-10-09T06:03:46Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
- Adaptive Propagation Graph Convolutional Network [17.41698818541144]
Graph convolutional networks (GCNs) are a family of neural network models that perform inference on graph data.
We show that state-of-the-art results can be achieved by adapting the number of communication steps independently at every node.
We show that the proposed adaptive propagation GCN (AP-GCN) achieves superior or similar results to the best proposed models.
arXiv Detail & Related papers (2020-02-24T15:31:16Z)
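A hedged ACT-style sketch of per-node adaptive propagation in the spirit of AP-GCN (the halting unit and stopping rule here are illustrative assumptions, not the paper's exact formulation):
```python
import numpy as np

def adaptive_propagation(adj_norm, feats, halt_w, max_steps=10, eps=0.01):
    """Toy per-node adaptive propagation: each node keeps propagating
    until its cumulative halting probability exceeds 1 - eps, so
    different nodes take different numbers of communication steps."""
    h = feats
    out = np.zeros_like(feats)
    budget = np.ones(len(feats))              # remaining halting mass per node
    for _ in range(max_steps):
        h = adj_norm @ h
        p = 1 / (1 + np.exp(-(h @ halt_w)))   # per-node halting probability
        take = np.minimum(p, budget)          # don't spend more than remains
        out += take[:, None] * h
        budget -= take
        if (budget <= eps).all():             # every node has halted
            break
    return out + budget[:, None] * h          # assign leftover mass to last state

rng = np.random.default_rng(0)
adj_norm = np.full((4, 4), 0.25)              # toy normalised adjacency
feats = rng.normal(size=(4, 3))
print(adaptive_propagation(adj_norm, feats, rng.normal(size=3)).shape)  # (4, 3)
```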
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.