Weisfeiler and Leman Go Relational
- URL: http://arxiv.org/abs/2211.17113v1
- Date: Wed, 30 Nov 2022 15:56:46 GMT
- Title: Weisfeiler and Leman Go Relational
- Authors: Pablo Barcelo, Mikhail Galkin, Christopher Morris, Miguel Romero Orth
- Abstract summary: We investigate the limitations in the expressive power of the well-known Relational GCN and Compositional GCN architectures.
We introduce the $k$-RN architecture that provably overcomes the expressiveness limitations of the above two architectures.
- Score: 4.29881872550313
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs, modeling multi-relational data, improve numerous
applications such as question answering or graph logical reasoning. Many graph
neural networks for such data emerged recently, often outperforming shallow
architectures. However, the design of such multi-relational graph neural
networks is ad hoc, driven mainly by intuition and empirical insights. Up to
now, their expressivity, their relation to each other, and their (practical)
learning performance are poorly understood. Here, we initiate the study of
deriving a more principled understanding of multi-relational graph neural
networks. Namely, we investigate the limitations in the expressive power of the
well-known Relational GCN and Compositional GCN architectures and shed some
light on their practical learning performance. By aligning both architectures
with a suitable version of the Weisfeiler-Leman test, we establish under which
conditions both models have the same expressive power in distinguishing
non-isomorphic (multi-relational) graphs or vertices with different structural
roles. Further, by leveraging recent progress in designing expressive graph
neural networks, we introduce the $k$-RN architecture that provably overcomes
the expressiveness limitations of the above two architectures. Empirically, we
confirm our theoretical findings in a vertex classification setting over small
and large multi-relational graphs.
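To make the abstract's Weisfeiler-Leman alignment concrete, here is a minimal sketch of 1-WL color refinement extended with relation-typed edges, a simplified cousin of the multi-relational test the paper aligns R-GCN and CompGCN with. The function name, the triple format, and the directed, outgoing-only neighborhoods are our simplifications, not the paper's definition.

```python
def relational_wl(vertices, edges, rounds=3):
    """1-WL color refinement on a multi-relational graph.
    `edges` is a list of (source, relation, target) triples. Vertices
    start with a uniform color; each round refines a vertex's color with
    the multiset of (relation, neighbor-color) pairs it sees."""
    color = {v: 0 for v in vertices}
    for _ in range(rounds):
        signature = {}
        for v in vertices:
            neigh = sorted((r, color[t]) for (s, r, t) in edges if s == v)
            signature[v] = (color[v], tuple(neigh))
        # Compress signatures back into small integer colors.
        palette = {sig: i for i, sig in enumerate(sorted(set(signature.values())))}
        color = {v: palette[sig] for v, sig in signature.items()}
    return color

# Two vertices end with different colors iff this test separates their
# structural roles under the given relations.
triples = [("a", "likes", "b"), ("b", "knows", "c"), ("a", "knows", "c")]
print(relational_wl(["a", "b", "c"], triples))
```

A refinement round that never splits any color class has stabilized, so two multi-relational graphs whose stable color histograms differ are certainly non-isomorphic; the converse does not hold in general, which is exactly the kind of expressiveness gap the $k$-RN construction targets.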
Related papers
- Foundations and Frontiers of Graph Learning Theory [81.39078977407719]
Recent advancements in graph learning have revolutionized the way to understand and analyze data with complex structures.
Graph Neural Networks (GNNs), i.e. neural network architectures designed for learning graph representations, have become a popular paradigm.
This article provides a comprehensive summary of the theoretical foundations and breakthroughs concerning the approximation and learning behaviors intrinsic to prevalent graph learning models.
arXiv Detail & Related papers (2024-07-03T14:07:41Z)
- Unsupervised Graph Neural Architecture Search with Disentangled Self-supervision [51.88848982611515]
Unsupervised graph neural architecture search remains unexplored in the literature.
We propose a novel Disentangled Self-supervised Graph Neural Architecture Search model.
Our model is able to achieve state-of-the-art performance against several baseline methods in an unsupervised manner.
arXiv Detail & Related papers (2024-03-08T05:23:55Z)
- When Representations Align: Universality in Representation Learning Dynamics [8.188549368578704]
We derive an effective theory of representation learning under the assumption that the encoding map from input to hidden representation and the decoding map from representation to output are arbitrary smooth functions.
We show through experiments that the effective theory describes aspects of representation learning dynamics across a range of deep networks with different activation functions and architectures.
arXiv Detail & Related papers (2024-02-14T12:48:17Z)
- Neural Architecture Retrieval [27.063268631346713]
We define a new problem Neural Architecture Retrieval which retrieves a set of existing neural architectures with similar designs to the query neural architecture.
Existing graph pre-training strategies cannot handle the computational graphs of neural architectures due to their size and motifs.
We introduce multi-level contrastive learning to achieve accurate graph representation learning.
arXiv Detail & Related papers (2023-07-16T01:56:41Z)
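The entry above mentions multi-level contrastive learning only in passing. As generic background (a standard InfoNCE-style objective, not the paper's multi-level formulation), here is a minimal NumPy sketch of contrastive representation learning over a batch of embeddings.

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Generic InfoNCE loss: each anchor embedding should score highest
    against its own positive among all positives in the batch."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature               # (batch, batch) similarities
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))           # diagonal = matched pairs

rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))                    # e.g., two views of 8 graphs
z2 = z1 + 0.05 * rng.normal(size=(8, 16))        # slightly perturbed views
print(info_nce(z1, z2))
```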
- A Theory of Link Prediction via Relational Weisfeiler-Leman on Knowledge Graphs [6.379544211152605]
Graph neural networks are prominent models for representation learning over graph-structured data.
Our goal is to provide a systematic understanding of the landscape of graph neural networks for knowledge graphs.
arXiv Detail & Related papers (2023-02-04T17:40:03Z)
- Convolutional Learning on Multigraphs [153.20329791008095]
We develop convolutional information processing on multigraphs and introduce convolutional multigraph neural networks (MGNNs).
To capture the complex dynamics of information diffusion within and across each of the multigraph's classes of edges, we formalize a convolutional signal processing model.
We develop a multigraph learning architecture, including a sampling procedure to reduce computational complexity.
The introduced architecture is applied to optimal wireless resource allocation and to a hate speech localization task, offering improved performance over traditional graph neural networks.
arXiv Detail & Related papers (2022-09-23T00:33:04Z)
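To make the multigraph model in the summary above concrete, here is a minimal sketch, assuming one first-order filter per edge class with summation as the aggregation across classes; the paper's convolutional signal processing model is more general than this.

```python
import numpy as np

def multigraph_conv(adjs, X, weights):
    """One convolutional layer on a multigraph: `adjs` holds one (n, n)
    adjacency matrix per edge class, `weights` one (d_in, d_out) filter
    per class. Class-wise diffusions are aggregated by summation."""
    out = sum(A @ X @ W for A, W in zip(adjs, weights))
    return np.maximum(out, 0.0)  # ReLU nonlinearity

rng = np.random.default_rng(1)
n, d_in, d_out = 5, 4, 3
adjs = [rng.integers(0, 2, size=(n, n)) for _ in range(2)]  # two edge classes
Ws = [rng.normal(size=(d_in, d_out)) for _ in range(2)]
print(multigraph_conv(adjs, rng.normal(size=(n, d_in)), Ws).shape)  # (5, 3)
```

Note how close this is to the R-GCN update of the main paper: a separate linear map per relation, merged by a permutation-invariant aggregation.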
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
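The entry above only names the mechanism; as a rough illustration under our own assumptions, the sketch below decouples propagation from transformation and weights each hop's representation with a learned retention score, which is our reading of "adaptively incorporate information from large receptive fields". The projection vector and hop count are placeholders, not the paper's trained parameters.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def dagnn_propagate(A_hat, H, s, hops=10):
    """Decoupled propagation: collect representations at hop depths
    0..hops, score each depth per vertex with projection vector `s`,
    and combine them adaptively. `A_hat` is a normalized adjacency."""
    reps = [H]
    for _ in range(hops):
        reps.append(A_hat @ reps[-1])            # one more hop of smoothing
    stacked = np.stack(reps)                     # (hops+1, n, d)
    scores = sigmoid(stacked @ s)                # (hops+1, n) retention scores
    return np.einsum("kn,knd->nd", scores, stacked)

rng = np.random.default_rng(2)
n, d = 6, 4
A = rng.integers(0, 2, size=(n, n)).astype(float)
A_hat = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)  # row-normalize
H = rng.normal(size=(n, d))
s = rng.normal(size=d)                           # learnable in a real model
print(dagnn_propagate(A_hat, H, s, hops=3).shape)  # (6, 4)
```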
- Geometrically Principled Connections in Graph Neural Networks [66.51286736506658]
We argue geometry should remain the primary driving force behind innovation in the emerging field of geometric deep learning.
We relate graph neural networks to widely successful computer graphics and data approximation models: radial basis functions (RBFs).
We introduce affine skip connections, a novel building block formed by combining a fully connected layer with any graph convolution operator.
arXiv Detail & Related papers (2020-04-06T13:25:46Z)
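The affine skip connection described in the entry above has a simple form. Below is a minimal sketch under our assumptions: a row-normalized adjacency product stands in for "any graph convolution operator", and the skip path is a vertex-wise fully connected (affine) layer added to the convolution output.

```python
import numpy as np

def affine_skip_layer(A_hat, X, W_conv, W_skip, b_skip):
    """Graph convolution plus an affine skip connection: the skip path
    is a fully connected layer on the raw vertex features, added to the
    output of the graph convolution operator."""
    conv = A_hat @ X @ W_conv        # stand-in for an arbitrary graph conv
    skip = X @ W_skip + b_skip       # affine map, applied vertex-wise
    return np.maximum(conv + skip, 0.0)

rng = np.random.default_rng(3)
n, d_in, d_out = 5, 4, 4
A = rng.integers(0, 2, size=(n, n)).astype(float)
A_hat = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)
X = rng.normal(size=(n, d_in))
out = affine_skip_layer(A_hat, X, rng.normal(size=(d_in, d_out)),
                        rng.normal(size=(d_in, d_out)), np.zeros(d_out))
print(out.shape)  # (5, 4)
```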
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
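As a toy illustration of the tensor setup in the entry above: each slice of a 3-way adjacency tensor (one graph per slice) gets its own convolution, and slice outputs are mixed with learned coefficients. The layer shape and the mixing scheme are our assumptions, not the paper's exact architecture.

```python
import numpy as np

def tensor_gcn_layer(A_tensor, X, W, alpha):
    """One layer over a collection of graphs stored as a 3-way tensor
    `A_tensor` of shape (slices, n, n): convolve vertex features on each
    slice, then mix the slice outputs with coefficients `alpha`."""
    per_slice = np.stack([A @ X @ W for A in A_tensor])  # (slices, n, d_out)
    mixed = np.tensordot(alpha, per_slice, axes=1)       # weighted slice sum
    return np.maximum(mixed, 0.0)

rng = np.random.default_rng(4)
slices, n, d_in, d_out = 3, 5, 4, 2
A_tensor = rng.integers(0, 2, size=(slices, n, n)).astype(float)
out = tensor_gcn_layer(A_tensor, rng.normal(size=(n, d_in)),
                       rng.normal(size=(d_in, d_out)), np.ones(slices) / slices)
print(out.shape)  # (5, 2)
```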
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.