P-tensors: a General Formalism for Constructing Higher Order Message Passing Networks
- URL: http://arxiv.org/abs/2306.10767v1
- Date: Mon, 19 Jun 2023 08:21:30 GMT
- Title: P-tensors: a General Formalism for Constructing Higher Order Message Passing Networks
- Authors: Tianyi Sun, Andrew Hands, Risi Kondor
- Abstract summary: We show that higher order graph neural networks can achieve better accuracy than their standard message passing counterparts.
We formalize these structures as permutation equivariant tensors, or P-tensors, and derive a basis for all linear maps between arbitrary order equivariant P-tensors.
- Score: 5.257115841810258
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Several recent papers have shown that higher order graph neural
networks can achieve better accuracy than their standard message passing
counterparts, especially on highly structured graphs such as molecules. These
models typically work by considering higher order representations of subgraphs
contained within a given graph and then performing linear maps between them.
We formalize these structures as permutation equivariant tensors, or P-tensors,
and derive a basis for all linear maps between arbitrary order equivariant
P-tensors. Experimentally, we demonstrate this paradigm achieves state of the
art performance on several benchmark datasets.
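To make the equivariance requirement concrete, here is a minimal numpy sketch (an illustration, not the paper's implementation) of a first-order P-tensor stored as an n x c array over n reference nodes. The map below, a weighted mix of the identity and the broadcast row mean, is the well-known two-parameter family of order-1 to order-1 permutation-equivariant linear maps; permuting the reference nodes permutes the output rows identically.

```python
import numpy as np

def equivariant_map_order1(T, w1, w2):
    """Apply w1*identity + w2*(broadcast row mean) to a first-order
    P-tensor T of shape (n, c): both terms commute with permutations
    of the n reference nodes, so the map is permutation equivariant."""
    mean = T.mean(axis=0, keepdims=True)
    return w1 * T + w2 * np.broadcast_to(mean, T.shape)

rng = np.random.default_rng(0)
T = rng.standard_normal((5, 3))          # 5 reference nodes, 3 channels
perm = rng.permutation(5)

out = equivariant_map_order1(T, 0.7, 0.3)
out_of_permuted = equivariant_map_order1(T[perm], 0.7, 0.3)
assert np.allclose(out[perm], out_of_permuted)   # f(P.T) == P.f(T)
```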
Related papers
- Homomorphism Counts for Graph Neural Networks: All About That Basis [8.25219440625445]
We argue for a more fine-grained approach, which incorporates the homomorphism counts of all structures in the "basis" of the target pattern.
This yields strictly more expressive architectures without incurring any additional overhead in terms of computational complexity.
arXiv Detail & Related papers (2024-02-13T16:57:06Z)
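As one concrete instance of homomorphism counts used as node features: the number of homomorphisms of a k-cycle rooted at node v equals the number of closed k-walks from v, i.e. (A^k)_{vv}. The sketch below is illustrative only; the paper works with the full basis of the target pattern, not just cycles.

```python
import numpy as np

def rooted_cycle_hom_counts(A, max_k=5):
    """Node features whose columns are homomorphism counts of rooted
    cycles C_3, ..., C_max_k: hom(C_k rooted at v) = (A^k)_{vv}."""
    feats, Ak = [], A @ A                # Ak starts at A^2
    for _ in range(3, max_k + 1):
        Ak = Ak @ A                      # now A^k
        feats.append(np.diag(Ak))        # closed k-walks from each node
    return np.stack(feats, axis=1)

# 4-cycle 0-1-2-3-0 with a chord 0-2
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
print(rooted_cycle_hom_counts(A, max_k=4))
```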
- Sheaf Hypergraph Networks [10.785525697855498]
We introduce cellular sheaves for hypergraphs, a mathematical construction that adds extra structure to the conventional hypergraph.
Drawing inspiration from existing Laplacians in the literature, we develop two unique formulations of sheaf hypergraph Laplacians.
We employ these sheaf hypergraph Laplacians to design two categories of models: Sheaf Hypergraph Neural Networks and Sheaf Hypergraph Convolutional Networks.
arXiv Detail & Related papers (2023-09-29T10:25:43Z)
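For orientation, the "conventional hypergraph" that the sheaf construction enriches is usually equipped with the normalized Laplacian of Zhou et al., Delta = I - D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2}. The sketch below computes that baseline object only; the sheaf Laplacians in the paper additionally attach vector spaces and restriction maps, which this sketch does not attempt.

```python
import numpy as np

def hypergraph_laplacian(H, w):
    """Normalized hypergraph Laplacian (the trivial-sheaf baseline).
    H: |V| x |E| incidence matrix, w: hyperedge weight vector.
    Assumes no isolated vertices or empty hyperedges."""
    d_v = H @ w                          # vertex degrees
    d_e = H.sum(axis=0)                  # hyperedge degrees
    Dv = np.diag(1.0 / np.sqrt(d_v))
    theta = Dv @ H @ np.diag(w) @ np.diag(1.0 / d_e) @ H.T @ Dv
    return np.eye(H.shape[0]) - theta
```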
- From Hypergraph Energy Functions to Hypergraph Neural Networks [94.88564151540459]
We present an expressive family of parameterized, hypergraph-regularized energy functions.
We then demonstrate how minimizers of these energies effectively serve as node embeddings.
We draw parallels between the proposed bilevel hypergraph optimization, and existing GNN architectures in common use.
arXiv Detail & Related papers (2023-06-16T04:40:59Z)
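The "minimizers as embeddings" idea can be seen in miniature with a quadratic energy E(Y) = ||Y - X||^2 + lam * tr(Y^T L Y), whose gradient-descent updates look like diffusion or message-passing layers. This is only an illustration of the parallel drawn above; the paper's energy family is more expressive and the optimization is bilevel.

```python
import numpy as np

def embeddings_by_energy_descent(L, X, lam=1.0, lr=0.1, steps=200):
    """Minimize E(Y) = ||Y - X||^2 + lam * tr(Y^T L Y) by gradient
    descent; L should be a (normalized) graph/hypergraph Laplacian.
    Each update mixes Y with L @ Y, i.e. a message-passing-like step."""
    Y = X.copy()
    for _ in range(steps):
        grad = 2.0 * (Y - X) + 2.0 * lam * (L @ Y)
        Y -= lr * grad
    return Y
```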
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from Heterogeneous Graph Benchmark (HGB) and Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- G-MSM: Unsupervised Multi-Shape Matching with Graph-based Affinity Priors [52.646396621449]
G-MSM is a novel unsupervised learning approach for non-rigid shape correspondence.
We construct an affinity graph on a given set of training shapes in a self-supervised manner.
We demonstrate state-of-the-art performance on several recent shape correspondence benchmarks.
arXiv Detail & Related papers (2022-12-06T12:09:24Z)
- Ordered Subgraph Aggregation Networks [19.18478955240166]
Subgraph-enhanced graph neural networks (GNNs) have emerged, provably boosting the expressive power of standard (message-passing) GNNs.
Here, we introduce a theoretical framework and extend the known expressivity results of subgraph-enhanced GNNs.
We show that increasing subgraph size always increases the expressive power and develop a better understanding of their limitations.
arXiv Detail & Related papers (2022-06-22T15:19:34Z)
- Template based Graph Neural Network with Optimal Transport Distances [11.56532171513328]
Current Graph Neural Network (GNN) architectures rely on two important components: node feature embedding through message passing, and aggregation with a specialized form of pooling.
In this work, we propose a novel point of view, which places distances to some learnable graph templates at the core of the graph representation.
This distance embedding is constructed using an optimal transport distance: the Fused Gromov-Wasserstein (FGW) distance.
arXiv Detail & Related papers (2022-05-31T12:24:01Z)
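A hedged sketch of the template idea using the POT library's FGW solver (`ot.gromov.fused_gromov_wasserstein2`): represent a graph by its vector of FGW distances to a set of templates. In the paper the templates are learnable end to end; here they are simply given, and the helper name `fgw_embedding` is my own.

```python
import numpy as np
import ot  # POT: Python Optimal Transport

def fgw_embedding(C, F, templates, alpha=0.5):
    """C: n x n structure matrix, F: n x d node features.
    templates: list of (C_t, F_t) pairs. Returns one FGW distance
    per template; this vector is the graph's representation."""
    p = np.full(C.shape[0], 1.0 / C.shape[0])        # uniform node weights
    emb = []
    for C_t, F_t in templates:
        q = np.full(C_t.shape[0], 1.0 / C_t.shape[0])
        M = ot.dist(F, F_t)                          # feature cost matrix
        emb.append(ot.gromov.fused_gromov_wasserstein2(
            M, C, C_t, p, q, alpha=alpha))
    return np.array(emb)
```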
- Simple Graph Convolutional Networks [72.92604941595019]
We propose simple graph convolution operators that can be implemented in single-layer graph convolutional networks.
We show that our convolution operators are more theoretically grounded than many proposals in the literature, and exhibit state-of-the-art predictive performance on the considered benchmark datasets.
arXiv Detail & Related papers (2021-06-10T15:23:59Z)
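As a generic illustration of the single-layer pattern (the paper's specific operators differ), the sketch below precomputes k hops of symmetric-normalized propagation and applies one linear map, so all graph structure enters through a fixed operator and the network itself is a single layer.

```python
import numpy as np

def single_layer_graph_conv(A, X, W, k=2):
    """S^k X W with S = D^-1/2 (A + I) D^-1/2: fixed k-hop propagation
    followed by one linear layer (a generic single-layer construction,
    not the paper's specific operators)."""
    A_hat = A + np.eye(A.shape[0])            # self-loops
    d = A_hat.sum(axis=1)
    S = A_hat / np.sqrt(np.outer(d, d))       # symmetric normalization
    for _ in range(k):
        X = S @ X                             # k propagation hops
    return X @ W
```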
- Learning over Families of Sets -- Hypergraph Representation Learning for Higher Order Tasks [12.28143554382742]
We develop a hypergraph neural network to learn provably expressive representations of variable-sized hyperedges.
We evaluate performance on multiple real-world hypergraph datasets and demonstrate consistent, significant improvements in accuracy over state-of-the-art models.
arXiv Detail & Related papers (2021-01-19T18:37:50Z)
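One standard way to handle variable-sized hyperedges is a DeepSets-style sum decomposition, shown here as an illustration: it is permutation invariant and accepts hyperedges of any size. The paper's provably expressive architecture is its own construction, and all names below are mine.

```python
import torch
import torch.nn as nn

class HyperedgeEncoder(nn.Module):
    """Permutation-invariant encoder for hyperedges of any size:
    embed member nodes with phi, sum, then transform with rho."""
    def __init__(self, d_in, d_hid, d_out):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(d_in, d_hid), nn.ReLU())
        self.rho = nn.Linear(d_hid, d_out)

    def forward(self, node_feats, hyperedges):
        # hyperedges: list of LongTensors of node indices, varying length
        reps = [self.phi(node_feats[e]).sum(dim=0) for e in hyperedges]
        return self.rho(torch.stack(reps))   # one vector per hyperedge
```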
- Building powerful and equivariant graph neural networks with structural message-passing [74.93169425144755]
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
arXiv Detail & Related papers (2020-06-26T17:15:16Z)
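A minimal numpy sketch of the first idea above (propagating one-hot node identifiers so that each node accumulates a local context matrix); the equivariant parametrization of the message and update functions, the paper's second idea, is omitted here.

```python
import numpy as np

def propagate_node_contexts(A, steps=2):
    """Each node v keeps an n x 1 context matrix U[v], initialized to
    its one-hot identifier; averaging over neighbors for a few steps
    gives v a soft map of which nodes lie in its local neighborhood."""
    n = A.shape[0]
    U = np.eye(n)[:, :, None]                     # shape (n, n, 1)
    A_norm = A / np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    for _ in range(steps):
        U = np.einsum('vw,wnc->vnc', A_norm, U)   # aggregate contexts
    return U
```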
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
arXiv Detail & Related papers (2020-06-16T15:30:31Z)
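The simplest instance of substructure encoding is the triangle count: diag(A^3)/2 gives, for each node, the number of triangles it belongs to, and appending it to the node features is the kind of structural signal that standard message passing cannot recover on its own. GSN supports general substructures; this sketch (with my own helper name) shows only that simplest case.

```python
import numpy as np

def append_triangle_counts(A, X):
    """Append per-node triangle counts to node features: (A^3)_{vv}
    counts closed 3-walks from v, i.e. twice the triangles through v."""
    tri = np.diag(A @ A @ A) / 2.0
    return np.concatenate([X, tri[:, None]], axis=1)
```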