Homomorphism Counts as Structural Encodings for Graph Learning
- URL: http://arxiv.org/abs/2410.18676v1
- Date: Thu, 24 Oct 2024 12:09:01 GMT
- Title: Homomorphism Counts as Structural Encodings for Graph Learning
- Authors: Linus Bao, Emily Jin, Michael Bronstein, İsmail İlkan Ceylan, Matthias Lanzinger
- Abstract summary: Graph Transformers are popular neural networks that extend the well-known Transformer architecture to the graph domain.
We propose $\textit{motif structural encoding}$ (MoSE) as a flexible and powerful structural encoding framework based on counting graph homomorphisms.
- Score: 7.691872607259055
- Abstract: Graph Transformers are popular neural networks that extend the well-known Transformer architecture to the graph domain. These architectures operate by applying self-attention on graph nodes and incorporating graph structure through the use of positional encodings (e.g., Laplacian positional encoding) or structural encodings (e.g., random-walk structural encoding). The quality of such encodings is critical, since they provide the necessary $\textit{graph inductive biases}$ to condition the model on graph structure. In this work, we propose $\textit{motif structural encoding}$ (MoSE) as a flexible and powerful structural encoding framework based on counting graph homomorphisms. Theoretically, we compare the expressive power of MoSE to random-walk structural encoding and relate both encodings to the expressive power of standard message passing neural networks. Empirically, we observe that MoSE outperforms other well-known positional and structural encodings across a range of architectures, and it achieves state-of-the-art performance on widely studied molecular property prediction datasets.
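Concretely, a homomorphism from a pattern (motif) graph to a target graph is a vertex mapping that sends every pattern edge to an edge of the target; homomorphism counts of a fixed family of motifs are the quantities MoSE builds its encodings from. The brute-force sketch below is illustrative only (a practical implementation would use far more efficient counting algorithms than exhaustive enumeration):

```python
from itertools import product

def hom_count(pattern_edges, pattern_n, graph_adj):
    """Count homomorphisms from a pattern graph into a target graph by
    brute force: try every vertex mapping and keep those that send every
    pattern edge to an edge of the target (given as an adjacency matrix)."""
    n = len(graph_adj)
    count = 0
    for mapping in product(range(n), repeat=pattern_n):
        if all(graph_adj[mapping[u]][mapping[v]] for u, v in pattern_edges):
            count += 1
    return count

# Triangle motif (C3) mapped into the complete graph K3.
triangle = [(0, 1), (1, 2), (2, 0)]
k3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(hom_count(triangle, 3, k3))  # 6: the proper 3-colourings of a triangle
```

A structural encoding then attaches a vector of such counts (one entry per motif, typically in a rooted, per-node variant) to each node or graph.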
Related papers
- Graph Transformers without Positional Encodings [0.7252027234425334]
We introduce Eigenformer, a Graph Transformer employing a novel spectrum-aware attention mechanism that leverages the Laplacian spectrum of the graph.
We empirically show that it achieves performance competitive with SOTA Graph Transformers on a number of standard GNN benchmarks.
arXiv Detail & Related papers (2024-01-31T12:33:31Z)
- Neural Machine Translation with Dynamic Graph Convolutional Decoder [32.462919670070654]
We propose an end-to-end translation architecture from the (graph & sequence) structural inputs to the (graph & sequence) outputs, where the target translation and its corresponding syntactic graph are jointly modeled and generated.
We conduct extensive experiments on five widely acknowledged translation benchmarks, verifying our proposal achieves consistent improvements over baselines and other syntax-aware variants.
arXiv Detail & Related papers (2023-05-28T11:58:07Z)
- SE-GSL: A General and Effective Graph Structure Learning Framework through Structural Entropy Optimization [67.28453445927825]
Graph Neural Networks (GNNs) are the de facto solution for learning on structured data.
Existing graph structure learning (GSL) frameworks still lack robustness and interpretability.
This paper proposes a general GSL framework, SE-GSL, through structural entropy and the graph hierarchy abstracted in the encoding tree.
arXiv Detail & Related papers (2023-03-17T05:20:24Z)
- Structure-Aware Transformer for Graph Representation Learning [7.4124458942877105]
We show that node representations generated by the Transformer with positional encoding do not necessarily capture structural similarity between them.
We propose the Structure-Aware Transformer, a class of simple and flexible graph transformers built upon a new self-attention mechanism.
Our framework can leverage any existing GNN to extract the subgraph representation, and we show that it systematically improves performance relative to the base GNN model.
arXiv Detail & Related papers (2022-02-07T09:53:39Z) - Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
arXiv Detail & Related papers (2021-12-14T14:48:08Z) - Equivariant Subgraph Aggregation Networks [23.26140936226352]
While two graphs may not be distinguishable by an MPNN, they often contain distinguishable subgraphs.
This paper proposes a novel framework called Equivariant Subgraph Aggregation Networks (ESAN) to address this issue.
We develop novel variants of the 1-dimensional Weisfeiler-Leman (1-WL) test for graph isomorphism, and prove lower bounds on the expressiveness of ESAN.
We provide theoretical results that describe how design choices such as the subgraph selection policy and equivariant neural architecture affect our architecture's expressive power.
arXiv Detail & Related papers (2021-10-06T16:45:07Z) - GraphiT: Encoding Graph Structure in Transformers [37.33808493548781]
We show that viewing graphs as sets of node features together with structural and positional information can outperform representations learned with classical graph neural networks (GNNs).
Our model, GraphiT, encodes such information by (i) leveraging relative positional encoding strategies in self-attention scores based on positive definite kernels on graphs, and (ii) enumerating and encoding local sub-structures such as paths of short length.
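As an illustration of idea (i), one positive definite kernel on graphs is the heat kernel of the Laplacian. The sketch below is an assumed simplification, not GraphiT's exact formulation: it uses the heat kernel to modulate dummy self-attention scores so that structurally close node pairs are boosted.

```python
import numpy as np

# Minimal sketch (assumed simplification, not the paper's exact method):
# build the heat kernel exp(-beta * L) of the graph Laplacian, a positive
# definite kernel on nodes, and use it to modulate raw attention scores.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)      # path graph 0-1-2
L = np.diag(A.sum(axis=1)) - A              # combinatorial Laplacian
w, U = np.linalg.eigh(L)                    # eigendecomposition (L is symmetric)
beta = 0.5
K = U @ np.diag(np.exp(-beta * w)) @ U.T    # heat kernel: symmetric, PD

rng = np.random.default_rng(0)
scores = rng.random((3, 3))                 # dummy raw self-attention scores
mod = scores * K                            # kernel-modulated scores
attn = np.exp(mod) / np.exp(mod).sum(axis=1, keepdims=True)  # row softmax
```

Because K depends only on graph structure, the modulation acts as a relative positional bias: no absolute node positions are needed.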
arXiv Detail & Related papers (2021-06-10T11:36:22Z) - Do Transformers Really Perform Bad for Graph Representation? [62.68420868623308]
We present Graphormer, which is built upon the standard Transformer architecture.
Our key insight to utilizing Transformer in the graph is the necessity of effectively encoding the structural information of a graph into the model.
arXiv Detail & Related papers (2021-06-09T17:18:52Z) - Promoting Graph Awareness in Linearized Graph-to-Text Generation [72.83863719868364]
We study the ability of linearized models to encode local graph structures.
Our findings motivate graph-denoising scaffolds as a way to enrich the quality of models' implicit graph encodings.
We find that these denoising scaffolds lead to substantial improvements in downstream generation in low-resource settings.
arXiv Detail & Related papers (2020-12-31T18:17:57Z)
- Building powerful and equivariant graph neural networks with structural message-passing [74.93169425144755]
We propose a powerful and equivariant message-passing framework based on two ideas.
First, we propagate a one-hot encoding of the nodes, in addition to the features, in order to learn a local context matrix around each node.
Second, we propose methods for the parametrization of the message and update functions that ensure permutation equivariance.
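The first idea can be sketched in a few lines: propagating one-hot node identifiers through the adjacency matrix gives each node a context vector of walk counts indexed by the originating node (a minimal NumPy sketch, not the authors' full parametrization of the message and update functions):

```python
import numpy as np

# Minimal sketch of "propagate a one-hot encoding of the nodes":
# starting from one-hot identifiers, each multiplication by the adjacency
# matrix accumulates walk counts, so node i's row becomes a local context
# vector indexed by where the walks originated.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)  # path graph 0-1-2
X = np.eye(3)                           # one-hot node identifiers
context_1 = A @ X                       # 1-step context: the adjacency itself
context_2 = A @ context_1               # 2-step context: counts of 2-walks
```

Permutation equivariance is natural here: relabeling the nodes permutes the rows and columns of the context matrix consistently, which is what the paper's parametrized message and update functions are designed to preserve.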
arXiv Detail & Related papers (2020-06-26T17:15:16Z)
- Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting [63.04999833264299]
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
arXiv Detail & Related papers (2020-06-16T15:30:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.