Structural Node Embeddings with Homomorphism Counts
- URL: http://arxiv.org/abs/2308.15283v2
- Date: Mon, 20 Nov 2023 15:03:58 GMT
- Title: Structural Node Embeddings with Homomorphism Counts
- Authors: Hinrikus Wolf, Luca Oeljeklaus, Pascal Kühner, Martin Grohe
- Abstract summary: Homomorphism counts capture local structural information.
We experimentally show the effectiveness of homomorphism-count-based node embeddings.
Our approach capitalises on the efficient computability of graph homomorphism counts for bounded-treewidth graph classes.
- Score: 2.0131144893314232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph homomorphism counts, first explored by Lovász in 1967, have recently
garnered interest as a powerful tool in graph-based machine learning. Grohe
(PODS 2020) proposed the theoretical foundations for using homomorphism counts
in machine learning on both graph-level and node-level tasks. By their very
nature, these counts capture local structural information, which enables the
creation of robust structural embeddings. While a first approach for
graph-level tasks was made by Nguyen and Maehara (ICML 2020), we experimentally
show the effectiveness of homomorphism-count-based node embeddings. Enriched
with node labels, node weights, and edge weights, these embeddings offer an
interpretable representation of graph data, allowing for enhanced
explainability of machine learning models.
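To make the underlying notion concrete, here is a brief recap of the
Lovász-style textbook definitions; the paper's precise pattern family and
weighting scheme are not reproduced here:

```latex
% hom(F, G): the number of edge-preserving maps from a pattern F to G.
\[
  \hom(F,G) \;=\; \bigl|\{\, h\colon V(F)\to V(G) \;:\; \{u,w\}\in E(F) \Rightarrow \{h(u),h(w)\}\in E(G) \,\}\bigr|
\]
% Rooting the pattern at r and anchoring it at a node v of G gives a
% node-level count, and a fixed family of rooted patterns yields an embedding:
\[
  \hom\bigl((F,r),(G,v)\bigr) \;=\; \bigl|\{\, h \text{ as above} \;:\; h(r)=v \,\}\bigr|,
  \qquad
  \phi(v) \;=\; \Bigl(\hom\bigl((F_1,r_1),(G,v)\bigr), \dots, \hom\bigl((F_k,r_k),(G,v)\bigr)\Bigr).
\]
```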
We propose a theoretical framework for isomorphism-invariant,
homomorphism-count-based embeddings which lend themselves to a wide variety of
downstream tasks. Our approach capitalises on the efficient computability of
graph homomorphism counts for bounded-treewidth graph classes, rendering it a
practical solution for real-world applications. We demonstrate the
expressivity of these embeddings through experiments on benchmark datasets.
Although our results do not match the accuracy of state-of-the-art neural
architectures, they are comparable to those of other advanced graph learning
models. Remarkably, our approach distinguishes itself by ensuring
explainability for each individual feature. By integrating interpretable
machine learning algorithms such as SVMs or Random Forests, we establish a
seamless, end-to-end explainable pipeline. Our study contributes to the
advancement of graph-based techniques that offer both performance and
interpretability.
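As a concrete illustration of this recipe, the sketch below builds node
embeddings from rooted homomorphism counts and notes where an interpretable
classifier plugs in. It is a minimal brute-force toy: the pattern family, the
adjacency-dict representation, and the helper names are illustrative
assumptions, and the paper's efficient dynamic programming over tree
decompositions of bounded-treewidth patterns is not implemented here.

```python
from itertools import product

# Toy rooted patterns as (edge list, root). Small paths and triangles are
# classic bounded-treewidth examples; the paper's actual pattern family is
# an assumption here, not taken from the source.
PATTERNS = [
    ([(0, 1)], 0),                  # rooted edge: hom count equals deg(v)
    ([(0, 1), (1, 2)], 0),          # rooted path with two edges
    ([(0, 1), (1, 2), (2, 0)], 0),  # rooted triangle
]


def rooted_hom_count(pattern_edges, root, adj, v):
    """Count maps h: V(F) -> V(G) with h(root) = v preserving every edge of F.

    Brute force over all |V(G)|^|V(F)| assignments, so only viable for tiny
    patterns; real implementations use dynamic programming over a tree
    decomposition, which is polynomial for bounded-treewidth patterns.
    """
    n_pat = 1 + max(max(e) for e in pattern_edges)
    nodes = list(adj)
    count = 0
    for h in product(nodes, repeat=n_pat):
        if h[root] != v:
            continue
        # adj is symmetric, so one membership test per undirected edge suffices
        if all(h[b] in adj[h[a]] for a, b in pattern_edges):
            count += 1
    return count


def node_embedding(adj, v):
    """Embed v as the vector of rooted homomorphism counts over PATTERNS."""
    return [rooted_hom_count(edges, root, adj, v) for edges, root in PATTERNS]


# Example: a 4-cycle 0-1-2-3 with chord 1-3, as a symmetric adjacency dict.
adj = {0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
X = [node_embedding(adj, v) for v in sorted(adj)]
print(X)  # one structural feature row per node

# Each column refers back to a concrete pattern, so feeding X into an
# interpretable model (e.g. sklearn's RandomForestClassifier or an SVM)
# yields feature importances of the form "triangles anchored at this node".
```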
Related papers
- Isomorphic-Consistent Variational Graph Auto-Encoders for Multi-Level Graph Representation Learning [9.039193854524763]
We propose the Isomorphic-Consistent VGAE (IsoC-VGAE) for task-agnostic graph representation learning.
We first devise a decoding scheme that provides a theoretical guarantee of preserving isomorphic consistency.
We then propose the Inverse Graph Neural Network (Inv-GNN) decoder as its intuitive realization.
arXiv Detail & Related papers (2023-12-09T10:16:53Z)
- Bures-Wasserstein Means of Graphs [60.42414991820453]
We propose a novel framework for defining a graph mean via embeddings in the space of smooth graph signal distributions.
By finding a mean in this embedding space, we can recover a mean graph that preserves structural information.
We establish the existence and uniqueness of the novel graph mean, and provide an iterative algorithm for computing it.
arXiv Detail & Related papers (2023-05-31T11:04:53Z)
- A Multi-scale Graph Signature for Persistence Diagrams based on Return Probabilities of Random Walks [1.745838188269503]
We explore the use of a family of multi-scale graph signatures to enhance the robustness of topological features.
We propose a deep learning architecture to handle this set input.
Experiments on benchmark graph classification datasets demonstrate that our proposed architecture outperforms other persistent homology-based methods.
arXiv Detail & Related papers (2022-09-28T17:30:27Z)
- Graph Self-supervised Learning with Accurate Discrepancy Learning [64.69095775258164]
We propose a framework, coined Discrepancy-based Self-supervised LeArning (D-SLA), that aims to learn the exact discrepancy between the original and the perturbed graphs.
We validate our method on various graph-related downstream tasks, including molecular property prediction, protein function prediction, and link prediction tasks, on which our model largely outperforms relevant baselines.
arXiv Detail & Related papers (2022-02-07T08:04:59Z)
- Graph Kernel Neural Networks [53.91024360329517]
We propose to use graph kernels, i.e. kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain.
This allows us to define an entirely structural model that does not require computing the embedding of the input graph.
Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability.
arXiv Detail & Related papers (2021-12-14T14:48:08Z)
- Learning to Learn Graph Topologies [27.782971146122218]
We learn a mapping from node data to the graph structure based on the idea of learning to optimise (L2O).
The model is trained in an end-to-end fashion with pairs of node data and graph samples.
Experiments on both synthetic and real-world data demonstrate that our model is more efficient than classic iterative algorithms in learning a graph with specific topological properties.
arXiv Detail & Related papers (2021-10-19T08:42:38Z)
- Multi-Level Graph Contrastive Learning [38.022118893733804]
We propose a Multi-Level Graph Contrastive Learning (MLGCL) framework for learning robust representations of graph data by contrasting different space views of graphs.
The original graph is a first-order approximation structure and contains uncertainty or error, while the $k$NN graph generated from encoded features preserves high-order proximity.
Extensive experiments indicate MLGCL achieves promising results compared with the existing state-of-the-art graph representation learning methods on seven datasets.
arXiv Detail & Related papers (2021-07-06T14:24:43Z)
- Self-supervised Graph-level Representation Learning with Local and Global Structure [71.45196938842608]
We propose a unified framework called Local-instance and Global-semantic Learning (GraphLoG) for self-supervised whole-graph representation learning.
Besides preserving the local similarities, GraphLoG introduces the hierarchical prototypes to capture the global semantic clusters.
An efficient online expectation-maximization (EM) algorithm is further developed for learning the model.
arXiv Detail & Related papers (2021-06-08T05:25:38Z)
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve strong performance on various learning tasks over geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
arXiv Detail & Related papers (2021-04-18T10:40:37Z)
- Scaling up graph homomorphism for classification via sampling [1.662966122370634]
We study the use of graph homomorphism density features as a scalable alternative to homomorphism numbers.
We propose a high-performance implementation of a simple sampling algorithm which computes additive approximations of homomorphism densities.
arXiv Detail & Related papers (2021-04-08T20:25:37Z)
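The additive-approximation idea behind the entry above is easy to state
generically. Below is a minimal Monte Carlo sketch under the same
adjacency-dict convention as earlier; it illustrates the standard estimator,
not the high-performance implementation the paper describes.

```python
import random

def hom_density_estimate(pattern_edges, adj, samples=100_000, rng=random):
    """Additively approximate the homomorphism density t(F, G).

    t(F, G) = hom(F, G) / |V(G)|^|V(F)| is the probability that a uniformly
    random map V(F) -> V(G) preserves every pattern edge, so sampling random
    maps and averaging the indicator gives an unbiased estimate whose additive
    error shrinks like O(1 / sqrt(samples)) by standard concentration bounds.
    """
    n_pat = 1 + max(max(e) for e in pattern_edges)
    nodes = list(adj)
    hits = 0
    for _ in range(samples):
        h = [rng.choice(nodes) for _ in range(n_pat)]
        if all(h[b] in adj[h[a]] for a, b in pattern_edges):
            hits += 1
    return hits / samples

# Example: triangle density of the chorded 4-cycle from earlier.
adj = {0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
triangle = [(0, 1), (1, 2), (2, 0)]
print(hom_density_estimate(triangle, adj))  # exact value is 12 / 4**3 = 0.1875
```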
"Graph Substructure Networks" (GSN) is a topologically-aware message passing scheme based on substructure encoding.
We show that it is strictly more expressive than the Weisfeiler-Leman (WL) graph isomorphism test.
We perform an extensive evaluation on graph classification and regression tasks and obtain state-of-the-art results in diverse real-world settings.
arXiv Detail & Related papers (2020-06-16T15:30:31Z)
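Since the entry above positions GSN against the Weisfeiler-Leman test, a
minimal sketch of 1-WL colour refinement may help fix ideas; it is written
from the textbook description of the procedure, not taken from the GSN paper.

```python
def wl_colors(adj, rounds=None):
    """1-dimensional Weisfeiler-Leman colour refinement.

    Repeatedly relabel each node by its own colour plus the multiset of its
    neighbours' colours; graphs whose stable colour histograms differ are
    certainly non-isomorphic, while pairs 1-WL cannot separate are exactly
    the kind of failure substructure counts are meant to address.
    """
    colors = {v: 0 for v in adj}
    for _ in range(rounds if rounds is not None else len(adj)):
        # hash() is a pragmatic stand-in for an injective relabelling
        new = {v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
               for v in adj}
        if len(set(new.values())) == len(set(colors.values())):
            break  # partition has stabilised
        colors = new
    return colors

# Example: on the chorded 4-cycle, nodes 0 and 2 (degree 2) end up with one
# colour and nodes 1 and 3 (degree 3) with another.
adj = {0: {1, 3}, 1: {0, 2, 3}, 2: {1, 3}, 3: {0, 1, 2}}
print(wl_colors(adj))
```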