xNeuSM: Explainable Neural Subgraph Matching with Graph Learnable
Multi-hop Attention Networks
- URL: http://arxiv.org/abs/2312.01612v1
- Date: Mon, 4 Dec 2023 04:03:30 GMT
- Title: xNeuSM: Explainable Neural Subgraph Matching with Graph Learnable
Multi-hop Attention Networks
- Authors: Duc Q. Nguyen, Thanh Toan Nguyen, Tho Quan
- Abstract summary: Subgraph matching is a challenging problem with a wide range of applications in database systems, biochemistry, and cognitive science.
Traditional graph-matching algorithms provide precise results but struggle on large graph instances due to the NP-completeness of the problem.
This article introduces Graph Learnable Multi-hop Attention Networks (GLeMA), which adaptively learn the parameters governing the attention factor decay for each node across hops.
- Score: 2.084955943646144
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Subgraph matching is a challenging problem with a wide range of applications
in database systems, biochemistry, and cognitive science. It involves
determining whether a given query graph is present within a larger target
graph. Traditional graph-matching algorithms provide precise results but scale
poorly to large graph instances because the problem is NP-complete, limiting
their practical applicability. In contrast, recent neural network-based
approximations offer more scalable solutions, but often lack interpretable node
correspondences. To address these limitations, this article presents xNeuSM:
Explainable Neural Subgraph Matching, which introduces Graph Learnable Multi-hop
Attention Networks (GLeMA) that adaptively learn the parameters governing the
attention factor decay for each node across hops rather than relying on fixed
hyperparameters. We provide a theoretical analysis establishing error bounds
for GLeMA's approximation of multi-hop attention as a function of the number of
hops. Additionally, we prove that learning distinct attention decay factors for
each node leads to a correct approximation of multi-hop attention. Empirical
evaluation on real-world datasets shows that xNeuSM achieves substantial
improvements in prediction accuracy of up to 34% compared to approximate
baselines and, notably, at least a seven-fold faster query time than exact
algorithms. The source code of our implementation is available at
https://github.com/martinakaduc/xNeuSM.
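The abstract's central mechanism, multi-hop attention with a learnable per-node decay factor, can be illustrated with a small numerical sketch. The exact GLeMA parameterization is not given here; the code below assumes a PPR-style geometric decay in which node i weights its k-hop attention by (1 - theta_i) * theta_i^k for a learnable factor theta_i in (0, 1). The function name and the truncation scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def multi_hop_attention(A, theta, K):
    """Truncated multi-hop attention with per-node geometric decay.

    A     : (n, n) row-stochastic one-hop attention matrix
    theta : (n,) per-node decay factors in (0, 1) (learnable in a real model)
    K     : number of hops kept in the truncated series
    Returns sum_{k=0..K} diag((1 - theta) * theta**k) @ A^k.
    """
    n = A.shape[0]
    Ak = np.eye(n)                 # A^0
    out = np.zeros((n, n))
    for k in range(K + 1):
        # Scale the k-hop attention of each node i by (1 - theta_i) * theta_i^k.
        out += ((1.0 - theta) * theta**k)[:, None] * Ak
        Ak = Ak @ A                # advance to A^(k+1)
    return out

# Tiny example: 3-node path graph with uniform one-hop attention.
A = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
theta = np.array([0.3, 0.5, 0.7])
M = multi_hop_attention(A, theta, K=10)
# Row i sums to 1 - theta_i**(K+1), i.e. the truncation error of the
# geometric series shrinks exponentially in the number of hops K.
```

With a row-stochastic one-hop matrix, the residual of row i is exactly theta_i^(K+1), which mirrors the kind of hop-dependent error bound the abstract claims for GLeMA's approximation of multi-hop attention.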
Related papers
- Heuristic Learning with Graph Neural Networks: A Unified Framework for Link Prediction [25.87108956561691]
Link prediction is a fundamental task in graph learning, inherently shaped by the topology of the graph.
We propose a unified matrix formulation to accommodate and generalize various weights.
We also propose the Heuristic Learning Graph Neural Network (HL-GNN) to efficiently implement the formulation.
arXiv Detail & Related papers (2024-06-12T08:05:45Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
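NodeFormer's kernelized operator is not reproduced here, but the Gumbel-Softmax trick it builds on is standard: perturb logits with Gumbel noise and apply a temperature-scaled softmax to obtain a differentiable, nearly discrete sample. A minimal NumPy sketch (the function name and temperature value are illustrative):

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, seed=None):
    """Sample a relaxed one-hot vector from unnormalized logits.

    tau controls sharpness: small tau approaches a hard argmax sample,
    large tau approaches a uniform distribution.
    """
    rng = np.random.default_rng(seed)
    # Gumbel(0, 1) noise via the inverse-CDF transform -log(-log(U)).
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + g) / tau
    y = y - y.max()                # subtract max for numerical stability
    e = np.exp(y)
    return e / e.sum()

p = gumbel_softmax(np.array([2.0, 1.0, 0.1]), tau=0.5, seed=0)
# p is a valid probability vector concentrated near the argmax of the
# noisy logits; averaging many samples recovers softmax(logits).
```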
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing the node correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of our MatchExplainer by outperforming all state-of-the-art parametric baselines with significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z)
- Subgraph Matching via Query-Conditioned Subgraph Matching Neural Networks and Bi-Level Tree Search [33.9052190473029]
Subgraph Matching is a core operation in graph database search, biomedical analysis, social group finding, etc.
In this paper, we propose a novel encoder-decoder neural network architecture to dynamically compute the matching information between the query and the target graphs.
Experiments on five large real-world target graphs show that N-BLS can significantly improve the subgraph matching performance.
arXiv Detail & Related papers (2022-07-21T04:47:21Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- Neural Subgraph Matching [57.05893848555512]
NeuroMatch is an accurate, efficient, and robust neural approach to subgraph matching.
NeuroMatch is 100x faster than existing geometric approaches and 18% more accurate than existing subgraph matching methods.
arXiv Detail & Related papers (2020-07-06T22:06:38Z)
- Scaling Graph Neural Networks with Approximate PageRank [64.92311737049054]
We present the PPRGo model which utilizes an efficient approximation of information diffusion in GNNs.
In addition to being faster, PPRGo is inherently scalable, and can be trivially parallelized for large datasets like those found in industry settings.
We show that training PPRGo and predicting labels for all nodes in this graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph.
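PPRGo's own forward-push approximation is not reproduced here; as a reference point for the diffusion it approximates, personalized PageRank can be computed by simple power iteration. The sketch below assumes a row-stochastic transition matrix; the names and the teleport constant are illustrative:

```python
import numpy as np

def personalized_pagerank(P, s, alpha=0.15, iters=50):
    """Power iteration for personalized PageRank.

    P     : (n, n) row-stochastic transition matrix
    s     : (n,) seed (teleport) distribution, sums to 1
    alpha : teleport probability back to the seed nodes
    """
    pi = s.copy()
    for _ in range(iters):
        # Mix one diffusion step with a teleport back to the seed.
        pi = alpha * s + (1.0 - alpha) * (pi @ P)
    return pi

# 3-node path graph, diffusion personalized to node 0.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
s = np.array([1.0, 0.0, 0.0])
pi = personalized_pagerank(P, s)
# pi is a probability distribution biased toward nodes near the seed.
```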
arXiv Detail & Related papers (2020-07-03T09:30:07Z)
- Optimization and Generalization Analysis of Transduction through Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
- GLSearch: Maximum Common Subgraph Detection via Learning to Search [33.9052190473029]
We propose GLSearch, a Graph Neural Network (GNN) based learning to search model.
Our model is built upon the branch-and-bound algorithm, which selects one pair of nodes from the two input graphs to expand at a time.
Our GLSearch can be potentially extended to solve many other problems with constraints on graphs.
arXiv Detail & Related papers (2020-02-08T10:03:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.