MATA*: Combining Learnable Node Matching with A* Algorithm for
Approximate Graph Edit Distance Computation
- URL: http://arxiv.org/abs/2311.02356v1
- Date: Sat, 4 Nov 2023 09:33:08 GMT
- Title: MATA*: Combining Learnable Node Matching with A* Algorithm for
Approximate Graph Edit Distance Computation
- Authors: Junfeng Liu, Min Zhou, Shuai Ma, Lujia Pan
- Abstract summary: The exact Graph Edit Distance (GED) computation is known to be NP-complete.
We present a data-driven hybrid approach MATA* for approximate GED computation based on Graph Neural Networks (GNNs) and A* algorithms.
- Score: 12.437507185260577
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Edit Distance (GED) is a general and domain-agnostic metric to measure
graph similarity, widely used in graph search or retrieving tasks. However, the
exact GED computation is known to be NP-complete. For instance, the widely used
A* algorithms explore the entire search space to find the optimal solution,
which inevitably suffers from scalability issues. Learning-based methods apply
graph representation techniques to learn the GED by formulating a regression
task, which cannot recover the edit path and leads to inaccurate GED
approximation (i.e., the predicted GED is smaller than the exact one). To this
end, in this work,
we present a data-driven hybrid approach MATA* for approximate GED computation
based on Graph Neural Networks (GNNs) and A* algorithms, which models from the
perspective of learning to match nodes instead of directly regressing GED.
Specifically, aware of the structure-dominant property of GED computation
(i.e., node and edge insertions/deletions dominate the edit operations), a
structure-enhanced GNN is first designed to jointly learn local and high-order
structural information, yielding node embeddings for node matching. Second,
top-k candidate nodes are produced via a differentiable top-k operation to
enable training for node matching, adhering to another property of GED, i.e.,
the existence of multiple optimal node matchings. Third, benefiting from the
candidate nodes, MATA* searches only along the promising directions and reaches
a solution efficiently. Finally,
extensive experiments show the superiority of MATA* as it significantly
outperforms the combinatorial search-based, learning-based and hybrid methods
and scales well to large-size graphs.
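The abstract describes pruning A*'s search with learned candidate node matches. As a rough illustration of the underlying search (not the MATA* implementation itself), the sketch below runs a best-first A* search with heuristic h = 0 over partial node mappings, with unit cost per edit operation on small unlabeled undirected graphs; the optional `candidates` argument stands in for the learned top-k pruning. All names and the graph representation are illustrative assumptions.

```python
import heapq
import itertools

def ged_astar(g1, g2, candidates=None):
    """Best-first (A* with h = 0) search for the GED between two small
    undirected, unlabeled graphs given as {node: set_of_neighbors} dicts,
    with unit cost per edit operation. `candidates` optionally restricts
    each g1 node to a subset of g2 nodes, a stand-in for MATA*'s learned
    top-k pruning; None means the full (exact) search."""
    n1, n2 = sorted(g1), sorted(g2)

    def cost_of(mapping):
        # mapping pairs n1[:len(mapping)] with g2 nodes (None = deletion)
        cost = sum(1 for m in mapping if m is None)            # node deletions
        pairs = list(zip(n1, mapping))
        for (u, mu), (v, mv) in itertools.combinations(pairs, 2):
            e1 = v in g1[u]
            e2 = mu is not None and mv is not None and mv in g2[mu]
            cost += e1 != e2                                   # edge edit
        if len(mapping) == len(n1):                            # complete state
            used = {m for m in mapping if m is not None}
            unused = [w for w in n2 if w not in used]
            cost += len(unused)                                # node insertions
            # every g2 edge touching an unmatched node must be inserted
            cost += len({frozenset((w, x)) for w in unused for x in g2[w]})
        return cost

    tick = itertools.count()                                   # heap tie-break
    heap = [(cost_of(()), next(tick), ())]
    while heap:
        cost, _, mapping = heapq.heappop(heap)
        if len(mapping) == len(n1):
            return cost
        u = n1[len(mapping)]
        options = list(candidates[u]) if candidates else n2
        for w in options + [None]:
            if w is not None and w in mapping:
                continue                                       # already used
            new = mapping + (w,)
            heapq.heappush(heap, (cost_of(new), next(tick), new))
```

With `candidates` supplied, the search expands far fewer states but may return only an approximate GED, mirroring the exact-vs-approximate trade-off the abstract describes.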
Related papers
- Computing Approximate Graph Edit Distance via Optimal Transport [16.327678462502668]
Given a graph pair $(G_1, G_2)$, graph edit distance (GED) is defined as the minimum number of edit operations converting $G_1$ to $G_2$.
GEDIOT is based on inverse optimal transport and leverages a learnable Sinkhorn algorithm to generate the coupling matrix.
GEDGW models GED computation as a linear combination of optimal transport and its variant, the Gromov-Wasserstein discrepancy, for node and edge operations.
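The Sinkhorn step mentioned here can be sketched generically: alternating row and column normalization of a positive score matrix yields an approximately doubly stochastic coupling matrix. This is a plain NumPy illustration of the classic iteration, not the GEDIOT implementation; the function name is an assumption.

```python
import numpy as np

def sinkhorn_coupling(scores, iters=100):
    """Turn a node-similarity score matrix into an (approximately) doubly
    stochastic coupling matrix by alternating row/column normalization --
    the basic Sinkhorn iteration behind learnable matching layers."""
    K = np.exp(scores)                       # positive kernel matrix
    for _ in range(iters):
        K /= K.sum(axis=1, keepdims=True)    # rows sum to 1
        K /= K.sum(axis=0, keepdims=True)    # columns sum to 1
    return K
```

Each entry of the result can be read as a soft node-matching weight, which is what makes the coupling matrix usable for deriving edit operations.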
arXiv Detail & Related papers (2024-12-25T09:55:14Z)
- Graph Similarity Computation via Interpretable Neural Node Alignment [19.34487413883093]
Graph Edit Distance (GED) and Maximum Common Subgraphs (MCS) are commonly used metrics to evaluate graph similarity in practice.
Deep learning models are well-known "black boxes", so the characteristic one-to-one node/subgraph alignment process cannot be observed.
We propose a novel interpretable neural node alignment model without relying on node alignment ground truth information.
arXiv Detail & Related papers (2024-12-13T10:23:27Z)
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Efficient Graph Similarity Computation with Alignment Regularization [7.143879014059894]
Graph similarity computation (GSC) is a learning-based prediction task using Graph Neural Networks (GNNs).
We show that high-quality learning can be attained with a simple yet powerful regularization technique, which we call Alignment Regularization (AReg).
In the inference stage, the graph-level representations learned by the GNN encoder are directly used to compute the similarity score without using AReg again to speed up inference.
arXiv Detail & Related papers (2024-06-21T07:37:28Z)
- Ensemble Quadratic Assignment Network for Graph Matching [52.20001802006391]
Graph matching is a commonly used technique in computer vision and pattern recognition.
Recent data-driven approaches have improved the graph matching accuracy remarkably.
We propose a graph neural network (GNN) based approach to combine the advantages of data-driven and traditional methods.
arXiv Detail & Related papers (2024-03-11T06:34:05Z)
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
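NodeFormer's kernelized operator builds on the standard Gumbel-Softmax trick for differentiable sampling from a categorical distribution. The following is a plain NumPy sketch of that basic trick only, not the paper's kernelized version; the function name and defaults are assumptions.

```python
import numpy as np

def gumbel_softmax(logits, tau=0.5, rng=None):
    """Soft, differentiable sample from a categorical distribution via the
    Gumbel-Softmax trick: perturb logits with Gumbel noise, then apply a
    temperature-scaled softmax."""
    rng = rng or np.random.default_rng(0)
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = np.exp((logits + g) / tau)
    return y / y.sum()                                    # soft one-hot vector
```

As the temperature `tau` decreases, the output approaches a hard one-hot sample while remaining differentiable with respect to the logits.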
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- A Simple and Scalable Graph Neural Network for Large Directed Graphs [11.792826520370774]
We investigate various combinations of node representations and edge direction awareness within an input graph.
In response, we propose a simple yet holistic classification method A2DUG.
We demonstrate that A2DUG stably performs well on various datasets and improves the accuracy by up to 11.29 compared with the state-of-the-art methods.
arXiv Detail & Related papers (2023-06-14T06:24:58Z)
- Graph Entropy Guided Node Embedding Dimension Selection for Graph Neural Networks [74.26734952400925]
We propose a novel Minimum Graph Entropy (MinGE) algorithm for Node Embedding Dimension Selection (NEDS)
MinGE considers both feature entropy and structure entropy on graphs, which are carefully designed according to the characteristics of the rich information in them.
Experiments with popular Graph Neural Networks (GNNs) on benchmark datasets demonstrate the effectiveness and generalizability of our proposed MinGE.
arXiv Detail & Related papers (2021-05-07T11:40:29Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of structural features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.