GED-Consistent Disentanglement of Aligned and Unaligned Substructures for Graph Similarity Learning
- URL: http://arxiv.org/abs/2511.19837v1
- Date: Tue, 25 Nov 2025 02:07:30 GMT
- Title: GED-Consistent Disentanglement of Aligned and Unaligned Substructures for Graph Similarity Learning
- Authors: Zhentao Zhan, Xiaoliang Xu, Jingjing Wang, Junmei Wang
- Abstract summary: We propose GCGSim, a GED-consistent graph similarity learning framework centering on graph-level matching and substructure-level edit costs. Our experiments on four benchmark datasets show that GCGSim achieves state-of-the-art performance.
- Score: 8.811956084670328
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graph Similarity Computation (GSC) is a fundamental graph-related task for which Graph Edit Distance (GED) serves as a prevalent metric. GED is determined by an optimal alignment between a pair of graphs that partitions each into aligned (zero-cost) and unaligned (cost-incurring) substructures. Due to the NP-hard nature of exact GED computation, GED approximations based on Graph Neural Networks (GNNs) have emerged. Existing GNN-based GED approaches typically learn node embeddings for each graph and then aggregate pairwise node similarities to estimate the final similarity. Despite their effectiveness, we identify a mismatch between this prevalent node-centric matching paradigm and the core principles of GED. This discrepancy leads to two critical limitations: (1) a failure to capture the global structural correspondence required for optimal alignment, and (2) a misattribution of edit costs driven by spurious node-level signals. To address these limitations, we propose GCGSim, a GED-consistent graph similarity learning framework centered on graph-level matching and substructure-level edit costs. Specifically, we make three core technical contributions. Extensive experiments on four benchmark datasets show that GCGSim achieves state-of-the-art performance. Our comprehensive analyses further validate that the framework effectively learns disentangled and semantically meaningful substructure representations.
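To make the abstract's framing concrete, the sketch below computes exact GED for tiny unlabeled graphs by enumerating all node alignments (which is why exact GED is intractable at scale: the search is factorial in the node count). The unit-cost model and the function name `ged` are illustrative choices, not code from the paper; the aligned part of the optimal mapping contributes zero cost, and every unaligned node or edge contributes one edit, mirroring the aligned/unaligned decomposition described above.

```python
from itertools import permutations

def ged(nodes1, edges1, nodes2, edges2):
    """Exact GED for tiny unlabeled graphs, with unit cost per edit."""
    # Pad the smaller node set with None "dummy" nodes so every alignment
    # is a bijection; pairing a real node with a dummy is a node edit.
    n = max(len(nodes1), len(nodes2))
    v1 = list(nodes1) + [None] * (n - len(nodes1))
    v2 = list(nodes2) + [None] * (n - len(nodes2))
    e1 = [tuple(e) for e in edges1]
    e2 = {frozenset(e) for e in edges2}
    best = float("inf")
    for perm in permutations(v2):
        pairs = list(zip(v1, perm))
        m = {a: b for a, b in pairs if a is not None}
        # node edits: a real node matched to a dummy costs 1
        cost = sum(1 for a, b in pairs if (a is None) != (b is None))
        # edges of G1 whose endpoints both survive, in G2's node ids
        mapped = {frozenset((m[a], m[b])) for a, b in e1
                  if m[a] is not None and m[b] is not None}
        # edges incident to a deleted node must also be deleted
        cost += sum(1 for a, b in e1 if m[a] is None or m[b] is None)
        # remaining mismatches: edge deletions plus edge insertions
        cost += len(mapped ^ e2)
        best = min(best, cost)
    return best
```

For example, a triangle versus a 3-node path gives `ged([0, 1, 2], [(0, 1), (1, 2), (0, 2)], [0, 1, 2], [(0, 1), (1, 2)]) == 1`: the path is the aligned (zero-cost) substructure, and the single extra edge is the unaligned, cost-incurring one.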
Related papers
- Graph2Region: Efficient Graph Similarity Learning with Structure and Scale Restoration [23.884990933878726]
Graph similarity is critical in graph-related tasks such as graph retrieval, where metrics like maximum common subgraph (MCS) and graph edit distance (GED) are commonly used. Recent neural network-based approaches approximate the similarity score in embedding spaces to alleviate the computational burden. We propose a novel geometric-based graph embedding method called Graph2Region (G2R).
arXiv Detail & Related papers (2025-10-01T01:10:18Z) - Towards Unsupervised Training of Matching-based Graph Edit Distance Solver via Preference-aware GAN [20.871710686180357]
Graph Edit Distance (GED) is a fundamental graph similarity metric widely used in various applications. A recent state-of-the-art hybrid GED solver has shown promising performance. We propose GEDRanker, a novel unsupervised GAN-based framework for GED computation.
arXiv Detail & Related papers (2025-05-16T03:45:31Z) - Graph Similarity Computation via Interpretable Neural Node Alignment [19.34487413883093]
Graph Edit Distance (GED) and Maximum Common Subgraph (MCS) are commonly used metrics to evaluate graph similarity in practice. Deep learning models are well-known "black boxes", so the characteristic one-to-one node/subgraph alignment process cannot be inspected. We propose a novel interpretable neural node alignment model that does not rely on node alignment ground-truth information.
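One standard way to make neural node alignment inspectable, and a plausible reading of the blurb above, is to expose a soft alignment matrix: row-wise softmax over pairwise embedding scores, so each node of one graph gets a probability distribution over candidate matches in the other. This is a generic illustrative sketch, not the cited paper's exact model.

```python
import math

def soft_alignment(h1, h2):
    """h1: node embeddings of G1, h2: of G2 (lists of equal-length vectors).

    Returns a len(h1) x len(h2) matrix whose rows are softmax distributions:
    entry [i][j] is the soft probability that node i of G1 matches node j of G2.
    """
    align = []
    for u in h1:
        # dot-product match scores against every node of G2
        scores = [sum(a * b for a, b in zip(u, v)) for v in h2]
        # numerically stable row-wise softmax
        mx = max(scores)
        exps = [math.exp(s - mx) for s in scores]
        z = sum(exps)
        align.append([e / z for e in exps])
    return align
```

Each row sums to 1, and the argmax of a row gives an interpretable one-to-one match candidate, which is exactly the alignment signal a black-box similarity score hides.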
arXiv Detail & Related papers (2024-12-13T10:23:27Z) - Efficient Graph Similarity Computation with Alignment Regularization [7.143879014059894]
Graph similarity computation (GSC) is a learning-based prediction task using Graph Neural Networks (GNNs).
We show that high-quality learning can be attained with a simple yet powerful regularization technique, which we call Alignment Regularization (AReg).
In the inference stage, the graph-level representations learned by the GNN encoder are directly used to compute the similarity score without using AReg again to speed up inference.
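The inference shortcut described above (scoring directly from graph-level representations, with no per-pair matching) can be sketched as follows. The mean-pooling and the `exp(-distance)` scoring convention are common GSC choices assumed here for illustration, not necessarily the AReg paper's exact formulation.

```python
import math

def graph_vector(node_embs):
    # mean-pool node embeddings into a single graph-level vector
    d = len(node_embs[0])
    return [sum(h[i] for h in node_embs) / len(node_embs) for i in range(d)]

def score(g1_embs, g2_embs):
    # exp(-||h_G1 - h_G2||_2) maps Euclidean distance between the two
    # graph vectors into a similarity in (0, 1]; identical graphs score 1
    h1, h2 = graph_vector(g1_embs), graph_vector(g2_embs)
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))
    return math.exp(-dist)
```

Because scoring touches only two pooled vectors, inference cost is independent of the pairwise node-comparison step that the regularizer replaces during training.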
arXiv Detail & Related papers (2024-06-21T07:37:28Z) - Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z) - MATA*: Combining Learnable Node Matching with A* Algorithm for Approximate Graph Edit Distance Computation [12.437507185260577]
The exact Graph Edit Distance (GED) computation is known to be NP-complete.
We present a data-driven hybrid approach MATA* for approximate GED computation based on Graph Neural Networks (GNNs) and A* algorithms.
arXiv Detail & Related papers (2023-11-04T09:33:08Z) - Graph Spectral Embedding using the Geodesic Betweeness Centrality [76.27138343125985]
We introduce the Graph Sylvester Embedding (GSE), an unsupervised graph representation of local similarity, connectivity, and global structure.
GSE uses the solution of the Sylvester equation to capture both network structure and neighborhood proximity in a single representation.
arXiv Detail & Related papers (2022-05-07T04:11:23Z) - A Variational Edge Partition Model for Supervised Graph Representation Learning [51.30365677476971]
This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities.
We partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs.
A variational inference framework is proposed to jointly learn a GNN based inference network that partitions the edges into different communities, these community-specific GNNs, and a GNN based predictor that combines community-specific GNNs for the end classification task.
arXiv Detail & Related papers (2022-02-07T14:37:50Z) - Training Free Graph Neural Networks for Graph Matching [103.45755859119035]
TFGM is a framework to boost the performance of GNN-based graph matching without training.
Applying TFGM on various GNNs shows promising improvements over baselines.
arXiv Detail & Related papers (2022-01-14T09:04:46Z) - Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning methods.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - Inverse Graph Identification: Can We Identify Node Labels Given Graph Labels? [89.13567439679709]
Graph Identification (GI) has long been researched in graph learning and is essential in certain applications.
This paper defines a novel problem dubbed Inverse Graph Identification (IGI).
We propose a simple yet effective method that performs node-level message passing using a Graph Attention Network (GAT) under the protocol of GI.
arXiv Detail & Related papers (2020-07-12T12:06:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.