More Interpretable Graph Similarity Computation via Maximum Common
Subgraph Inference
- URL: http://arxiv.org/abs/2208.04580v1
- Date: Tue, 9 Aug 2022 07:37:47 GMT
- Title: More Interpretable Graph Similarity Computation via Maximum Common
Subgraph Inference
- Authors: Zixun Lan, Binjie Hong, Ye Ma, Fei Ma
- Abstract summary: Graph similarity measurement, which computes the distance/similarity between two graphs, arises in various graph-related tasks.
Recent learning-based methods lack interpretability, as they directly transform interaction information between two graphs into one hidden vector and then map it to similarity.
This study proposes a more interpretable end-to-end paradigm for graph similarity learning, named Similarity Computation via Maximum Common Subgraph Inference (INFMCS).
- Score: 6.016052201518866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph similarity measurement, which computes the distance/similarity between
two graphs, arises in various graph-related tasks. Recent learning-based
methods lack interpretability, as they directly transform interaction
information between two graphs into one hidden vector and then map it to
similarity. To cope with this problem, this study proposes a more interpretable
end-to-end paradigm for graph similarity learning, named Similarity Computation
via Maximum Common Subgraph Inference (INFMCS). The key insight behind
INFMCS is the strong correlation between the similarity score and the Maximum
Common Subgraph (MCS). We implicitly infer the MCS to obtain the normalized MCS size, with
the supervision information being only the similarity score during training. To
capture more global information, we also stack some vanilla transformer encoder
layers with graph convolution layers and propose a novel permutation-invariant
node Positional Encoding. The entire model is quite simple yet effective.
Comprehensive experiments demonstrate that INFMCS consistently outperforms
state-of-the-art baselines for graph-graph classification and regression tasks.
Ablation experiments verify the effectiveness of the proposed computation
paradigm and other components. Also, visualization and statistics of results
reveal the interpretability of INFMCS.
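The core idea above — deriving a similarity score from the normalized size of the maximum common subgraph — can be illustrated with a brute-force toy sketch. This is a hypothetical illustration only, not the paper's learned inference: INFMCS infers the MCS implicitly inside a neural model, whereas the code below searches exhaustively (feasible only for tiny graphs), and the `2·|MCS| / (|V1| + |V2|)` normalization is one common choice assumed here.

```python
from itertools import combinations, permutations

def mcs_size(n1, edges1, n2, edges2):
    """Node count of the maximum common induced subgraph of two
    undirected graphs, found by exhaustive search over node subsets
    and bijections. Graphs are given as a node count plus a set of
    frozenset edges over nodes 0..n-1. Exponential time: toy graphs only."""
    for k in range(min(n1, n2), 0, -1):
        for s1 in combinations(range(n1), k):
            e1 = {e for e in edges1 if e <= set(s1)}  # edges induced by s1
            for s2 in combinations(range(n2), k):
                e2 = {e for e in edges2 if e <= set(s2)}  # edges induced by s2
                if len(e1) != len(e2):
                    continue
                for perm in permutations(s2):
                    m = dict(zip(s1, perm))  # candidate node bijection s1 -> s2
                    # the mapping is an isomorphism iff it carries e1 onto e2
                    if {frozenset(m[v] for v in e) for e in e1} == e2:
                        return k
    return 0

def mcs_similarity(n1, edges1, n2, edges2):
    """Normalized MCS size in [0, 1]: 2*|MCS| / (|V1| + |V2|).
    An assumed normalization; the paper learns this quantity end-to-end."""
    return 2 * mcs_size(n1, edges1, n2, edges2) / (n1 + n2)
```

For example, a 3-node path and a triangle share at most a single edge as a common induced subgraph, giving a normalized MCS similarity of 2·2/6 ≈ 0.67.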
Related papers
- M3C: A Framework towards Convergent, Flexible, and Unsupervised Learning
of Mixture Graph Matching and Clustering [57.947071423091415]
We introduce Minorize-Maximization Matching and Clustering (M3C), a learning-free algorithm that guarantees theoretical convergence.
We develop UM3C, an unsupervised model that incorporates novel edge-wise affinity learning and pseudo label selection.
Our method outperforms state-of-the-art graph matching and mixture graph matching and clustering approaches in both accuracy and efficiency.
arXiv Detail & Related papers (2023-10-27T19:40:34Z) - Deep Graph-Level Clustering Using Pseudo-Label-Guided Mutual Information
Maximization Network [31.38584638254226]
We study the problem of partitioning a set of graphs into different groups such that the graphs in the same group are similar while the graphs in different groups are dissimilar.
To solve the problem, we propose a novel method called Deep Graph-Level Clustering (DGLC)
Our DGLC achieves graph-level representation learning and graph-level clustering in an end-to-end manner.
arXiv Detail & Related papers (2023-02-05T12:28:08Z) - Beyond spectral gap: The role of the topology in decentralized learning [58.48291921602417]
In data-parallel optimization of machine learning models, workers collaborate to improve their estimates of the model.
This paper aims to paint an accurate picture of sparsely-connected distributed optimization when workers share the same data distribution.
Our theory matches empirical observations in deep learning, and accurately describes the relative merits of different graph topologies.
arXiv Detail & Related papers (2022-06-07T08:19:06Z) - CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph
Similarity Learning [65.1042892570989]
We propose a contrastive graph matching network (CGMN) for self-supervised graph similarity learning.
We employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning.
We transform node representations into graph-level representations via pooling operations for graph similarity computation.
arXiv Detail & Related papers (2022-05-30T13:20:26Z) - GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z) - ACTIVE: Augmentation-Free Graph Contrastive Learning for Partial
Multi-View Clustering [52.491074276133325]
We propose an augmentation-free graph contrastive learning framework to solve the problem of partial multi-view clustering.
The proposed approach elevates instance-level contrastive learning and missing data inference to the cluster-level, effectively mitigating the impact of individual missing data on clustering.
arXiv Detail & Related papers (2022-03-01T02:32:25Z) - Multilevel Graph Matching Networks for Deep Graph Similarity Learning [79.3213351477689]
We propose a multi-level graph matching network (MGMN) framework for computing the graph similarity between any pair of graph-structured objects.
To compensate for the lack of standard benchmark datasets, we have created and collected a set of datasets for both the graph-graph classification and graph-graph regression tasks.
Comprehensive experiments demonstrate that MGMN consistently outperforms state-of-the-art baseline models on both the graph-graph classification and graph-graph regression tasks.
arXiv Detail & Related papers (2020-07-08T19:48:19Z) - Graph Partitioning and Graph Neural Network based Hierarchical Graph
Matching for Graph Similarity Computation [5.710312846460821]
Graph similarity aims to predict a similarity score between one pair of graphs to facilitate downstream applications.
We propose a graph partitioning and graph neural network-based model, called PSimGNN, to effectively resolve this issue.
PSimGNN outperforms state-of-the-art methods in graph similarity computation tasks using approximate Graph Edit Distance (GED) as the graph similarity metric.
arXiv Detail & Related papers (2020-05-16T15:01:58Z)
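Several of the papers above, including PSimGNN, use approximate Graph Edit Distance (GED) as the ground-truth similarity metric. A common way to turn a raw GED into a bounded similarity score is to normalize by the average graph size and take an exponential; the sketch below uses that convention, which is an assumption here rather than any single paper's exact recipe.

```python
import math

def ged_to_similarity(ged, n1, n2):
    """Map a graph edit distance to a similarity score in (0, 1].
    nGED = GED / ((|V1| + |V2|) / 2), sim = exp(-nGED).
    Identical graphs (GED = 0) score 1.0; larger distances decay toward 0.
    A commonly used normalization in GED-based benchmarks, assumed here."""
    nged = ged / ((n1 + n2) / 2)
    return math.exp(-nged)
```

Under this mapping, a GED of 5 between two 5-node graphs yields exp(-1) ≈ 0.368.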
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.