Graph Edit Distance with General Costs Using Neural Set Divergence
- URL: http://arxiv.org/abs/2409.17687v2
- Date: Mon, 4 Nov 2024 11:23:46 GMT
- Title: Graph Edit Distance with General Costs Using Neural Set Divergence
- Authors: Eeshaan Jain, Indradyumna Roy, Saswat Meher, Soumen Chakrabarti, Abir De
- Abstract summary: Graph Edit Distance (GED) measures the (dis-)similarity between two graphs, in terms of the minimum-cost edit sequence that transforms one graph to the other.
We propose GRAPHEDX, a neural GED estimator that can work with general costs specified for edit operations.
Experiments on several datasets, under a variety of edit cost settings, show that GRAPHEDX consistently outperforms state-of-the-art methods and heuristics.
- Score: 40.79963604310166
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Edit Distance (GED) measures the (dis-)similarity between two given graphs, in terms of the minimum-cost edit sequence that transforms one graph to the other. However, the exact computation of GED is NP-Hard, which has recently motivated the design of neural methods for GED estimation. These methods, however, do not explicitly account for edit operations with different costs. In response, we propose GRAPHEDX, a neural GED estimator that can work with general costs specified for the four edit operations, viz., edge deletion, edge addition, node deletion and node addition. We first present GED as a quadratic assignment problem (QAP) that incorporates these four costs. Then, we represent each graph as a set of node and edge embeddings and use them to design a family of neural set divergence surrogates. We replace the QAP terms corresponding to each operation with their surrogates. Computing such neural set divergence requires aligning nodes and edges of the two graphs. We learn these alignments using a Gumbel-Sinkhorn permutation generator, additionally ensuring that the node and edge alignments are consistent with each other. Moreover, these alignments are cognizant of both the presence and absence of edges between node-pairs. Experiments on several datasets, under a variety of edit cost settings, show that GRAPHEDX consistently outperforms state-of-the-art methods and heuristics in terms of prediction error.
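The Gumbel-Sinkhorn generator mentioned in the abstract produces differentiable soft permutation (alignment) matrices from a score matrix. A minimal NumPy sketch of the idea, assuming a plain iterative Sinkhorn normalization (function names are illustrative, not from the paper's code):

```python
import numpy as np

def _logsumexp(a, axis):
    # Numerically stable log-sum-exp along one axis.
    m = a.max(axis=axis, keepdims=True)
    return m + np.log(np.exp(a - m).sum(axis=axis, keepdims=True))

def sinkhorn(log_alpha, n_iters=20):
    # Alternate row/column normalization in log space; the result
    # converges to a doubly stochastic matrix (a soft permutation).
    for _ in range(n_iters):
        log_alpha = log_alpha - _logsumexp(log_alpha, axis=1)
        log_alpha = log_alpha - _logsumexp(log_alpha, axis=0)
    return np.exp(log_alpha)

def gumbel_sinkhorn(scores, tau=1.0, n_iters=20, seed=0):
    # Perturb the scores with Gumbel noise, then apply Sinkhorn;
    # a lower temperature tau pushes the output toward a hard permutation.
    rng = np.random.default_rng(seed)
    gumbel = -np.log(-np.log(rng.uniform(size=scores.shape)))
    return sinkhorn((scores + gumbel) / tau, n_iters)

# Example: a soft node alignment between two 4-node graphs from random scores.
scores = np.random.default_rng(1).normal(size=(4, 4))
P = gumbel_sinkhorn(scores, tau=0.5, n_iters=50)
```

In a neural estimator the score matrix would come from learned node embeddings, and the same trick can be applied to edge alignments so that node and edge permutations stay consistent.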
Related papers
- SG-Tailor: Inter-Object Commonsense Relationship Reasoning for Scene Graph Manipulation [73.76691480257851]
We introduce SG-Tailor, an autoregressive model that predicts the conflict-free relationship between any two nodes.
For edge modification, SG-Tailor employs a Cut-And-Stitch strategy to solve the conflicts and globally adjust the graph.
arXiv Detail & Related papers (2025-03-23T09:11:04Z) - GraphBridge: Towards Arbitrary Transfer Learning in GNNs [65.01790632978962]
GraphBridge is a novel framework to enable knowledge transfer across disparate tasks and domains in GNNs.
It allows for the augmentation of any pre-trained GNN with prediction heads and a bridging network that connects the input to the output layer.
Empirical validation, conducted over 16 datasets representative of these scenarios, confirms the framework's capacity for task- and domain-agnostic transfer learning.
arXiv Detail & Related papers (2025-02-26T15:57:51Z) - Graph Similarity Computation via Interpretable Neural Node Alignment [19.34487413883093]
Graph Edit Distance (GED) and Maximum Common Subgraphs (MCS) are commonly used metrics to evaluate graph similarity in practice.
Deep learning models are well-known "black boxes", so the characteristic one-to-one node/subgraph alignment process cannot be observed.
We propose a novel interpretable neural node alignment model without relying on node alignment ground truth information.
arXiv Detail & Related papers (2024-12-13T10:23:27Z) - Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z) - EUGENE: Explainable Unsupervised Approximation of Graph Edit Distance with Generalized Edit Costs [20.3519249026886]
Graph Edit Distance (GED) is preferred for measuring inter-graph distance.
GED computation is hindered by NP-hardness.
Unsupervised methods often face challenges in providing accurate approximations.
arXiv Detail & Related papers (2024-02-08T18:23:05Z) - MATA*: Combining Learnable Node Matching with A* Algorithm for Approximate Graph Edit Distance Computation [12.437507185260577]
The exact Graph Edit Distance (GED) computation is known to be NP-complete.
We present a data-driven hybrid approach MATA* for approximate GED computation based on Graph Neural Networks (GNNs) and A* algorithms.
arXiv Detail & Related papers (2023-11-04T09:33:08Z) - A Quasi-Wasserstein Loss for Learning Graph Neural Networks [32.11372485060082]
We propose a novel Quasi-Wasserstein (QW) loss with the help of the optimal transport defined on graphs.
We show that the proposed QW loss applies to various graph neural networks (GNNs) and helps to improve their performance in node-level classification and regression tasks.
arXiv Detail & Related papers (2023-10-18T07:39:05Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - Graph Transformer GANs for Graph-Constrained House Generation [223.739067413952]
We present a novel graph Transformer generative adversarial network (GTGAN) that learns effective graph node relations in an end-to-end fashion for the challenging graph-constrained house generation task.
arXiv Detail & Related papers (2023-03-14T20:35:45Z) - Graph Neural Network Bandits [89.31889875864599]
We consider the bandit optimization problem with the reward function defined over graph-structured data.
Key challenges in this setting are scaling to large domains, and to graphs with many nodes.
We show that graph neural networks (GNNs) can be used to estimate the reward function.
arXiv Detail & Related papers (2022-07-13T18:12:36Z) - SoftEdge: Regularizing Graph Classification with Random Soft Edges [18.165965620873745]
Graph data augmentation plays a vital role in regularizing Graph Neural Networks (GNNs).
Simple edge and node manipulations can create graphs whose structures are identical or indistinguishable to message-passing GNNs yet carry conflicting labels.
We propose SoftEdge, which assigns random weights to a portion of the edges of a given graph to construct dynamic neighborhoods over the graph.
arXiv Detail & Related papers (2022-04-21T20:12:36Z) - Variational Graph Normalized Auto-Encoders [4.416484585765027]
We show that graph autoencoders (GAEs) and variational graph autoencoders (VGAEs) do not perform well on link prediction when isolated (zero-degree) nodes are involved.
We have found that GAEs/VGAEs make embeddings of isolated nodes close to zero regardless of their content features.
In this paper, we propose a novel Variational Graph Normalized AutoEncoder (VGNAE) that utilizes $L_2$-normalization to derive better embeddings for isolated nodes.
arXiv Detail & Related papers (2021-08-18T08:56:04Z) - Line Graph Neural Networks for Link Prediction [71.00689542259052]
We consider the graph link prediction task, which is a classic graph analytical problem with many real-world applications.
In this formalism, a link prediction problem is converted to a graph classification task.
We propose to seek a radically different and novel path by making use of the line graphs in graph theory.
In particular, each node in a line graph corresponds to a unique edge in the original graph. Therefore, link prediction problems in the original graph can be equivalently solved as a node classification problem in its corresponding line graph, instead of a graph classification task.
arXiv Detail & Related papers (2020-10-20T05:54:31Z)
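The line-graph construction underlying that approach is straightforward to sketch: every edge of the original graph becomes a node, and two such nodes are joined whenever the original edges share an endpoint. A small self-contained helper (hypothetical, not from the paper's code):

```python
from itertools import combinations

def line_graph(edges):
    """Build the line graph of an undirected graph given as an edge list.

    Each original edge becomes a line-graph node; two line-graph nodes
    are adjacent iff the corresponding edges share an endpoint.
    """
    nodes = [tuple(sorted(e)) for e in edges]
    lg_edges = [(e1, e2) for e1, e2 in combinations(nodes, 2)
                if set(e1) & set(e2)]
    return nodes, lg_edges

# Example: a triangle's line graph is again a triangle.
nodes, lg_edges = line_graph([(0, 1), (1, 2), (0, 2)])
```

Under this mapping, predicting a link (an edge) in the original graph becomes classifying a node in the line graph, which is the reformulation the paper exploits.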
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.