GREED: A Neural Framework for Learning Graph Distance Functions
- URL: http://arxiv.org/abs/2112.13143v3
- Date: Fri, 21 Apr 2023 22:52:25 GMT
- Title: GREED: A Neural Framework for Learning Graph Distance Functions
- Authors: Rishabh Ranjan, Siddharth Grover, Sourav Medya, Venkatesan
Chakaravarthy, Yogish Sabharwal, Sayan Ranu
- Abstract summary: We design a novel siamese graph neural network called GREED, which learns GED and SED in a property-preserving manner.
Through extensive experiments on 10 real graph datasets containing up to 7 million edges, we establish that GREED is not only more accurate than the state of the art, but also up to 3 orders of magnitude faster.
- Score: 8.323580169763726
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Among various distance functions for graphs, graph and subgraph edit
distances (GED and SED respectively) are two of the most popular and expressive
measures. Unfortunately, exact computations for both are NP-hard. To overcome
this computational bottleneck, neural approaches to learn and predict edit
distance in polynomial time have received much interest. While considerable
progress has been made, there exist limitations that need to be addressed.
First, the efficacy of an approximate distance function lies not only in its
approximation accuracy, but also in the preservation of its properties. To
elaborate, although GED is a metric, its neural approximations do not provide
such a guarantee. This prohibits their usage in higher order tasks that rely on
metric distance functions, such as clustering or indexing. Second, several
existing frameworks for GED do not extend to SED due to SED being asymmetric.
In this work, we design a novel siamese graph neural network called GREED,
which through a carefully crafted inductive bias, learns GED and SED in a
property-preserving manner. Through extensive experiments across 10 real graph
datasets containing up to 7 million edges, we establish that GREED is not only
more accurate than the state of the art, but also up to 3 orders of magnitude
faster. Even more significantly, due to preserving the triangle inequality, the
generated embeddings are indexable and consequently, even in a CPU-only
environment, GREED is up to 50 times faster than GPU-powered baselines for
graph / subgraph retrieval.
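The property-preservation idea in the abstract can be illustrated with a minimal NumPy sketch. This is not GREED's actual architecture: the sum-pooled encoder and the two surrogate distances below are illustrative assumptions. The point is structural: if a shared (siamese) encoder maps each graph to a vector, then a norm of the embedding difference is symmetric and satisfies the triangle inequality by construction (suiting GED), while a one-sided hinge distance remains asymmetric (suiting SED).

```python
import numpy as np

def pooled_embedding(node_features):
    """Toy stand-in for a shared-weight (siamese) GNN encoder:
    here we simply sum node features into one graph-level vector."""
    return np.asarray(node_features, dtype=float).sum(axis=0)

def ged_surrogate(h1, h2):
    """Symmetric surrogate for GED: an L1 norm of the embedding difference.
    Any norm is symmetric and obeys the triangle inequality by construction."""
    return np.linalg.norm(h1 - h2, ord=1)

def sed_surrogate(h_query, h_target):
    """Asymmetric surrogate for SED: penalize only components where the query
    exceeds the target, so sed(q, t) can be 0 while sed(t, q) > 0."""
    return np.maximum(h_query - h_target, 0.0).sum()

h_small = pooled_embedding([[1, 0], [0, 1]])          # 2-node toy graph
h_large = pooled_embedding([[1, 0], [0, 1], [1, 1]])  # superset-like graph

assert ged_surrogate(h_small, h_large) == ged_surrogate(h_large, h_small)
assert sed_surrogate(h_small, h_large) == 0.0   # "subgraph" direction
assert sed_surrogate(h_large, h_small) > 0.0    # reverse direction penalized
```

Because such surrogates inherit metric (or asymmetric-triangle) properties from their functional form rather than from training, the resulting embeddings can be indexed for retrieval, which is the claim the abstract makes.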
Related papers
- SCARA: Scalable Graph Neural Networks with Feature-Oriented Optimization [23.609017952951454]
We propose SCARA, a scalable Graph Neural Network (GNN) with feature-oriented optimization for graph computation.
SCARA efficiently computes graph embedding from node features, and further selects and reuses feature results to reduce overhead.
SCARA completes precomputation on the largest available billion-scale GNN dataset, Papers100M (111M nodes, 1.6B edges), in 100 seconds.
arXiv Detail & Related papers (2022-07-19T10:32:11Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity in modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors to Sequences [55.329402218608365]
We propose Neighbor2Seq to transform the hierarchical neighborhood of each node into a sequence.
We evaluate our method on a massive graph with more than 111 million nodes and 1.6 billion edges.
Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs.
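The neighborhood-to-sequence transform can be sketched as a hop-by-hop flattening. This is a simplified structural sketch only: the actual Neighbor2Seq also aggregates per-hop features, and `neighbor_to_seq` is a hypothetical helper name.

```python
def neighbor_to_seq(adj, root, depth):
    """Flatten a node's hierarchical neighborhood (hop 0, hop 1, ...) into
    one ordered sequence that a sequence model could consume.
    `adj` is a dict mapping each node to its adjacency list."""
    seq, seen, frontier = [], {root}, [root]
    for _ in range(depth + 1):
        seq.extend(sorted(frontier))  # emit this hop's nodes in stable order
        nxt = {v for u in frontier for v in adj.get(u, []) if v not in seen}
        seen |= nxt
        frontier = sorted(nxt)
        if not frontier:
            break
    return seq

adj = {0: [1, 2], 1: [0, 3], 2: [0], 3: [1]}
print(neighbor_to_seq(adj, 0, 2))  # [0, 1, 2, 3]
```

Once neighborhoods are sequences, the graph workload decomposes into independent per-node sequence computations, which is what makes the approach amenable to massive graphs.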
arXiv Detail & Related papers (2022-02-07T16:38:36Z)
- Source Free Unsupervised Graph Domain Adaptation [60.901775859601685]
Unsupervised Graph Domain Adaptation (UGDA) shows its practical value of reducing the labeling cost for node classification.
Most existing UGDA methods heavily rely on the labeled graph in the source domain.
In some real-world scenarios, the source graph is inaccessible because of privacy issues.
We propose a novel scenario named Source Free Unsupervised Graph Domain Adaptation (SFUGDA).
arXiv Detail & Related papers (2021-12-02T03:18:18Z)
- Scalable Graph Neural Networks via Bidirectional Propagation [89.70835710988395]
Graph Neural Networks (GNNs) are an emerging field for learning on non-Euclidean data.
This paper presents GBP, a scalable GNN that utilizes a localized bidirectional propagation process from both the feature vectors and the training/testing nodes.
An empirical study demonstrates that GBP achieves state-of-the-art performance with significantly less training/testing time.
arXiv Detail & Related papers (2020-10-29T08:55:33Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning methods.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Pseudoinverse Graph Convolutional Networks: Fast Filters Tailored for Large Eigengaps of Dense Graphs and Hypergraphs [0.0]
Graph Convolutional Networks (GCNs) have proven to be successful tools for semi-supervised classification on graph-based datasets.
We propose a new GCN variant whose three-part filter space is targeted at dense graphs.
arXiv Detail & Related papers (2020-08-03T08:48:41Z)
- Scaling Graph Neural Networks with Approximate PageRank [64.92311737049054]
We present the PPRGo model which utilizes an efficient approximation of information diffusion in GNNs.
In addition to being faster, PPRGo is inherently scalable, and can be trivially parallelized for large datasets like those found in industry settings.
We show that training PPRGo and predicting labels for all nodes in this graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph.
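The diffusion that PPRGo approximates is personalized PageRank. A minimal power-iteration sketch is shown below; note the paper itself uses a faster push-based approximation, and `approx_ppr` is an illustrative helper, not PPRGo's API.

```python
import numpy as np

def approx_ppr(adj_matrix, source, alpha=0.15, iters=20):
    """Power-iteration approximation of personalized PageRank from `source`:
    pi <- alpha * e_source + (1 - alpha) * pi @ P, with P row-stochastic."""
    A = np.asarray(adj_matrix, dtype=float)
    P = A / A.sum(axis=1, keepdims=True)   # row-normalized transition matrix
    e = np.zeros(len(A))
    e[source] = 1.0                        # teleport vector for this source
    pi = e.copy()
    for _ in range(iters):
        pi = alpha * e + (1 - alpha) * pi @ P
    return pi
```

Since the teleport step restores a fixed fraction of mass to the source each iteration, the result stays a probability distribution concentrated near the source, which is the locality PPRGo exploits for scalability.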
arXiv Detail & Related papers (2020-07-03T09:30:07Z)
- Isometric Graph Neural Networks [5.306334746787569]
We propose a technique to learn Isometric Graph Neural Networks (IGNN).
IGNN requires changing the input representation space and loss function to enable any GNN algorithm to generate representations that reflect distances between nodes.
We observe a consistent and substantial improvement, as high as 400%, in Kendall's Tau (KT).
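The loss-function change could take the form of a stress-style objective that penalizes the gap between embedding distances and graph distances. This is a hypothetical sketch in the spirit of IGNN, not its exact formulation.

```python
import numpy as np

def isometric_loss(embeddings, graph_dists):
    """Sum of squared gaps between pairwise embedding distances and the
    target graph distances; zero iff the embedding is an exact isometry
    on the given pairs."""
    n = len(embeddings)
    loss = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            d_emb = np.linalg.norm(embeddings[i] - embeddings[j])
            loss += (d_emb - graph_dists[i][j]) ** 2
    return loss
```

Because any GNN's output can be plugged in as `embeddings`, such an objective is architecture-agnostic, matching the claim that any GNN algorithm can be adapted.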
arXiv Detail & Related papers (2020-06-16T22:51:13Z)
- Graph Neural Distance Metric Learning with Graph-Bert [10.879701971582502]
We introduce a new graph neural network based distance metric learning approach, namely GB-DISTANCE (GRAPH-BERT based Neural Distance).
GB-DISTANCE can learn graph representations effectively based on a pre-trained GRAPH-BERT model.
In addition, GB-DISTANCE can also maintain the basic properties of a distance metric.
arXiv Detail & Related papers (2020-02-09T18:58:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.