Learning Scalable Structural Representations for Link Prediction with
Bloom Signatures
- URL: http://arxiv.org/abs/2312.16784v1
- Date: Thu, 28 Dec 2023 02:21:40 GMT
- Title: Learning Scalable Structural Representations for Link Prediction with
Bloom Signatures
- Authors: Tianyi Zhang, Haoteng Yin, Rongzhe Wei, Pan Li, Anshumali Shrivastava
- Abstract summary: Graph neural networks (GNNs) are known to perform sub-optimally on link prediction tasks.
We propose to learn structural link representations by augmenting the message-passing framework of GNNs with Bloom signatures.
Our proposed model achieves comparable or better performance than existing edge-wise GNN models.
- Score: 39.63963077346406
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graph neural networks (GNNs) have shown great potential in learning on
graphs, but they are known to perform sub-optimally on link prediction tasks.
Existing GNNs are primarily designed to learn node-wise representations and
usually fail to capture pairwise relations between target nodes, which proves
to be crucial for link prediction. Recent works resort to learning more
expressive edge-wise representations by enhancing vanilla GNNs with structural
features such as labeling tricks and link prediction heuristics, but they
suffer from high computational overhead and limited scalability. To tackle this
issue, we propose to learn structural link representations by augmenting the
message-passing framework of GNNs with Bloom signatures. Bloom signatures are
hashing-based compact encodings of node neighborhoods, which can be efficiently
merged to recover various types of edge-wise structural features. We further
show that any type of neighborhood overlap-based heuristic can be estimated by
a neural network that takes Bloom signatures as input. GNNs with Bloom
signatures are provably more expressive than vanilla GNNs and also more
scalable than existing edge-wise models. Experimental results on five standard
link prediction benchmarks show that our proposed model achieves comparable or
better performance than existing edge-wise GNN models while being 3-200
$\times$ faster and more memory-efficient for online inference.
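The merging mechanism is simple enough to illustrate directly. Below is a minimal, hedged sketch in Python of the Bloom-signature idea: each endpoint's neighbor set is hashed into a fixed-size bit array, and two signatures are merged to estimate a neighborhood-overlap heuristic (here, common neighbors) with the standard Bloom-filter cardinality formula. All constants and helper names are illustrative, and the closed-form estimator stands in for the learned neural estimator the abstract describes.

```python
import hashlib
import math

M = 256  # signature length in bits (illustrative, not the paper's setting)
K = 3    # number of hash functions (illustrative)

def _bit(item: int, seed: int) -> int:
    # Hash (seed, item) into a bit position in [0, M).
    digest = hashlib.blake2b(f"{seed}:{item}".encode(), digest_size=8).digest()
    return int.from_bytes(digest, "big") % M

def bloom_signature(neighbors) -> int:
    """Encode a node's neighbor set as an M-bit integer signature."""
    sig = 0
    for v in neighbors:
        for seed in range(K):
            sig |= 1 << _bit(v, seed)
    return sig

def set_size(sig: int) -> float:
    # Standard Bloom-filter cardinality estimate: n ~ -(M/K) * ln(1 - t/M),
    # where t is the number of set bits in the signature.
    t = bin(sig).count("1")
    return -(M / K) * math.log(1 - t / M)

def est_common_neighbors(sig_u: int, sig_v: int) -> float:
    # Inclusion-exclusion: |A ∩ B| = |A| + |B| - |A ∪ B|, where the union
    # signature is obtained by simply OR-ing the two bit arrays.
    return set_size(sig_u) + set_size(sig_v) - set_size(sig_u | sig_v)

# Toy neighborhoods of the two endpoints of a candidate link.
N_u, N_v = {1, 2, 3, 4, 5}, {4, 5, 6, 7}
print(est_common_neighbors(bloom_signature(N_u), bloom_signature(N_v)))  # ~2.0
```

OR-merging two signatures encodes the union of the neighborhoods, so inclusion-exclusion recovers the intersection size; the paper's stronger claim is that a neural network taking the raw signatures as input can estimate any neighborhood overlap-based heuristic.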
Related papers
- Graph Neural Networks at a Fraction [1.8175282137722093]
This paper introduces Quaternion Message Passing Neural Networks (QMPNNs), a framework that leverages quaternion space to compute node representations (see the Hamilton-product sketch after this list).
We present a novel perspective on Graph Lottery Tickets, redefining their applicability within the context of GNNs and QMPNNs.
arXiv Detail & Related papers (2025-02-10T03:55:09Z)
- PROXI: Challenging the GNNs for Link Prediction [3.8233569758620063]
We introduce PROXI, which leverages proximity information of node pairs in both graph and attribute spaces.
Standard machine learning (ML) models perform competitively, even outperforming cutting-edge GNN models.
We show that augmenting traditional GNNs with PROXI significantly boosts their link prediction performance (see the proximity-feature sketch after this list).
arXiv Detail & Related papers (2024-10-02T17:57:38Z)
- Efficient Link Prediction via GNN Layers Induced by Negative Sampling [86.87385758192566]
Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories.
We propose a novel GNN architecture whereby the forward pass explicitly depends on both positive (as is typical) and negative (unique to our approach) edges.
This is achieved by recasting the embeddings themselves as minimizers of a forward-pass-specific energy function that favors separation of positive and negative samples (see the energy-descent sketch after this list).
arXiv Detail & Related papers (2023-10-14T07:02:54Z)
- Neo-GNNs: Neighborhood Overlap-aware Graph Neural Networks for Link Prediction [23.545059901853815]
Graph Neural Networks (GNNs) have been widely applied to various fields for learning over graph-structured data.
We propose Neighborhood Overlap-aware Graph Neural Networks (Neo-GNNs) that learn useful structural features from the adjacency matrix and estimate overlapped neighborhoods for link prediction (see the overlap-score sketch after this list).
arXiv Detail & Related papers (2022-06-09T01:43:49Z)
- Learnt Sparsification for Interpretable Graph Neural Networks [5.527927312898106]
We propose a novel method called Kedge for explicitly sparsifying the underlying graph by removing unnecessary neighbors.
Kedge learns edge masks in a modular fashion and can be trained with any GNN, allowing for gradient-based optimization (see the edge-mask sketch after this list).
We show that Kedge effectively counters the over-smoothing phenomena in deep GNNs by maintaining good task performance with increasing GNN layers.
arXiv Detail & Related papers (2021-06-23T16:04:25Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network (see the pruning sketch after this list).
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of structural features for graph representation learning (see the distance-feature sketch after this list).
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- Bilinear Graph Neural Network with Neighbor Interactions [106.80781016591577]
Graph Neural Networks (GNNs) are powerful models for learning representations and making predictions on graph data.
We propose a new graph convolution operator, which augments the weighted sum with pairwise interactions of the representations of neighbor nodes.
We term this framework Bilinear Graph Neural Network (BGNN), which improves GNN representation ability with bilinear interactions between neighbor nodes (see the bilinear-aggregation sketch after this list).
arXiv Detail & Related papers (2020-02-10T06:43:38Z)
- EdgeNets: Edge Varying Graph Neural Networks [179.99395949679547]
This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
This is a general linear and local operation that a node can perform, and it encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs) (see the edge-varying sketch after this list).
arXiv Detail & Related papers (2020-01-21T15:51:17Z)
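For the QMPNN entry above, a hedged sketch of the quaternion building block: features are grouped into 4-component quaternions and transformed with the Hamilton product, which shares parameters across the components. The toy shapes and names are illustrative, not the paper's architecture.

```python
import numpy as np

def hamilton(q, p):
    """Hamilton product of quaternions q = (a, b, c, d) and p = (w, x, y, z)."""
    a, b, c, d = q
    w, x, y, z = p
    return np.array([
        a * w - b * x - c * y - d * z,   # real part
        a * x + b * w + c * z - d * y,   # i component
        a * y - b * z + c * w + d * x,   # j component
        a * z + b * y - c * x + d * w,   # k component
    ])

h_node = np.array([1.0, 0.0, 2.0, -1.0])  # a node feature as one quaternion
w_quat = np.array([0.5, 0.1, -0.2, 0.3])  # a quaternion "weight"
print(hamilton(w_quat, h_node))           # quaternion-valued transformation
```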
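For the PROXI entry above, a hedged sketch of the general recipe: hand-crafted proximity features for a node pair in both graph and attribute space, fed to a standard ML classifier. The specific features and the classifier are illustrative stand-ins, not the paper's exact design.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def pair_features(adj: dict, x: np.ndarray, u: int, v: int) -> np.ndarray:
    nu, nv = adj[u], adj[v]
    cn = len(nu & nv)                               # common neighbors
    jac = cn / max(len(nu | nv), 1)                 # Jaccard coefficient
    attr_dist = float(np.linalg.norm(x[u] - x[v]))  # attribute-space proximity
    return np.array([cn, jac, attr_dist])

# Toy data: adjacency sets, random node attributes, labeled candidate pairs.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
x = np.random.default_rng(0).normal(size=(4, 8))
pairs, labels = [(0, 1), (0, 3)], [1, 0]  # 1 = edge exists

X = np.stack([pair_features(adj, x, u, v) for u, v in pairs])
clf = LogisticRegression().fit(X, labels)
print(clf.predict_proba(X)[:, 1])  # predicted link probabilities
```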
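For the negative-sampling entry above, a hedged sketch of embeddings as minimizers of a forward-pass-specific energy: a few unrolled gradient steps decrease an energy that pulls positive pairs together and pushes negative pairs beyond a margin. The energy form, margin, and step count are illustrative guesses, not the paper's formulation.

```python
import torch

torch.manual_seed(0)
z = torch.randn(4, 8, requires_grad=True)  # node embeddings (4 nodes)
pos, neg = [(0, 1)], [(0, 3)]              # sampled edges / non-edges
margin, lr = 1.0, 0.1

def dist2(e):
    return (z[e[0]] - z[e[1]]).pow(2).sum()

for _ in range(20):  # unrolled "forward pass"
    energy = sum(dist2(e) for e in pos) \
        + sum(torch.relu(margin - dist2(e)) for e in neg)
    grad, = torch.autograd.grad(energy, z)
    with torch.no_grad():
        z -= lr * grad  # descend the pair-separation energy

print(dist2(pos[0]).item(), dist2(neg[0]).item())  # positives close, negatives apart
```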
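For the Neo-GNNs entry above, a hedged sketch of an overlapped-neighborhood score: a function of neighbor degree summed over the common neighbors of a candidate pair (with f(d) = 1/log d this reduces to the Adamic-Adar heuristic). The function f is a toy stand-in for the learned weighting.

```python
import math

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
f = lambda d: 1.0 / math.log(d + 1.0)  # stand-in for a learned weight function

def overlap_score(u: int, v: int) -> float:
    # Sum the degree-based weight over the overlapped neighborhood of (u, v).
    return sum(f(len(adj[k])) for k in adj[u] & adj[v])

print(overlap_score(0, 3))  # overlap through node 2
```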
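For the Kedge entry above, a hedged sketch of a learnable edge mask: one logit per edge, squashed through a sigmoid, multiplied into the messages, and trained by gradient descent with a sparsity penalty. The plain sigmoid stands in for the stochastic relaxations such methods typically use.

```python
import torch

num_edges = 5
edge_logits = torch.zeros(num_edges, requires_grad=True)  # one logit per edge
edge_weight = torch.rand(num_edges)                       # stand-in messages

mask = torch.sigmoid(edge_logits)       # soft keep-probability per edge
masked = mask * edge_weight             # sparsified messages fed to any GNN
loss = masked.sum() + 0.1 * mask.sum()  # task term + sparsity penalty
loss.backward()
print(edge_logits.grad)                 # gradients flow to the masks
```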
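For the UGS entry above, a hedged sketch of the joint pruning step: magnitude-prune a mask over the graph adjacency matrix and a mask over the model weights, keeping the surviving pair as the graph lottery ticket. The masks and fractions here are illustrative.

```python
import torch

def magnitude_prune(mask: torch.Tensor, frac: float) -> torch.Tensor:
    """Zero out the `frac` smallest-magnitude entries of a mask."""
    k = int(frac * mask.numel())
    if k == 0:
        return mask
    thresh = mask.abs().flatten().kthvalue(k).values
    return torch.where(mask.abs() > thresh, mask, torch.zeros_like(mask))

adj_mask = torch.rand(4, 4)     # soft mask over graph edges
weight_mask = torch.rand(8, 8)  # soft mask over one weight matrix
print(magnitude_prune(adj_mask, 0.2).count_nonzero().item(),
      magnitude_prune(weight_mask, 0.2).count_nonzero().item())
```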
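For the Distance Encoding entry above, a hedged sketch of the distance features: each node is annotated with its (capped) shortest-path distance to every node in the target set, here the two endpoints of a candidate link. The cap and toy graph are illustrative.

```python
from collections import deque

def bfs_distances(adj: dict, source: int, cap: int = 3) -> list:
    """Shortest-path hops from `source`, truncated at `cap` (unreached -> cap)."""
    dist = {source: 0}
    q = deque([source])
    while q:
        w = q.popleft()
        if dist[w] == cap:
            continue
        for n in adj[w]:
            if n not in dist:
                dist[n] = dist[w] + 1
                q.append(n)
    return [min(dist.get(n, cap), cap) for n in sorted(adj)]

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
u, v = 0, 3
de = list(zip(bfs_distances(adj, u), bfs_distances(adj, v)))
print(de)  # per-node (dist-to-u, dist-to-v) tuples, e.g. (0, 2) for node 0
```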
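For the BGNN entry above, a hedged sketch of the bilinear aggregation: the usual neighbor sum is concatenated with the sum of pairwise elementwise products of neighbor representations, computed in linear time via the identity sum_{i<j} h_i*h_j = ((sum_i h_i)^2 - sum_i h_i^2) / 2.

```python
import numpy as np

def bilinear_aggregate(h_neighbors: np.ndarray) -> np.ndarray:
    """h_neighbors: (num_neighbors, dim). Returns [sum ; pairwise] features."""
    s = h_neighbors.sum(axis=0)
    # Pairwise interactions without the quadratic loop over neighbor pairs.
    pairwise = 0.5 * (s**2 - (h_neighbors**2).sum(axis=0))
    return np.concatenate([s, pairwise])

h = np.random.default_rng(0).normal(size=(3, 4))  # 3 neighbors, dim 4
print(bilinear_aggregate(h).shape)  # (8,)
```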
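For the EdgeNets entry above, a hedged sketch of the edge-varying operation: a coefficient matrix supported on the adjacency pattern gives every node its own weight for each neighbor, generalizing the single shared filter tap of a GCN. Shapes and the one-hop recursion are illustrative.

```python
import numpy as np

n, d = 4, 3
rng = np.random.default_rng(0)
A = (rng.random((n, n)) < 0.5).astype(float)  # toy adjacency pattern
np.fill_diagonal(A, 0)
Phi = rng.random((n, n)) * A  # edge-varying coefficients, supported on A
X = rng.normal(size=(n, d))   # node signals

out = Phi @ X  # each row mixes neighbors with node- and edge-specific weights
print(out.shape)
```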