Efficient Link Prediction via GNN Layers Induced by Negative Sampling
- URL: http://arxiv.org/abs/2310.09516v1
- Date: Sat, 14 Oct 2023 07:02:54 GMT
- Title: Efficient Link Prediction via GNN Layers Induced by Negative Sampling
- Authors: Yuxin Wang, Xiannian Hu, Quan Gan, Xuanjing Huang, Xipeng Qiu, David
Wipf
- Abstract summary: Graph neural networks (GNNs) for link prediction can loosely be divided into two broad categories.
First, \emph{node-wise} architectures pre-compute individual embeddings for each node that are later combined by a simple decoder to make predictions.
Second, \emph{edge-wise} methods rely on the formation of edge-specific subgraph embeddings to enrich the representation of pair-wise relationships.
- Score: 92.05291395292537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) for link prediction can loosely be divided into
two broad categories. First, \emph{node-wise} architectures pre-compute
individual embeddings for each node that are later combined by a simple decoder
to make predictions. While extremely efficient at inference time (since node
embeddings are only computed once and repeatedly reused), model expressiveness
is limited such that isomorphic nodes contributing to candidate edges may not
be distinguishable, compromising accuracy. In contrast, \emph{edge-wise}
methods rely on the formation of edge-specific subgraph embeddings to enrich
the representation of pair-wise relationships, disambiguating isomorphic nodes
to improve accuracy, but with the cost of increased model complexity. To better
navigate this trade-off, we propose a novel GNN architecture whereby the
\emph{forward pass} explicitly depends on \emph{both} positive (as is typical)
and negative (unique to our approach) edges to inform more flexible, yet still
cheap node-wise embeddings. This is achieved by recasting the embeddings
themselves as minimizers of a forward-pass-specific energy function (distinct
from the actual training loss) that favors separation of positive and negative
samples. As demonstrated by extensive empirical evaluations, the resulting
architecture retains the inference speed of node-wise models, while producing
competitive accuracy with edge-wise alternatives.
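The mechanism lends itself to a compact sketch. Below is a minimal, hedged illustration of the general idea rather than the authors' exact layer: node embeddings are obtained by unrolling a few gradient-descent steps on a forward-pass energy that pulls the endpoints of positive edges together and pushes sampled negative pairs apart. The module name, the hinge-style energy, the step count, and the step size are all assumptions made for the example.

```python
import torch
import torch.nn as nn

class EnergyInducedEmbedding(nn.Module):
    """Illustrative sketch only: node embeddings computed by unrolling
    gradient steps on a forward-pass energy that separates positive and
    negative edges. Not the paper's exact architecture."""

    def __init__(self, in_dim, hid_dim, steps=4, alpha=0.1):
        super().__init__()
        self.proj = nn.Linear(in_dim, hid_dim)  # base embedding from node features
        self.steps = steps                      # number of unrolled descent steps
        self.alpha = alpha                      # inner descent step size

    def energy(self, z, pos_edges, neg_edges):
        # Pull endpoints of positive edges together ...
        pos = (z[pos_edges[0]] - z[pos_edges[1]]).pow(2).sum()
        # ... and push endpoints of sampled negative edges apart
        # (the hinge keeps the energy bounded below).
        neg_d = (z[neg_edges[0]] - z[neg_edges[1]]).pow(2).sum(-1)
        neg = torch.relu(1.0 - neg_d).sum()
        return pos + neg

    def forward(self, x, pos_edges, neg_edges):
        z = self.proj(x)
        for _ in range(self.steps):
            # Differentiable inner descent: gradients also flow back to self.proj.
            g = torch.autograd.grad(self.energy(z, pos_edges, neg_edges),
                                    z, create_graph=True)[0]
            z = z - self.alpha * g
        return z  # one embedding per node, as in node-wise models
```

Because the output is still one embedding per node, candidate edges can be scored by a cheap decoder such as a dot product, which is what preserves node-wise inference speed.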
Related papers
- Sparse Decomposition of Graph Neural Networks [20.768412002413843]
We propose an approach to reduce the number of nodes that are included during aggregation.
We achieve this through a sparse decomposition, learning to approximate node representations using a weighted sum of linearly transformed features.
We demonstrate via extensive experiments that our method outperforms other baselines designed for inference speedup.
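A toy reading of that decomposition, with all names, the fixed contributor sets, and the softmax weighting being illustrative assumptions (the paper learns the decomposition):

```python
import torch
import torch.nn as nn

# Hypothetical sketch: approximate each node's aggregated representation
# by a sparse weighted sum of linearly transformed raw features, so that
# inference touches only k contributing nodes per target node.
class SparseDecomposition(nn.Module):
    def __init__(self, in_dim, out_dim, num_nodes, k=8):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        # Contributor sets per node; fixed random here for illustration.
        self.register_buffer("idx", torch.randint(0, num_nodes, (num_nodes, k)))
        self.w = nn.Parameter(torch.randn(num_nodes, k))  # learned mixing weights

    def forward(self, x):
        contrib = self.lin(x)[self.idx]                   # (N, k, out_dim)
        return (self.w.softmax(-1).unsqueeze(-1) * contrib).sum(1)
```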
arXiv Detail & Related papers (2024-10-25T17:52:16Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that weight matrices are learned separately for the nodes in each degree group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
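A minimal sketch of per-degree-group weights; the bucket boundaries and bucket count below are arbitrary assumptions:

```python
import torch
import torch.nn as nn

# Hedged sketch: bucket nodes by degree and apply a separate linear
# transform per bucket, rather than sharing one weight matrix.
class DegreeStratifiedLinear(nn.Module):
    def __init__(self, in_dim, out_dim, bounds=(2, 8, 32)):
        super().__init__()
        self.register_buffer("bounds", torch.tensor(bounds))
        self.lins = nn.ModuleList(
            nn.Linear(in_dim, out_dim) for _ in range(len(bounds) + 1))

    def forward(self, h, degree):
        group = torch.bucketize(degree, self.bounds)      # group id per node
        out = h.new_zeros(h.size(0), self.lins[0].out_features)
        for g, lin in enumerate(self.lins):
            mask = group == g
            if mask.any():
                out[mask] = lin(h[mask])                  # per-group weights
        return out
```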
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
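A loose sketch of the pre-computation idea: propagate features once along each relation, then compress the resulting views with a fixed random projection so downstream tensors stay regular-shaped. The relation adjacencies, hop count, and seeding are toy assumptions; RpHGNN's actual operator differs in detail.

```python
import torch

def precompute_random_projection(feats, rel_adjs, out_dim, hops=2):
    """feats: (N, d) node features; rel_adjs: list of dense (N, N)
    relation adjacency matrices. Returns compressed multi-hop features."""
    torch.manual_seed(0)               # fixed projection for reproducibility
    views = [feats]
    frontier = [feats]
    for _ in range(hops):
        # One-time message passing: expand every view along every relation.
        frontier = [adj @ v for adj in rel_adjs for v in frontier]
        views.extend(frontier)
    # A single random projection keeps the concatenation's width in check.
    cat = torch.cat(views, dim=1)
    proj = torch.randn(cat.size(1), out_dim) / cat.size(1) ** 0.5
    return cat @ proj                  # fed to a simple downstream model
```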
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
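NodeFormer's operator builds on a kernelized Gumbel-Softmax; as a simpler stand-in, the sketch below uses positive random features for the plain softmax kernel, which conveys how all-pair propagation avoids materializing the N x N attention matrix:

```python
import torch

# Simplified stand-in for kernelized all-pair message passing: positive
# random features approximate softmax attention in O(N) rather than O(N^2).
# (NodeFormer itself uses a kernelized Gumbel-Softmax; this is the plain
# softmax-kernel variant, shown for illustration only.)
def kernelized_all_pair(q, k, v, num_feats=64):
    d = q.size(-1)
    q, k = q / d ** 0.25, k / d ** 0.25            # softmax temperature
    w = torch.randn(num_feats, d)                  # random feature directions

    def phi(x):  # positive random features for the softmax kernel
        return torch.exp(x @ w.t() - x.pow(2).sum(-1, keepdim=True) / 2)

    qp, kp = phi(q), phi(k)                        # (N, m) feature maps
    kv = kp.t() @ v                                # (m, d_v) shared summary
    z = qp @ kp.sum(0, keepdim=True).t()           # (N, 1) normalizer
    return (qp @ kv) / z.clamp_min(1e-6)           # all-pair propagation
```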
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Refined Edge Usage of Graph Neural Networks for Edge Prediction [51.06557652109059]
We propose a novel edge prediction paradigm named Edge-aware Message PassIng neuRal nEtworks (EMPIRE).
We first introduce an edge-splitting technique that specifies the use of each edge: every edge serves solely as either topology or supervision.
To emphasize the difference between pairs connected by supervision edges and unconnected pairs, we further weight the messages, highlighting those that reflect this difference.
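A minimal sketch of the edge-splitting step, assuming a simple random 50/50 split (the paper's exact criterion may differ):

```python
import torch

# Each training edge serves as either message-passing topology or
# supervision signal, never both, avoiding label leakage through
# the graph structure.
def split_edges(edge_index, topo_ratio=0.5, seed=0):
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(edge_index.size(1), generator=g)
    cut = int(topo_ratio * edge_index.size(1))
    topo_edges = edge_index[:, perm[:cut]]        # used for propagation
    sup_edges = edge_index[:, perm[cut:]]         # used as positive labels
    return topo_edges, sup_edges
```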
arXiv Detail & Related papers (2022-12-25T23:19:56Z)
- Relation-aware Graph Attention Model With Adaptive Self-adversarial Training [29.240686573485718]
This paper describes an end-to-end solution for the relationship prediction task in heterogeneous, multi-relational graphs.
We particularly address two building blocks in the pipeline, namely heterogeneous graph representation learning and negative sampling.
We introduce a parameter-free negative sampling technique, adaptive self-adversarial (ASA) negative sampling.
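ASA itself is parameter-free and adapts its difficulty criterion; the sketch below shows only the generic self-adversarial mechanism it builds on, with `score_fn` a hypothetical scorer over (head, tail) id pairs:

```python
import torch

# Generic self-adversarial negative sampling sketch: draw candidate
# negatives uniformly, then resample in proportion to the current model's
# scores so training focuses on hard negatives. ASA's adaptive criterion
# differs in detail; this conveys the mechanism.
def self_adversarial_negatives(score_fn, heads, num_nodes, n_cand=32):
    cand = torch.randint(0, num_nodes, (heads.size(0), n_cand))
    with torch.no_grad():
        s = score_fn(heads.unsqueeze(1).expand_as(cand), cand)   # (B, n_cand)
        pick = torch.multinomial(s.softmax(-1), 1).squeeze(-1)   # hard picks
    return cand.gather(1, pick.unsqueeze(-1)).squeeze(-1)        # (B,) tails
```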
arXiv Detail & Related papers (2021-02-14T16:11:56Z)
- Adversarial Permutation Guided Node Representations for Link Prediction [27.31800918961859]
Link prediction (LP) algorithms identify node pairs between which new edges are likely to materialize in the future.
Most LP algorithms estimate a score for currently non-neighboring node pairs, and rank them by this score.
We propose PermGNN, which aggregates neighbor features using a recurrent, order-sensitive aggregator and directly minimizes an LP loss while it is 'attacked' by an adversarial generator of neighbor permutations.
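A sketch of an order-sensitive recurrent aggregator; random permutations stand in for PermGNN's adversarial permutation generator to keep the example self-contained:

```python
import torch
import torch.nn as nn

# An LSTM over the neighbor sequence is order-sensitive by construction;
# PermGNN trains it against worst-case neighbor orderings chosen by an
# adversary (random permutations below are a simplifying stand-in).
class RecurrentAggregator(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.rnn = nn.LSTM(dim, dim, batch_first=True)

    def forward(self, neigh_feats):                 # (B, max_deg, dim)
        perm = torch.randperm(neigh_feats.size(1))  # adversary would pick this
        _, (h, _) = self.rnn(neigh_feats[:, perm])
        return h.squeeze(0)                         # (B, dim) node embedding
```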
arXiv Detail & Related papers (2020-12-13T03:52:25Z)
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
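A rough sketch of post-hoc differentiable edge masking, assuming a frozen `gnn` that accepts per-edge weights; the paper's amortized, sparsity-regularized objective is more involved:

```python
import torch
import torch.nn as nn

# Learn a sigmoid gate per edge while keeping the trained GNN fixed;
# the penalty term pushes unnecessary gates toward zero.
def fit_edge_mask(gnn, x, edge_index, target, steps=200, lam=1e-2):
    logits = nn.Parameter(torch.zeros(edge_index.size(1)))
    opt = torch.optim.Adam([logits], lr=0.1)
    for _ in range(steps):
        mask = torch.sigmoid(logits)                # soft edge gates in [0, 1]
        pred = gnn(x, edge_index, edge_weight=mask) # assumed model signature
        loss = nn.functional.mse_loss(pred, target) + lam * mask.sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(logits) > 0.5              # edges deemed necessary
```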
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
- PushNet: Efficient and Adaptive Neural Message Passing [1.9121961872220468]
Message passing neural networks have recently evolved into a state-of-the-art approach to representation learning on graphs.
Existing methods perform synchronous message passing along all edges over multiple successive rounds.
We consider a novel asynchronous message passing approach where information is pushed only along the most relevant edges until convergence.
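A push-style propagation loop in the same spirit, shown as a toy scalar diffusion on an unweighted graph rather than PushNet's learned feature pushes:

```python
import numpy as np

# Asynchronous push sketch: repeatedly push residual mass from active
# nodes to their neighbors until all residuals fall below a tolerance,
# so only the relevant part of the graph is ever touched.
def push_propagate(adj_list, seed, alpha=0.15, eps=1e-4):
    n = len(adj_list)
    p, r = np.zeros(n), np.zeros(n)   # accumulated scores and residuals
    r[seed] = 1.0
    active = [seed]
    while active:
        u = active.pop()
        if r[u] < eps:
            continue
        p[u] += alpha * r[u]          # retain a share locally
        share = (1 - alpha) * r[u] / max(len(adj_list[u]), 1)
        r[u] = 0.0
        for v in adj_list[u]:         # push only along u's edges
            r[v] += share
            if r[v] >= eps:
                active.append(v)
    return p                          # converged diffusion scores
```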
arXiv Detail & Related papers (2020-03-04T18:15:30Z)