Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph
Link Prediction
- URL: http://arxiv.org/abs/2006.06648v3
- Date: Thu, 29 Oct 2020 10:03:57 GMT
- Title: Learning to Extrapolate Knowledge: Transductive Few-shot Out-of-Graph
Link Prediction
- Authors: Jinheon Baek, Dong Bok Lee, Sung Ju Hwang
- Abstract summary: We introduce a realistic problem of few-shot out-of-graph link prediction.
We tackle this problem with a novel transductive meta-learning framework.
We validate our model on multiple benchmark datasets for knowledge graph completion and drug-drug interaction prediction.
- Score: 69.1473775184952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many practical graph problems, such as knowledge graph construction and
drug-drug interaction prediction, require handling multi-relational graphs.
However, handling real-world multi-relational graphs with Graph Neural Networks
(GNNs) is often challenging due to their evolving nature, as new entities
(nodes) can emerge over time. Moreover, newly emerged entities often have few
links, which makes the learning even more difficult. Motivated by this
challenge, we introduce a realistic problem of few-shot out-of-graph link
prediction, where we not only predict the links between the seen and unseen
nodes as in a conventional out-of-knowledge link prediction task but also
between the unseen nodes, with only a few edges per node. We tackle this problem
with a novel transductive meta-learning framework which we refer to as Graph
Extrapolation Networks (GEN). GEN meta-learns both the node embedding network
for inductive inference (seen-to-unseen) and the link prediction network for
transductive inference (unseen-to-unseen). For transductive link prediction, we
further propose a stochastic embedding layer to model uncertainty in the link
prediction between unseen entities. We validate our model on multiple benchmark
datasets for knowledge graph completion and drug-drug interaction prediction.
The results show that our model significantly outperforms relevant baselines
for out-of-graph link prediction tasks.
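As a rough illustration of the framework described in the abstract, the sketch below shows how an embedding for an unseen entity could be aggregated from a handful of support triplets (the inductive, seen-to-unseen step) and then treated as a Gaussian whose samples are scored against candidate links (a stochastic embedding layer for the transductive, unseen-to-unseen step). This is a minimal sketch under stated assumptions, not the authors' implementation: the class and method names, the DistMult-style scorer, the mean-pooling aggregator, and all dimensions are illustrative.

```python
# Hypothetical sketch of the few-shot out-of-graph idea (not the GEN paper's code):
# aggregate a few support triplets of an unseen entity into an embedding, model that
# embedding as a Gaussian to capture uncertainty, and score candidate links with a
# simple DistMult-style decoder. Names, dimensions, and the scorer are assumptions.
import torch
import torch.nn as nn


class GENSketch(nn.Module):
    def __init__(self, num_entities: int, num_relations: int, dim: int = 100):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)   # embeddings of seen entities
        self.rel = nn.Embedding(num_relations, dim)  # relation embeddings
        self.aggregate = nn.Linear(2 * dim, dim)     # neighbour aggregation (inductive step)
        self.to_mu = nn.Linear(dim, dim)             # stochastic embedding layer:
        self.to_logvar = nn.Linear(dim, dim)         # mean / log-variance of the unseen entity

    def embed_unseen(self, support_rels, support_ents):
        """Aggregate the K few-shot support triplets (r_i, e_i) of one unseen entity."""
        msgs = torch.cat([self.rel(support_rels), self.ent(support_ents)], dim=-1)
        h = torch.relu(self.aggregate(msgs)).mean(dim=0)  # mean over the K support links
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        eps = torch.randn_like(mu)
        return mu + eps * torch.exp(0.5 * logvar)          # reparameterised Gaussian sample

    def score(self, head, rel_ids, tail):
        """DistMult-style plausibility score for candidate triplets."""
        return (head * self.rel(rel_ids) * tail).sum(dim=-1)


# Toy usage: one unseen entity with 3 support links, scored against a seen candidate tail.
model = GENSketch(num_entities=1000, num_relations=20)
unseen = model.embed_unseen(torch.tensor([3, 7, 1]), torch.tensor([10, 42, 99]))
score = model.score(unseen.unsqueeze(0), torch.tensor([5]), model.ent(torch.tensor([17])))
```

The reparameterised sample (mu + eps * exp(0.5 * logvar)) is one standard way to draw from a learned Gaussian while keeping the layer differentiable, so an uncertainty-aware embedding of this kind can be trained end-to-end with the rest of the network.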
Related papers
- Inductive Link Prediction in Knowledge Graphs using Path-based Neural Networks [1.3735277588793995]
SiaILP is a path-based model for inductive link prediction using siamese neural networks.
Our model achieves new state-of-the-art results in link prediction tasks using inductive versions of WN18RR, FB15k-237, and NELL-995.
arXiv Detail & Related papers (2023-12-16T02:26:09Z)
- Disentangling Node Attributes from Graph Topology for Improved Generalizability in Link Prediction [5.651457382936249]
Our proposed method, UPNA, solves the inductive link prediction problem by learning a function that takes a pair of node attributes and predicts the probability of an edge.
UPNA can be applied to various pairwise learning tasks and integrated with existing link prediction models to enhance their generalizability and bolster graph generative models.
arXiv Detail & Related papers (2023-07-17T22:19:12Z)
- Towards Few-shot Inductive Link Prediction on Knowledge Graphs: A Relational Anonymous Walk-guided Neural Process Approach [49.00753238429618]
Few-shot inductive link prediction on knowledge graphs aims to predict missing links for unseen entities from only a few observed links.
Recent inductive methods utilize the sub-graphs around unseen entities to obtain the semantics and predict links inductively.
We propose a novel relational anonymous walk-guided neural process for few-shot inductive link prediction on knowledge graphs, denoted as RawNP.
arXiv Detail & Related papers (2023-06-26T12:02:32Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- You Only Transfer What You Share: Intersection-Induced Graph Transfer Learning for Link Prediction [79.15394378571132]
We investigate a previously overlooked phenomenon: in many cases, a densely connected, complementary graph can be found for the original graph.
The denser graph may share nodes with the original graph, which offers a natural bridge for transferring selective, meaningful knowledge.
We identify this setting as Graph Intersection-induced Transfer Learning (GITL), which is motivated by practical applications in e-commerce or academic co-authorship predictions.
arXiv Detail & Related papers (2023-02-27T22:56:06Z)
- Generative Graph Neural Networks for Link Prediction [13.643916060589463]
Inferring missing links or detecting spurious ones based on observed graphs, known as link prediction, is a long-standing challenge in graph data analysis.
This paper proposes a novel and radically different link prediction algorithm based on the network reconstruction theory, called GraphLP.
Unlike the discriminative neural network models used for link prediction, GraphLP is generative, which provides a new paradigm for neural-network-based link prediction.
arXiv Detail & Related papers (2022-12-31T10:07:19Z)
- FakeEdge: Alleviate Dataset Shift in Link Prediction [16.161812856581676]
In a link prediction task, links in the training set are always present while those in the testing set are not yet formed, resulting in a discrepancy in connectivity patterns and a bias in the learned representations.
We propose FakeEdge, a model-agnostic technique, to address the problem by mitigating the graph topological gap between training and testing sets.
arXiv Detail & Related papers (2022-11-29T03:36:01Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
- How Neural Processes Improve Graph Link Prediction [35.652234989200956]
We propose a meta-learning approach with graph neural networks for link prediction: Neural Processes for Graph Neural Networks (NPGNN).
NPGNN can perform both transductive and inductive learning tasks and adapt to patterns in a large new graph after training with a small subgraph.
arXiv Detail & Related papers (2021-09-30T07:35:13Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning [74.05478502080658]
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)