Normalizing Flow-based Neural Process for Few-Shot Knowledge Graph Completion
- URL: http://arxiv.org/abs/2304.08183v1
- Date: Mon, 17 Apr 2023 11:42:28 GMT
- Title: Normalizing Flow-based Neural Process for Few-Shot Knowledge Graph Completion
- Authors: Linhao Luo, Yuan-Fang Li, Gholamreza Haffari, and Shirui Pan
- Abstract summary: Few-shot knowledge graph completion (FKGC) aims to predict missing facts for unseen relations with few-shot associated facts.
Existing FKGC methods are based on metric learning or meta-learning, which often suffer from the out-of-distribution and overfitting problems.
In this paper, we propose a normalizing flow-based neural process for few-shot knowledge graph completion (NP-FKGC).
- Score: 69.55700751102376
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graphs (KGs), as a structured form of knowledge representation,
have been widely applied in the real world. Recently, few-shot knowledge graph
completion (FKGC), which aims to predict missing facts for unseen relations
with few-shot associated facts, has attracted increasing attention from
practitioners and researchers. However, existing FKGC methods are based on
metric learning or meta-learning, which often suffer from the
out-of-distribution and overfitting problems. Moreover, they are ill-equipped
to estimate the uncertainty of their predictions, which is critical because
model predictions can be highly unreliable in few-shot settings. Furthermore,
most of them cannot handle complex relations and ignore path information in
KGs, which largely limits their performance. In this paper, we propose a
normalizing flow-based neural process for few-shot knowledge graph completion
(NP-FKGC). Specifically, we unify normalizing flows and neural processes to
model a complex distribution of KG completion functions. This offers a novel
way to predict facts for few-shot relations while estimating the uncertainty.
Then, we propose a stochastic ManifoldE decoder to incorporate the neural
process and handle complex relations in few-shot settings. To further improve
performance, we introduce an attentive relation path-based graph neural network
to capture path information in KGs. Extensive experiments on three public
datasets demonstrate that our method significantly outperforms the existing
FKGC methods and achieves state-of-the-art performance. Code is available at
https://github.com/RManLuo/NP-FKGC.git.
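To make the recipe concrete, below is a minimal, hypothetical PyTorch sketch of the three ingredients the abstract names: a neural-process encoder over the support set, a normalizing flow that reshapes the latent into a more expressive distribution, and a ManifoldE-style decoder. The module names, dimensions, and the planar-flow choice are illustrative assumptions, not the authors' exact architecture (see the linked repository for that).

```python
import torch
import torch.nn as nn

class PlanarFlow(nn.Module):
    """One planar-flow step: z' = z + u * tanh(w.z + b)."""
    def __init__(self, dim):
        super().__init__()
        self.u = nn.Parameter(torch.randn(dim) * 0.01)
        self.w = nn.Parameter(torch.randn(dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(1))

    def forward(self, z):  # z: (dim,) single latent vector
        return z + self.u * torch.tanh(torch.dot(z, self.w) + self.b)

class NeuralProcessFlow(nn.Module):
    """Hypothetical sketch: a neural process whose Gaussian latent is pushed
    through a stack of flows to model a richer, non-Gaussian distribution
    over KG-completion functions."""
    def __init__(self, ent_dim, z_dim, n_flows=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(2 * ent_dim, z_dim), nn.ReLU(), nn.Linear(z_dim, 2 * z_dim))
        self.flows = nn.ModuleList(PlanarFlow(z_dim) for _ in range(n_flows))

    def forward(self, support_heads, support_tails):  # each: (K, ent_dim)
        # Encode each support pair, then mean-pool (permutation-invariant).
        enc = self.encoder(torch.cat([support_heads, support_tails], -1)).mean(0)
        mu, log_sigma = enc.chunk(2, dim=-1)
        z = mu + log_sigma.exp() * torch.randn_like(mu)  # reparameterized sample
        for flow in self.flows:  # the flows transform the latent
            z = flow(z)
        return z  # conditions the decoder below

def manifold_score(head, rel, tail, radius):
    """ManifoldE-style score: squared deviation of (h + r, t) from a sphere
    of learned radius; lower is better. In a stochastic decoder, 'radius'
    (or the embeddings) would be predicted from the latent z."""
    return (torch.norm(head + rel - tail, dim=-1) ** 2 - radius ** 2) ** 2
```

Resampling z several times yields an ensemble of completion functions; the variance of their scores is a natural uncertainty estimate, which is exactly what the abstract argues metric- and meta-learning baselines lack.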
Related papers
- Graph Stochastic Neural Process for Inductive Few-shot Knowledge Graph Completion [63.68647582680998]
We focus on a task called inductive few-shot knowledge graph completion (I-FKGC).
Inspired by the idea of inductive reasoning, we cast I-FKGC as an inductive reasoning problem.
We present a neural process-based hypothesis extractor that models the joint distribution of hypotheses, from which we can sample a hypothesis for predictions.
In the second module, based on the hypothesis, we propose a graph attention-based predictor to test if the triple in the query set aligns with the extracted hypothesis.
arXiv Detail & Related papers (2024-08-03T13:37:40Z) - Probabilistically Rewired Message-Passing Neural Networks [41.554499944141654]
- Probabilistically Rewired Message-Passing Neural Networks [41.554499944141654]
Message-passing graph neural networks (MPNNs) have emerged as powerful tools for processing graph-structured input.
MPNNs operate on a fixed input graph structure, ignoring potential noise and missing information.
We devise probabilistically rewired MPNNs (PR-MPNNs), which learn to add relevant edges while omitting less beneficial ones.
arXiv Detail & Related papers (2023-10-03T15:43:59Z) - Label Deconvolution for Node Representation Learning on Large-scale
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on the Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z) - Explainable Sparse Knowledge Graph Completion via High-order Graph
- Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It integrates high-order reasoning into a graph convolutional network, yielding HoGRN.
It not only improves generalization, mitigating the information-insufficiency issue, but also provides interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z) - Simple and Effective Relation-based Embedding Propagation for Knowledge
- Simple and Effective Relation-based Embedding Propagation for Knowledge Representation Learning [15.881121633396832]
We propose the Relation-based Embedding Propagation (REP) method to adapt pretrained graph embeddings with context.
We show that REP brings about 10% relative improvement to triplet-based embedding methods on OGBL-WikiKG2.
It takes 5%-83% of the time to achieve results comparable to the state-of-the-art GC-OTE.
arXiv Detail & Related papers (2022-05-13T06:02:13Z) - Tackling Oversmoothing of GNNs with Contrastive Learning [35.88575306925201]
- Tackling Oversmoothing of GNNs with Contrastive Learning [35.88575306925201]
Graph neural networks (GNNs) couple the relational structure of graph data with representation-learning capability.
Oversmoothing makes the final representations of nodes indiscriminative, thus deteriorating the node classification and link prediction performance.
We propose the Topology-guided Graph Contrastive Layer, named TGCL, which is the first de-oversmoothing method maintaining all three mentioned metrics.
arXiv Detail & Related papers (2021-10-26T15:56:16Z) - Learning Intents behind Interactions with Knowledge Graph for
- Learning Intents behind Interactions with Knowledge Graph for Recommendation [93.08709357435991]
Knowledge graph (KG) plays an increasingly important role in recommender systems.
Existing GNN-based models fail to identify user-item relations at the fine-grained level of intents.
We propose a new model, Knowledge Graph-based Intent Network (KGIN).
arXiv Detail & Related papers (2021-02-14T03:21:36Z) - Exploring the Limits of Few-Shot Link Prediction in Knowledge Graphs [49.6661602019124]
- Exploring the Limits of Few-Shot Link Prediction in Knowledge Graphs [49.6661602019124]
We study a spectrum of models derived by generalizing the current state of the art for few-shot link prediction.
We find that a simple zero-shot baseline - which ignores any relation-specific information - achieves surprisingly strong performance.
Experiments on carefully crafted synthetic datasets show that having only a few examples of a relation fundamentally limits models from using fine-grained structural information.
arXiv Detail & Related papers (2021-02-05T21:04:31Z)