KG-NSF: Knowledge Graph Completion with a Negative-Sample-Free Approach
- URL: http://arxiv.org/abs/2207.14617v1
- Date: Fri, 29 Jul 2022 11:39:04 GMT
- Title: KG-NSF: Knowledge Graph Completion with a Negative-Sample-Free Approach
- Authors: Adil Bahaj and Safae Lhazmir and Mounir Ghogho
- Abstract summary: We propose KG-NSF, a negative sampling-free framework for learning KG embeddings based on the cross-correlation matrices of embedding vectors.
It is shown that the proposed method achieves comparable link prediction performance to negative sampling-based methods while converging much faster.
- Score: 4.146672630717471
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Knowledge Graph (KG) completion is an important task that greatly benefits
knowledge discovery in many fields (e.g. biomedical research). In recent years,
learning KG embeddings to perform this task has received considerable
attention. Despite the success of KG embedding methods, they predominantly use
negative sampling, resulting in increased computational complexity as well as
biased predictions due to the closed world assumption. To overcome these
limitations, we propose \textbf{KG-NSF}, a negative sampling-free framework for
learning KG embeddings based on the cross-correlation matrices of embedding
vectors. It is shown that the proposed method achieves comparable link
prediction performance to negative sampling-based methods while converging much
faster.
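The abstract does not include code; as a rough illustration of what a cross-correlation-based, negative-sample-free objective can look like (in the spirit of Barlow Twins-style redundancy reduction, which the description suggests), the sketch below is hypothetical: the function name, the two-view setup, and the `lam` weight are assumptions, not the authors' implementation.

```python
import numpy as np

def cross_correlation_loss(z_a, z_b, lam=0.005):
    """Loss on the cross-correlation matrix of two batches of embeddings.

    z_a, z_b: (batch, dim) arrays, e.g. embeddings of a (head, relation)
    query and of the corresponding tail entity for the same positive
    triples. No negative triples are needed: the objective only pushes
    the diagonal of the correlation matrix to 1 and the off-diagonal
    entries to 0.
    """
    n = z_a.shape[0]
    # Standardize each embedding dimension over the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    # Cross-correlation matrix between the two views: (dim, dim).
    c = (z_a.T @ z_b) / n
    # Diagonal entries should be 1 (matched dimensions correlate) ...
    on_diag = ((np.diag(c) - 1.0) ** 2).sum()
    # ... and off-diagonal entries 0 (dimensions are decorrelated).
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lam * off_diag
```

Because the loss is computed only over positive pairs, no corrupted triples are sampled, which is consistent with the faster convergence the abstract reports.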
Related papers
- Distill-SynthKG: Distilling Knowledge Graph Synthesis Workflow for Improved Coverage and Efficiency [59.6772484292295]
Knowledge graphs (KGs) generated by large language models (LLMs) are increasingly valuable for Retrieval-Augmented Generation (RAG) applications.
Existing KG extraction methods rely on prompt-based approaches, which are inefficient for processing large-scale corpora.
We propose SynthKG, a multi-step, document-level synthesis KG workflow based on LLMs.
We also design a novel graph-based retrieval framework for RAG.
arXiv Detail & Related papers (2024-10-22T00:47:54Z)
- Unified Interpretation of Smoothing Methods for Negative Sampling Loss Functions in Knowledge Graph Embedding [31.26112477399022]
This paper provides theoretical interpretations of the smoothing methods for the Negative Sampling (NS) loss in Knowledge Graphs (KGs).
It induces a new NS loss, Triplet Adaptive Negative Sampling (TANS), that can cover the characteristics of the conventional smoothing methods.
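As background for the loss these smoothing methods modify, a minimal sketch of the standard binary NS objective (one positive score against a set of corrupted-triple scores) might look like the following; TANS itself reweights the negative terms, which is not reproduced here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ns_loss(pos_score, neg_scores):
    """Basic negative sampling (NS) loss for one triple: push the
    observed triple's score up and the corrupted triples' scores down.
    Smoothing methods (e.g. label smoothing or self-adversarial
    weighting) change how the negative terms are weighted."""
    pos_term = -np.log(sigmoid(pos_score))
    neg_term = -np.log(sigmoid(-np.asarray(neg_scores))).mean()
    return pos_term + neg_term
```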
arXiv Detail & Related papers (2024-07-05T04:38:17Z)
- Exploring & Exploiting High-Order Graph Structure for Sparse Knowledge Graph Completion [20.45256490854869]
We present a novel framework, LR-GCN, that is able to automatically capture valuable long-range dependency among entities.
The proposed approach comprises two main components: a GNN-based predictor and a reasoning path distiller.
arXiv Detail & Related papers (2023-06-29T15:35:34Z)
- Normalizing Flow-based Neural Process for Few-Shot Knowledge Graph Completion [69.55700751102376]
Few-shot knowledge graph completion (FKGC) aims to predict missing facts for unseen relations with few-shot associated facts.
Existing FKGC methods are based on metric learning or meta-learning, which often suffer from the out-of-distribution and overfitting problems.
In this paper, we propose a normalizing flow-based neural process for few-shot knowledge graph completion (NP-FKGC).
arXiv Detail & Related papers (2023-04-17T11:42:28Z)
- Explainable Sparse Knowledge Graph Completion via High-order Graph Reasoning Network [111.67744771462873]
This paper proposes a novel explainable model for sparse Knowledge Graphs (KGs).
It integrates high-order reasoning into a graph convolutional network, named HoGRN.
It can not only improve the generalization ability to mitigate the information insufficiency issue but also provide interpretability.
arXiv Detail & Related papers (2022-07-14T10:16:56Z)
- Language Model-driven Negative Sampling [8.299192665823542]
Knowledge Graph Embeddings (KGEs) encode the entities and relations of a knowledge graph (KG) into a vector space for representation learning and reasoning in downstream tasks (e.g., link prediction, question answering).
Since KGEs follow the closed-world assumption and treat all facts present in the KG as positive (correct), they also require negative samples as a counterpart in the learning process to test the truthfulness of existing triples.
We propose an approach for generating negative samples that exploits the rich textual knowledge existing in KGs.
arXiv Detail & Related papers (2022-03-09T13:27:47Z)
- KGBoost: A Classification-based Knowledge Base Completion Method with Negative Sampling [29.14178162494542]
KGBoost is a new method to train a powerful classifier for missing link prediction.
We conduct experiments on multiple benchmark datasets, and demonstrate that KGBoost outperforms state-of-the-art methods across most datasets.
Compared with models trained by end-to-end optimization, KGBoost works well in the low-dimensional setting, allowing a smaller model size.
arXiv Detail & Related papers (2021-12-17T06:19:37Z)
- DSKReG: Differentiable Sampling on Knowledge Graph for Recommendation with Relational GNN [59.160401038969795]
We propose Differentiable Sampling on Knowledge Graph for Recommendation with relational GNN (DSKReG).
We devise a differentiable sampling strategy, which enables the selection of relevant items to be jointly optimized with the model training procedure.
The experimental results demonstrate that our model outperforms state-of-the-art KG-based recommender systems.
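The summary above does not specify how DSKReG makes neighbor sampling differentiable; one common relaxation that makes discrete selection trainable end-to-end is the Gumbel-softmax trick, sketched below purely as an illustration (the function and its parameters are assumptions, not the paper's actual formulation):

```python
import numpy as np

def gumbel_softmax_sample(logits, tau=1.0, rng=None):
    """Relaxed (differentiable) sample over neighbor relevance logits.

    Adding Gumbel noise and applying a temperature-scaled softmax
    yields a soft one-hot vector, so the selection of relevant
    neighbors can be optimized jointly with model training instead
    of being a hard, non-differentiable sampling step."""
    rng = np.random.default_rng() if rng is None else rng
    # Gumbel(0, 1) noise via the inverse-CDF transform.
    g = -np.log(-np.log(rng.uniform(1e-10, 1.0, size=logits.shape)))
    y = (logits + g) / tau
    y = y - y.max()          # numerical stability before exponentiation
    e = np.exp(y)
    return e / e.sum()       # soft one-hot distribution over neighbors
```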
arXiv Detail & Related papers (2021-08-26T16:19:59Z)
- Efficient Non-Sampling Knowledge Graph Embedding [21.074002550338296]
We propose a new framework for KG embedding: Efficient Non-Sampling Knowledge Graph Embedding (NS-KGE).
The basic idea is to consider all of the negative instances in the KG for model learning, and thus to avoid negative sampling.
Experiments on benchmark datasets show that our NS-KGE framework achieves better efficiency and accuracy than traditional negative-sampling-based models.
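Taken literally, "all of the negative instances" means a weighted squared loss over every entity for each (head, relation) query; the sketch below illustrates that idea with a DistMult-style scorer and a uniform negative weight `w_neg`, both of which are assumptions rather than NS-KGE's actual derivation (the paper's contribution is making this full sum efficient, which is not reproduced here):

```python
import numpy as np

def non_sampling_loss(e_h, r, E, pos_tails, w_neg=0.01):
    """All-negative squared loss for one (head, relation) query.

    e_h: head embedding (d,); r: relation embedding (d,);
    E: embeddings of ALL entities (n, d); pos_tails: indices of
    observed tail entities. Every entity not observed as a tail is
    treated as a negative with a small uniform weight, so no
    sampling step is needed."""
    scores = E @ (e_h * r)              # DistMult-style scores, all entities
    target = np.zeros(len(E))
    target[pos_tails] = 1.0             # observed tails are positives
    weights = np.full(len(E), w_neg)    # small weight on all non-observed
    weights[pos_tails] = 1.0            # full weight on observed triples
    return (weights * (scores - target) ** 2).sum()
```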
arXiv Detail & Related papers (2021-04-21T23:36:39Z)
- Cyclic Label Propagation for Graph Semi-supervised Learning [52.102251202186025]
We introduce a novel framework for graph semi-supervised learning called CycProp.
CycProp integrates GNNs into the process of label propagation in a cyclic and mutually reinforcing manner.
In particular, our proposed CycProp updates the node embeddings learned by the GNN module with information augmented by label propagation.
arXiv Detail & Related papers (2020-11-24T02:55:40Z)
- Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (kgPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
kgPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
arXiv Detail & Related papers (2020-03-12T12:44:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.