Negative Sampling in Knowledge Graph Representation Learning: A Review
- URL: http://arxiv.org/abs/2402.19195v1
- Date: Thu, 29 Feb 2024 14:26:20 GMT
- Title: Negative Sampling in Knowledge Graph Representation Learning: A Review
- Authors: Tiroshan Madushanka, Ryutaro Ichise
- Abstract summary: Knowledge graph representation learning (KGRL) or knowledge graph embedding (KGE) plays a crucial role in AI applications for knowledge construction and information exploration.
This paper systematically reviews various negative sampling (NS) methods and their contributions to the success of KGRL.
- Score: 3.1546318469750196
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Knowledge graph representation learning (KGRL) or knowledge graph embedding
(KGE) plays a crucial role in AI applications for knowledge construction and
information exploration. These models aim to encode entities and relations
present in a knowledge graph into a lower-dimensional vector space. During the
training process of KGE models, using positive and negative samples becomes
essential for discrimination purposes. However, obtaining negative samples
directly from existing knowledge graphs poses a challenge, emphasizing the need
for effective generation techniques. The quality of these negative samples
greatly impacts the accuracy of the learned embeddings, making their generation
a critical aspect of KGRL. This comprehensive survey paper systematically
reviews various negative sampling (NS) methods and their contributions to the
success of KGRL. Their respective advantages and disadvantages are outlined by
categorizing existing NS methods into five distinct categories. Moreover, this
survey identifies open research questions that serve as potential directions
for future investigations. By offering a generalization and alignment of
fundamental NS concepts, this survey provides valuable insights for designing
effective NS methods in the context of KGRL and serves as a motivating force
for further advancements in the field.
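To make the NS setting concrete, the sketch below implements the classic baseline that the surveyed methods build on: uniformly corrupting the head or tail of a true triple and training with a margin ranking loss. The TransE-style scorer, embedding sizes, and toy triple are illustrative assumptions, not a method prescribed by the survey.

```python
# A minimal sketch of the uniform-NS baseline: corrupt the head or
# tail of a true triple and train with a margin ranking loss. The
# TransE-style scorer, sizes, and the toy triple are illustrative
# assumptions, not taken from the survey.
import numpy as np

rng = np.random.default_rng(0)
num_entities, num_relations, dim = 1000, 50, 64

E = rng.normal(size=(num_entities, dim))   # entity embeddings
R = rng.normal(size=(num_relations, dim))  # relation embeddings

def score(h, r, t):
    """TransE-style distance: lower = more plausible triple."""
    return np.linalg.norm(E[h] + R[r] - E[t])

def corrupt(h, r, t):
    """Uniform negative sampling: replace head or tail at random."""
    if rng.random() < 0.5:
        return int(rng.integers(num_entities)), r, t
    return h, r, int(rng.integers(num_entities))

def margin_loss(pos, neg, margin=1.0):
    """Hinge loss pushing the positive below the negative by a margin."""
    return max(0.0, margin + score(*pos) - score(*neg))

positive = (0, 3, 42)           # a (head, relation, tail) index triple
negative = corrupt(*positive)   # its corrupted counterpart
print(margin_loss(positive, negative))
```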
Related papers
- KGExplainer: Towards Exploring Connected Subgraph Explanations for Knowledge Graph Completion [18.497296711526268]
We present KGExplainer, a model-agnostic method that identifies connected subgraphs and distills an evaluator to assess them quantitatively.
Experiments on benchmark datasets demonstrate that KGExplainer achieves promising improvements and an optimal ratio of 83.3% in human evaluation.
arXiv Detail & Related papers (2024-04-05T05:02:12Z)
- Overcoming Pitfalls in Graph Contrastive Learning Evaluation: Toward Comprehensive Benchmarks [60.82579717007963]
We introduce an enhanced evaluation framework designed to more accurately gauge the effectiveness, consistency, and overall capability of Graph Contrastive Learning (GCL) methods.
arXiv Detail & Related papers (2024-02-24T01:47:56Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- KGA: A General Machine Unlearning Framework Based on Knowledge Gap Alignment [51.15802100354848]
We propose a general unlearning framework called KGA to induce forgetfulness.
Experiments on large-scale datasets show that KGA yields comprehensive improvements over baselines.
arXiv Detail & Related papers (2023-05-11T02:44:29Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over state-of-the-art methods and can serve as a simple yet strong baseline in this under-developed area.
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
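Energy-based detectors of this kind typically score each node by the negative log-sum-exp of its classifier logits and, in GNNSafe's spirit, smooth the scores over the graph structure. The sketch below is a minimal illustration under those assumptions; the paper's exact propagation scheme and hyperparameters differ.

```python
# A minimal sketch of energy-based OOD scoring for graph nodes,
# assuming the standard energy score plus a simple neighbourhood
# smoothing step; GNNSafe's exact propagation scheme differs.
import numpy as np

def energy_score(logits, temperature=1.0):
    """Negative log-sum-exp of logits: higher = more likely OOD."""
    z = logits / temperature
    m = z.max(axis=-1, keepdims=True)  # stabilise the exponentials
    return -temperature * (m.squeeze(-1) + np.log(np.exp(z - m).sum(-1)))

def propagate(scores, adj, alpha=0.5, steps=2):
    """Smooth per-node scores over the graph so that a node's
    neighbourhood informs its OOD estimate."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    for _ in range(steps):
        scores = alpha * scores + (1 - alpha) * (adj @ scores[:, None] / deg).squeeze(-1)
    return scores

logits = np.array([[4.0, 0.1, 0.2],    # confident node: low energy
                   [0.8, 0.9, 1.0]])   # uncertain node: high energy
adj = np.array([[0.0, 1.0],
                [1.0, 0.0]])           # toy two-node adjacency
print(propagate(energy_score(logits), adj))
```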
- Adversarial Robustness of Representation Learning for Knowledge Graphs [7.5765554531658665]
This thesis argues that state-of-the-art Knowledge Graph Embedding (KGE) models are vulnerable to data poisoning attacks.
Two novel data poisoning attacks are proposed that craft input deletions or additions at training time to subvert the learned model's performance at inference time.
The evaluation shows that the simpler attacks are competitive with or outperform the computationally expensive ones.
arXiv Detail & Related papers (2022-09-30T22:41:22Z)
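As a schematic illustration only (the thesis's attacks use more principled selection criteria), a naive deletion attack might remove the training triples in a target entity's neighbourhood that the current model scores as most plausible:

```python
# A naive, schematic deletion attack: remove the neighbourhood triples
# the current model trusts most, on the heuristic that they support
# predictions about the target entity. The thesis's attacks use more
# principled selection criteria; the scorer here is a toy stand-in.
import numpy as np

rng = np.random.default_rng(2)
num_entities, num_relations, dim = 100, 10, 16
E = rng.normal(size=(num_entities, dim))
R = rng.normal(size=(num_relations, dim))

def plausibility(h, r, t):
    """TransE-style score: higher = the model finds (h, r, t) plausible."""
    return -np.linalg.norm(E[h] + R[r] - E[t])

def deletion_attack(train_triples, target_entity, budget=2):
    """Pick up to `budget` training triples around the target to delete."""
    neighbourhood = [x for x in train_triples if target_entity in (x[0], x[2])]
    return sorted(neighbourhood, key=lambda x: plausibility(*x),
                  reverse=True)[:budget]

train = [(0, 1, 2), (0, 3, 4), (5, 1, 0), (6, 2, 7)]
print(deletion_attack(train, target_entity=0))
```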
- Language Model-driven Negative Sampling [8.299192665823542]
Knowledge Graph Embeddings (KGEs) encode the entities and relations of a knowledge graph (KG) into a vector space for representation learning and reasoning in downstream tasks (e.g., link prediction, question answering).
Since KGEs follow the closed-world assumption and treat all facts present in the KG as positive (correct), they also require negative samples as a counterpart during training to test the truthfulness of triples.
We propose an approach for generating negative samples that exploits the rich textual knowledge existing in KGs.
arXiv Detail & Related papers (2022-03-09T13:27:47Z)
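A schematic sketch of what language-model-guided negative sampling can look like: corrupt a triple, verbalise each candidate, and keep the candidates the LM rates most plausible (hard negatives). The `lm_plausibility` stub is a hypothetical stand-in for a real language model; the paper's actual procedure may differ.

```python
# A schematic sketch of LM-guided negative sampling: corrupt a triple,
# verbalise the candidates, and keep those the LM rates most plausible
# (hard negatives). lm_plausibility is a hypothetical stand-in for a
# real language model's likelihood; the paper's procedure may differ.
import random

random.seed(0)
entities = ["Paris", "Berlin", "France", "Germany"]

def verbalise(h, r, t):
    """Turn a triple into text the LM can score."""
    return f"{h} {r.replace('_', ' ')} {t}"

def lm_plausibility(sentence):
    """Hypothetical scorer; replace with an LM's (pseudo-)log-likelihood."""
    return random.random()

def lm_negatives(h, r, t, k=2):
    """Rank tail corruptions by LM plausibility; keep the hardest ones."""
    candidates = [(h, r, e) for e in entities if e != t]
    ranked = sorted(candidates, key=lambda c: lm_plausibility(verbalise(*c)),
                    reverse=True)
    return ranked[:k]   # plausible to the LM, but absent from the KG

print(lm_negatives("Paris", "capital_of", "France"))
```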
- RelWalk: A Latent Variable Model Approach to Knowledge Graph Embedding [50.010601631982425]
This paper extends the random walk model (Arora et al., 2016a) of word embeddings to Knowledge Graph Embeddings (KGEs).
We derive a scoring function that evaluates the strength of a relation R between two entities h (head) and t (tail).
We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph.
arXiv Detail & Related papers (2021-01-25T13:31:29Z)
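The paper derives its scoring function from the random-walk analysis itself; purely to illustrate the general shape of such functions, here is a relation-specific bilinear score. This is a generic stand-in, not RelWalk's derived formula.

```python
# A generic relation-specific bilinear score f(h, R, t) = h^T M_R t,
# shown only to illustrate the shape of such scoring functions; this
# is a stand-in, not the formula RelWalk actually derives.
import numpy as np

rng = np.random.default_rng(1)
dim = 8
h = rng.normal(size=dim)           # head entity embedding
t = rng.normal(size=dim)           # tail entity embedding
M_R = rng.normal(size=(dim, dim))  # matrix representing relation R

def bilinear_score(h, M_R, t):
    """Higher score = stronger evidence that (h, R, t) holds."""
    return float(h @ M_R @ t)

print(bilinear_score(h, M_R, t))
```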
- Quantifying Challenges in the Application of Graph Representation Learning [0.0]
We provide an application-oriented perspective on a set of popular embedding approaches.
We evaluate their representational power with respect to real-world graph properties.
Our results suggest that "one-to-fit-all" GRL approaches are hard to define in real-world scenarios.
arXiv Detail & Related papers (2020-06-18T03:19:43Z)
- Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (kgPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
kgPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
arXiv Detail & Related papers (2020-03-12T12:44:30Z)
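Schematically, such an agent starts at the positive item, walks over KG neighbours under a learned policy, and emits the item it lands on as a hard negative. The toy graph, `policy_score` stub, and two-hop walk below are illustrative assumptions, not kgPolicy's actual architecture.

```python
# A schematic sketch of policy-driven negative exploration: start at
# the positive item, walk over KG neighbours under a learned policy,
# and emit the item reached as a hard negative. The toy graph,
# policy_score stub, and two-hop walk are illustrative assumptions,
# not kgPolicy's actual architecture.
import math
import random

random.seed(0)
# Toy KG: each node maps to its neighbouring entities/items.
kg = {"item_pos": ["director_a", "genre_x"],
      "director_a": ["item_neg1", "item_neg2"],
      "genre_x": ["item_neg3"]}

def policy_score(node):
    """Hypothetical stand-in for the learned policy network's score."""
    return random.random()

def softmax_sample(nodes):
    """Sample a neighbour with probability proportional to exp(score)."""
    weights = [math.exp(policy_score(n)) for n in nodes]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for node, w in zip(nodes, weights):
        acc += w
        if r <= acc:
            return node
    return nodes[-1]

def explore_negative(pos_item, hops=2):
    """Two-hop walk: item -> shared attribute -> knowledge-aware negative."""
    node = pos_item
    for _ in range(hops):
        node = softmax_sample(kg.get(node, [pos_item]))
    return node

print(explore_negative("item_pos"))
```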