Negative Sampling in Knowledge Graph Representation Learning: A Review
- URL: http://arxiv.org/abs/2402.19195v1
- Date: Thu, 29 Feb 2024 14:26:20 GMT
- Title: Negative Sampling in Knowledge Graph Representation Learning: A Review
- Authors: Tiroshan Madushanka, Ryutaro Ichise
- Abstract summary: Knowledge graph representation learning (KGRL) or knowledge graph embedding (KGE) plays a crucial role in AI applications for knowledge construction and information exploration.
This paper systematically reviews various negative sampling (NS) methods and their contributions to the success of KGRL.
- Score: 3.1546318469750196
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Knowledge graph representation learning (KGRL) or knowledge graph embedding (KGE) plays a crucial role in AI applications for knowledge construction and information exploration. These models aim to encode entities and relations present in a knowledge graph into a lower-dimensional vector space. During the training process of KGE models, using positive and negative samples becomes essential for discrimination purposes. However, obtaining negative samples directly from existing knowledge graphs poses a challenge, emphasizing the need for effective generation techniques. The quality of these negative samples greatly impacts the accuracy of the learned embeddings, making their generation a critical aspect of KGRL. This comprehensive survey paper systematically reviews various negative sampling (NS) methods and their contributions to the success of KGRL. Their respective advantages and disadvantages are outlined by categorizing existing NS methods into five distinct categories. Moreover, this survey identifies open research questions that serve as potential directions for future investigations. By offering a generalization and alignment of fundamental NS concepts, this survey provides valuable insights for designing effective NS methods in the context of KGRL and serves as a motivating force for further advancements in the field.
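As a concrete illustration of the generation problem the abstract describes, the simplest NS baseline corrupts the head or tail of a known triple uniformly at random, resampling whenever the corruption is itself a known positive. Below is a minimal Python sketch; the function name and toy triples are illustrative, not from the paper:

```python
import random

def uniform_negative_sample(triple, entities, known_triples):
    """Corrupt the head or tail of a positive triple uniformly at
    random, resampling whenever the corruption is a known positive."""
    h, r, t = triple
    while True:
        if random.random() < 0.5:
            candidate = (random.choice(entities), r, t)  # corrupt head
        else:
            candidate = (h, r, random.choice(entities))  # corrupt tail
        if candidate not in known_triples:               # filter false negatives
            return candidate

# Toy example: a small entity set and the known positive triples
entities = ["Paris", "France", "Tokyo", "Japan"]
known = {("Paris", "capital_of", "France"), ("Tokyo", "capital_of", "Japan")}
print(uniform_negative_sample(("Paris", "capital_of", "France"), entities, known))
```

More sophisticated NS families surveyed in the paper (adversarial, GAN-based, self-adversarial, and so on) replace the uniform choice above with learned or structure-aware distributions over candidate corruptions.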
Related papers
- Resilience in Knowledge Graph Embeddings [1.90894751866253]
We give a unified definition of resilience, encompassing several factors such as generalisation, performance consistency, distribution adaptation, and robustness.
Our survey results show that most of the existing works focus on a specific aspect of resilience, namely robustness.
arXiv Detail & Related papers (2024-10-28T16:04:22Z)
- A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation [59.14165404728197]
We provide an up-to-date and forward-looking review of deep graph learning under distribution shifts.
Specifically, we cover three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation.
To provide a better understanding of the literature, we systematically categorize the existing models based on our proposed taxonomy.
arXiv Detail & Related papers (2024-10-25T02:39:56Z)
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- KGExplainer: Towards Exploring Connected Subgraph Explanations for Knowledge Graph Completion [18.497296711526268]
We present KGExplainer, a model-agnostic method that identifies connected subgraphs and distills an evaluator to assess them quantitatively.
Experiments on benchmark datasets demonstrate that KGExplainer achieves promising improvements and an optimal ratio of 83.3% in human evaluation.
arXiv Detail & Related papers (2024-04-05T05:02:12Z)
- GOODAT: Towards Test-time Graph Out-of-Distribution Detection [103.40396427724667]
Graph neural networks (GNNs) have found widespread application in modeling graph data across diverse domains.
Recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN.
This paper introduces a data-centric, unsupervised, and plug-and-play solution that operates independently of training data and modifications of GNN architecture.
arXiv Detail & Related papers (2024-01-10T08:37:39Z)
- Energy-based Out-of-Distribution Detection for Graph Neural Networks [76.0242218180483]
We propose a simple, powerful and efficient OOD detection model for GNN-based learning on graphs, which we call GNNSafe.
GNNSafe achieves up to 17.0% AUROC improvement over the state of the art and can serve as a simple yet strong baseline in this under-developed area.
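GNNSafe builds on the standard energy score computed from a classifier's logits (it additionally propagates energies along graph edges, which is omitted here). A minimal sketch of that score, with a purely illustrative threshold:

```python
import torch

def energy_score(logits, temperature=1.0):
    """Energy of a sample given classifier logits:
    E(x) = -T * logsumexp(f(x) / T); lower energy ~ in-distribution."""
    return -temperature * torch.logsumexp(logits / temperature, dim=-1)

logits = torch.randn(5, 3)         # per-node logits from any trained GNN
scores = energy_score(logits)
threshold = scores.median()        # illustrative; chosen on validation data in practice
print(scores, scores > threshold)  # True ~ flagged as OOD
```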
arXiv Detail & Related papers (2023-02-06T16:38:43Z)
- Adversarial Robustness of Representation Learning for Knowledge Graphs [7.5765554531658665]
This thesis argues that state-of-the-art Knowledge Graph Embeddings (KGE) models are vulnerable to data poisoning attacks.
Two novel data poisoning attacks are proposed that craft input deletions or additions at training time to subvert the learned model's performance at inference time.
The evaluation shows that the simpler attacks are competitive with or outperform the computationally expensive ones.
arXiv Detail & Related papers (2022-09-30T22:41:22Z)
- An Empirical Investigation of Commonsense Self-Supervision with Knowledge Graphs [67.23285413610243]
Self-supervision based on the information extracted from large knowledge graphs has been shown to improve the generalization of language models.
We study the effect of knowledge sampling strategies and sizes that can be used to generate synthetic data for adapting language models.
arXiv Detail & Related papers (2022-05-21T19:49:04Z)
- RelWalk: A Latent Variable Model Approach to Knowledge Graph Embedding [50.010601631982425]
This paper extends the random walk model (Arora et al., 2016a) of word embeddings to Knowledge Graph Embeddings (KGEs).
We derive a scoring function that evaluates the strength of a relation R between two entities h (head) and t (tail).
We propose a learning objective motivated by the theoretical analysis to learn KGEs from a given knowledge graph.
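RelWalk derives its own scoring function from the latent-variable analysis; as a generic point of reference only, here is the classic TransE-style score for a triple (h, R, t), not RelWalk's actual function:

```python
import numpy as np

def transe_score(h_vec, r_vec, t_vec):
    """Classic translational plausibility score for a triple (h, R, t):
    -||h + r - t||_2; higher (closer to zero) means more plausible."""
    return -np.linalg.norm(h_vec + r_vec - t_vec)

rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=50) for _ in range(3))
print(transe_score(h, r, t))
```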
arXiv Detail & Related papers (2021-01-25T13:31:29Z)
- Quantifying Challenges in the Application of Graph Representation Learning [0.0]
We provide an application oriented perspective to a set of popular embedding approaches.
We evaluate their representational power with respect to real-world graph properties.
Our results suggest that "one-to-fit-all" GRL approaches are hard to define in real-world scenarios.
arXiv Detail & Related papers (2020-06-18T03:19:43Z)
- Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (KGPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
KGPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
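KGPolicy's full architecture (attention over multi-hop knowledge-graph neighbors, rewards derived from the recommender) is beyond an abstract, but the core step it relies on, sampling a hard negative from a learned policy while keeping the log-probability for a policy-gradient update, can be sketched as follows; the candidate scores here are made up:

```python
import torch
import torch.nn.functional as F

def sample_negative(policy_scores):
    """Treat policy scores over candidate items as a categorical
    distribution; sample one hard negative and keep its log-prob
    for a REINFORCE-style update."""
    dist = torch.distributions.Categorical(F.softmax(policy_scores, dim=-1))
    idx = dist.sample()
    return idx.item(), dist.log_prob(idx)

# Scores a policy network might assign to knowledge-aware candidates
scores = torch.tensor([0.2, 1.5, -0.3, 0.8])
neg_idx, log_prob = sample_negative(scores)
reward = 1.0               # placeholder; the paper derives rewards from the recommender
loss = -reward * log_prob  # policy-gradient loss for the sampler
print(neg_idx, loss)
```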
arXiv Detail & Related papers (2020-03-12T12:44:30Z)