Entity Aware Negative Sampling with Auxiliary Loss of False Negative
Prediction for Knowledge Graph Embedding
- URL: http://arxiv.org/abs/2210.06242v1
- Date: Wed, 12 Oct 2022 14:27:51 GMT
- Title: Entity Aware Negative Sampling with Auxiliary Loss of False Negative
Prediction for Knowledge Graph Embedding
- Authors: Sang-Hyun Je
- Abstract summary: We propose a novel method called Entity Aware Negative Sampling (EANS).
EANS samples negative entities that resemble the positive one by applying a Gaussian distribution over an aligned entity index space.
The proposed method can generate high-quality negative samples regardless of negative sample size and effectively mitigates the influence of false negative samples.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph (KG) embedding is widely used in many downstream applications
using KGs. Generally, since KGs contain only ground truth triples, it is
necessary to construct arbitrary negative samples for representation learning
of KGs. Recently, various methods for sampling high-quality negatives have been
studied because the quality of negative triples has a great effect on KG
embedding. In this paper, we propose a novel method called Entity Aware
Negative Sampling (EANS), which samples negative entities that resemble the
positive one by applying a Gaussian distribution over an aligned entity index
space. Additionally, we introduce an auxiliary loss for false negative
prediction that can alleviate the impact of sampled false negative triples. The
proposed method can generate high-quality negative samples regardless of
negative sample size and effectively mitigate the influence of false negative
samples. Experimental results on standard benchmarks show that EANS
outperforms existing state-of-the-art negative sampling methods on
several knowledge graph embedding models. Moreover, the proposed method
achieves competitive performance even when the number of negative samples is
limited to only one.
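As a hedged illustration of the two ideas above, here is a minimal Python sketch. It assumes entity indices have already been re-ordered ("aligned") so that similar entities sit at nearby positions, and that the false-negative predictor's per-negative probability is given; the function names, sigma, and the soft down-weighting scheme are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sample_eans_negatives(pos_idx, num_entities, k, sigma=50.0, rng=None):
    """Draw k negative entity ids near pos_idx by adding Gaussian noise
    in the aligned index space, so sampled negatives tend to be entities
    that resemble the positive one (assumes similar entities have been
    re-ordered to nearby indices)."""
    rng = rng or np.random.default_rng()
    offsets = np.rint(rng.normal(0.0, sigma, size=k)).astype(int)
    neg = (pos_idx + offsets) % num_entities
    neg[neg == pos_idx] = (pos_idx + 1) % num_entities  # never return the positive
    return neg

def weighted_negative_loss(neg_scores, false_neg_prob):
    """Down-weight each negative's softplus loss by the predicted
    probability that it is actually a false negative (a true triple)."""
    weights = 1.0 - false_neg_prob          # likely false negatives count less
    losses = np.log1p(np.exp(neg_scores))   # softplus(score) per negative
    return float((weights * losses).mean())

# usage: 5 negatives for entity 1234 among 40943 entities (a WN18RR-sized KG)
negs = sample_eans_negatives(1234, 40943, k=5)
loss = weighted_negative_loss(np.random.randn(5), np.full(5, 0.1))
```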
Related papers
- From Overfitting to Robustness: Quantity, Quality, and Variety Oriented Negative Sample Selection in Graph Contrastive Learning [38.87932592059369]
Graph contrastive learning (GCL) aims to contrast positive-negative counterparts to learn the node embeddings.
The variety, quantity, and quality of negative samples relative to positive samples play a crucial role in learning meaningful embeddings for downstream node classification tasks.
This study proposes a novel Cumulative Sample Selection (CSS) algorithm by comprehensively considering negative samples' quality, variations, and quantity.
arXiv Detail & Related papers (2024-06-21T10:47:26Z) - Contrastive Learning with Negative Sampling Correction [52.990001829393506]
We propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL).
PUCL treats the generated negative samples as unlabeled samples and uses information from positive samples to correct bias in contrastive loss.
PUCL can be applied to general contrastive learning problems and outperforms state-of-the-art methods on various image and graph classification tasks.
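A hedged sketch of one positive-unlabeled correction in this spirit: negatives are treated as unlabeled samples with a class prior `prior` of being hidden positives, whose estimated contribution is subtracted from the negative term. The estimator below follows the debiased-contrastive style; PUCL's exact estimator, prior, and temperature may differ.

```python
import math
import torch

def pu_corrected_infonce(pos_sim, neg_sim, prior=0.1, t=0.5):
    """InfoNCE whose negative term is corrected for hidden positives:
    the expected mass contributed by false negatives (fraction `prior`
    of the k unlabeled samples) is subtracted using the positive score."""
    pos = torch.exp(pos_sim / t)            # (batch,)
    neg = torch.exp(neg_sim / t)            # (batch, k)
    k = neg.shape[1]
    corrected = (neg.sum(dim=1) - k * prior * pos) / (1.0 - prior)
    corrected = corrected.clamp(min=k * math.exp(-1.0 / t))  # keep term positive
    return (-torch.log(pos / (pos + corrected))).mean()

loss = pu_corrected_infonce(torch.randn(8), torch.randn(8, 16))
```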
arXiv Detail & Related papers (2024-01-13T11:18:18Z) - Your Negative May not Be True Negative: Boosting Image-Text Matching
with False Negative Elimination [62.18768931714238]
We propose a novel False Negative Elimination (FNE) strategy to select negatives via sampling.
The results demonstrate the superiority of our proposed false negative elimination strategy.
arXiv Detail & Related papers (2023-08-08T16:31:43Z) - SimANS: Simple Ambiguous Negatives Sampling for Dense Text Retrieval [126.22182758461244]
We show that according to the measured relevance scores, the negatives ranked around the positives are generally more informative and less likely to be false negatives.
We propose a simple ambiguous negatives sampling method, SimANS, which incorporates a new sampling probability distribution to sample more ambiguous negatives.
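A minimal sketch of that sampling distribution, assuming the relevance scores are already computed: the peaked form p_i ∝ exp(-a·(s_i − s_pos − b)²) is in the spirit of the paper's distribution, while a, b, and the candidate pool below are illustrative.

```python
import numpy as np

def simans_sample(neg_scores, pos_score, k, a=1.0, b=0.0, rng=None):
    """Sample k negatives with probability peaked where a negative's
    relevance score is near the positive's (offset by b): ambiguous
    negatives are informative yet less likely to be false negatives."""
    rng = rng or np.random.default_rng()
    logits = -a * (np.asarray(neg_scores) - pos_score - b) ** 2
    p = np.exp(logits - logits.max())       # numerically stable weights
    p /= p.sum()
    return rng.choice(len(neg_scores), size=k, replace=False, p=p)

# usage: pick 4 of 100 candidate negatives for one query
idx = simans_sample(np.random.randn(100), pos_score=1.5, k=4)
```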
arXiv Detail & Related papers (2022-10-21T07:18:05Z) - Negative Sampling for Recommendation [7.758275614033198]
Effectively sampling high-quality negative instances is important for training a recommendation model well.
We argue that a high-quality negative should be both informative and unbiased.
arXiv Detail & Related papers (2022-04-02T09:50:19Z) - Language Model-driven Negative Sampling [8.299192665823542]
Knowledge Graph Embeddings (KGEs) encode the entities and relations of a knowledge graph (KG) into a vector space for representation learning and reasoning in downstream tasks (e.g., link prediction, question answering).
Since KGEs follow the closed-world assumption and treat all facts present in the KG as positive (correct), they also require negative samples as a counterpart in the learning process to test the truthfulness of existing triples.
We propose an approach for generating negative samples that exploits the rich textual knowledge in KGs.
arXiv Detail & Related papers (2022-03-09T13:27:47Z) - MixKG: Mixing for harder negative samples in knowledge graph [33.4379457065033]
Knowledge graph embedding (KGE) aims to represent entities and relations as low-dimensional vectors for many real-world applications.
We introduce an inexpensive but effective method called MixKG to generate harder negative samples for knowledge graphs.
Experiments on two public datasets and four classical KGE methods show MixKG is superior to previous negative sampling algorithms.
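A hedged sketch of the mixing idea: select the hardest sampled negatives by score, then convexly combine random pairs of their embeddings, mixup-style, to synthesize harder negatives. The selection rule, Beta mixing ratio, and names below are illustrative, not MixKG's exact recipe.

```python
import numpy as np

def mix_hard_negatives(neg_embs, neg_scores, m, alpha=0.5, rng=None):
    """Keep the m highest-scoring (hardest) negatives, then mix random
    pairs of their embedding vectors to create m synthetic negatives."""
    rng = rng or np.random.default_rng()
    hard = neg_embs[np.argsort(neg_scores)[-m:]]   # hardest m negatives
    i = rng.integers(0, m, size=m)
    j = rng.integers(0, m, size=m)
    lam = rng.beta(alpha, alpha, size=(m, 1))      # per-pair mix ratio
    return lam * hard[i] + (1.0 - lam) * hard[j]

mixed = mix_hard_negatives(np.random.randn(64, 200), np.random.randn(64), m=16)
```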
arXiv Detail & Related papers (2022-02-19T13:31:06Z) - Rethinking InfoNCE: How Many Negative Samples Do You Need? [54.146208195806636]
We study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework.
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.
arXiv Detail & Related papers (2021-05-27T08:38:29Z) - Understanding Negative Sampling in Graph Representation Learning [87.35038268508414]
We show that negative sampling is as important as positive sampling in determining the optimization objective and the resulting variance.
We propose MCNS, which approximates the positive distribution with self-contrast approximation and accelerates negative sampling via Metropolis-Hastings.
We evaluate our method on 5 datasets that cover extensive downstream graph learning tasks, including link prediction, node classification and personalized recommendation.
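A generic Metropolis-Hastings negative sampler as a hedged sketch: a chain over item ids whose accepted states follow an unnormalized target distribution. MCNS's actual proposal and self-contrast target are more elaborate; the uniform proposal and toy target here are assumptions.

```python
import numpy as np

def mh_negative_chain(target_prob, num_items, steps, rng=None):
    """Run a Metropolis-Hastings chain over item ids: propose a uniform
    random item and accept with ratio target(y')/target(y), so visited
    states approximate samples from the target negative distribution."""
    rng = rng or np.random.default_rng()
    y = int(rng.integers(num_items))
    samples = []
    for _ in range(steps):
        y_new = int(rng.integers(num_items))        # symmetric proposal
        if rng.random() < min(1.0, target_prob(y_new) / target_prob(y)):
            y = y_new                               # accept the move
        samples.append(y)
    return samples

# usage: toy target where item i is sampled with probability ∝ (i + 1)
negs = mh_negative_chain(lambda i: float(i + 1), num_items=1000, steps=50)
```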
arXiv Detail & Related papers (2020-05-20T06:25:21Z) - Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (kgPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
kgPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
arXiv Detail & Related papers (2020-03-12T12:44:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences arising from its use.