Negative Sampling with Adaptive Denoising Mixup for Knowledge Graph Embedding
- URL: http://arxiv.org/abs/2310.09781v1
- Date: Sun, 15 Oct 2023 09:01:24 GMT
- Title: Negative Sampling with Adaptive Denoising Mixup for Knowledge Graph Embedding
- Authors: Xiangnan Chen, Wen Zhang, Zhen Yao, Mingyang Chen, Siliang Tang
- Abstract summary: Knowledge graph embedding (KGE) aims to map entities and relations of a knowledge graph (KG) into a low-dimensional and dense vector space via contrasting the positive and negative triples.
Negative sampling is essential to find high-quality negative triples since KGs only contain positive triples.
Most existing negative sampling methods assume that non-existent triples with high scores are high-quality negative triples.
- Score: 36.24764655034505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Knowledge graph embedding (KGE) aims to map entities and relations of a
knowledge graph (KG) into a low-dimensional and dense vector space via
contrasting the positive and negative triples. In the training process of KGEs,
negative sampling is essential to find high-quality negative triples since KGs
only contain positive triples. Most existing negative sampling methods assume
that non-existent triples with high scores are high-quality negative triples.
However, negative triples sampled by these methods are likely to contain noise.
Specifically, they ignore that non-existent triples with high scores might also
be true facts due to the incompleteness of KGs, which are usually called false
negative triples. To alleviate the above issue, we propose an easily pluggable
denoising mixup method called DeMix, which generates high-quality triples by
refining sampled negative triples in a self-supervised manner. Given a sampled
unlabeled triple, DeMix first classifies it as either a marginal pseudo-negative
triple or a negative triple, based on the judgment of the KGE model itself.
Second, it selects an appropriate mixup partner for the current triple to
synthesize a partially positive or a harder negative triple. Experimental
results on the knowledge graph completion task show that the proposed DeMix is
superior to other negative sampling techniques, enabling the corresponding KGE
models to converge faster and achieve better link prediction results.
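As a rough illustration of the two steps above, here is a minimal sketch in PyTorch; the scoring function, the margin threshold tau, and the mixup coefficient lam are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def kge_score(h, r, t):
    # TransE-style plausibility score (higher = more plausible);
    # a stand-in for whatever KGE model is being trained.
    return -torch.norm(h + r - t, p=1, dim=-1)

def demix_refine(h, r, t_pos, t_neg, tau=0.5, lam=0.8):
    """Sketch of DeMix's two steps for a batch of sampled negatives.

    1) Use the KGE model's own scores to split sampled triples into
       marginal pseudo-negatives (suspiciously plausible, so possibly
       false negatives) and ordinary negatives.
    2) Mix each triple with an appropriate partner: pseudo-negatives
       with the positive tail (partially positive triple), ordinary
       negatives with another negative (harder negative).
    """
    s_pos = kge_score(h, r, t_pos)          # scores of observed triples
    s_neg = kge_score(h, r, t_neg)          # scores of sampled triples

    # Step 1: a sampled triple scoring close to the positive is treated
    # as a marginal pseudo-negative (it may be a missing true fact).
    is_pseudo = s_neg > (s_pos - tau)

    # Step 2: choose a mixup partner per triple (illustrative choice).
    shuffled_neg = t_neg[torch.randperm(t_neg.size(0))]
    partner = torch.where(is_pseudo.unsqueeze(-1), t_pos, shuffled_neg)

    # Linear interpolation in embedding space (mixup).
    t_mixed = lam * t_neg + (1.0 - lam) * partner
    return t_mixed, is_pseudo
```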
Related papers
- Contrastive Learning with Negative Sampling Correction [52.990001829393506]
We propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL)
PUCL treats the generated negative samples as unlabeled samples and uses information from positive samples to correct bias in contrastive loss.
PUCL can be applied to general contrastive learning problems and outperforms state-of-the-art methods on various image and graph classification tasks.
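A minimal sketch of what a PU-style correction to the contrastive loss can look like, following the general debiased-contrastive form; the class prior `prior` and the clamping constant are assumptions, not necessarily PUCL's exact loss.

```python
import math
import torch
import torch.nn.functional as F

def pu_corrected_infonce(anchor, positive, negatives, prior=0.1, temp=0.1):
    """PU-style corrected InfoNCE (assumed form).

    anchor, positive: (B, d); negatives: (B, K, d).
    Negatives are treated as unlabeled: a fraction `prior` of them is
    assumed to actually be positive, so the positive similarity is used
    to debias the negative term, clamped to stay numerically valid.
    """
    a = F.normalize(anchor, dim=-1)
    p = F.normalize(positive, dim=-1)
    n = F.normalize(negatives, dim=-1)

    exp_pos = ((a * p).sum(-1) / temp).exp()                   # (B,)
    exp_neg = (torch.einsum('bd,bkd->bk', a, n) / temp).exp()  # (B, K)

    k = negatives.size(1)
    neg_term = torch.clamp(
        (exp_neg.sum(-1) - k * prior * exp_pos) / (1.0 - prior),
        min=k * math.exp(-1.0 / temp),
    )
    return -torch.log(exp_pos / (exp_pos + neg_term)).mean()
```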
arXiv Detail & Related papers (2024-01-13T11:18:18Z)
- DropMix: Better Graph Contrastive Learning with Harder Negative Samples [6.242575753642188]
Mixup has been introduced to synthesize hard negative samples in graph contrastive learning (GCL)
We propose a novel method DropMix to synthesize harder negative samples.
arXiv Detail & Related papers (2023-10-15T07:45:30Z)
- Your Negative May not Be True Negative: Boosting Image-Text Matching with False Negative Elimination [62.18768931714238]
We propose a novel False Negative Elimination (FNE) strategy to select negatives via sampling.
The results demonstrate the superiority of our proposed false negative elimination strategy.
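A hedged sketch of what similarity-based sampling with false negative elimination can look like; the quantile cutoff `fn_quantile` and the softmax weighting are assumptions, not FNE's exact procedure.

```python
import torch

def fne_sample(anchor, candidates, num_neg=8, fn_quantile=0.95):
    """Sketch of a false-negative-aware negative sampler (assumed form).

    Candidates that look *too* similar to the anchor are treated as
    probable false negatives and excluded; the rest are sampled with
    probability increasing in similarity, i.e. hard but likely true
    negatives are preferred.
    """
    sim = candidates @ anchor                          # (n,)
    cutoff = torch.quantile(sim, fn_quantile)
    weights = torch.where(sim < cutoff,
                          sim.softmax(0),
                          torch.zeros_like(sim))       # zero out likely FNs
    idx = torch.multinomial(weights, num_neg, replacement=False)
    return candidates[idx]
```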
arXiv Detail & Related papers (2023-08-08T16:31:43Z)
- HaSa: Hardness and Structure-Aware Contrastive Knowledge Graph Embedding [2.395887395376882]
We consider a contrastive learning approach to knowledge graph embedding (KGE) via InfoNCE.
We argue that the generation of high-quality (i.e., hard) negative triples might lead to an increase in false negative triples.
To mitigate the impact of false negative triples during the generation of hard negative triples, we propose the Hardness and Structure-aware (HaSa) contrastive KGE method.
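For reference, a minimal InfoNCE loss over triple scores (one observed triple contrasted against K corrupted ones); this is the generic objective the summary refers to, not HaSa's full hardness- and structure-aware variant.

```python
import torch
import torch.nn.functional as F

def infonce_kge_loss(pos_score, neg_scores, temp=1.0):
    """InfoNCE over triple scores: one positive against K negatives.

    pos_score: (B,) scores f(h, r, t) of observed triples.
    neg_scores: (B, K) scores of corrupted triples.
    """
    logits = torch.cat([pos_score.unsqueeze(1), neg_scores], dim=1) / temp
    # The positive triple sits at index 0 of each row.
    labels = torch.zeros(pos_score.size(0), dtype=torch.long,
                         device=logits.device)
    return F.cross_entropy(logits, labels)
```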
arXiv Detail & Related papers (2023-05-17T20:46:18Z)
- Entity Aware Negative Sampling with Auxiliary Loss of False Negative Prediction for Knowledge Graph Embedding [0.0]
We propose a novel method called Entity Aware Negative Sampling (EANS)
EANS samples negative entities that resemble the positive one by applying a Gaussian distribution over an aligned entity index space.
The proposed method can generate high-quality negative samples regardless of negative sample size and effectively mitigate the influence of false negative samples.
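A minimal sketch of Gaussian sampling over an aligned entity index space; it assumes entity indices have already been re-ordered so that similar entities sit at nearby indices, and `sigma` is an illustrative bandwidth.

```python
import torch

def eans_sample(pos_index, num_entities, num_neg=8, sigma=50.0):
    """Sketch of Gaussian sampling over an aligned entity index space.

    Because similar entities are assumed to be adjacent in the aligned
    index space, drawing offsets from a Gaussian centred on the positive
    entity's index yields negatives that resemble the positive one.
    """
    offsets = torch.randn(num_neg) * sigma
    idx = (pos_index + offsets.round().long()) % num_entities  # wrap around
    idx[idx == pos_index] = (pos_index + 1) % num_entities     # avoid the positive
    return idx
```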
arXiv Detail & Related papers (2022-10-12T14:27:51Z)
- MixKG: Mixing for harder negative samples in knowledge graph [33.4379457065033]
Knowledge graph embedding (KGE) aims to represent entities and relations as low-dimensional vectors for many real-world applications.
We introduce an inexpensive but effective method called MixKG to generate harder negative samples for knowledge graphs.
Experiments on two public datasets and four classical KGE methods show MixKG is superior to previous negative sampling algorithms.
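A hedged sketch of the mixing idea: keep the hardest sampled negatives under the current model and linearly mix random pairs of them; the selection criterion and coefficient `alpha` are assumptions rather than MixKG's exact recipe.

```python
import torch

def mixkg_negatives(neg_entity_emb, neg_scores, k=8, alpha=0.5):
    """Sketch of MixKG-style mixing.

    neg_entity_emb: (n, d) embeddings of sampled negative entities.
    neg_scores: (n,) their scores under the current KGE model.
    Keeps the k highest-scoring (hardest) negatives, then mixes random
    pairs of them to synthesize even harder virtual negatives.
    """
    hard = neg_entity_emb[neg_scores.topk(k).indices]   # (k, d)
    partner = hard[torch.randperm(k)]
    return alpha * hard + (1 - alpha) * partner
```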
arXiv Detail & Related papers (2022-02-19T13:31:06Z)
- Contrastive Attraction and Contrastive Repulsion for Representation Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervised manner, where the encoder contrasts each positive sample against multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets, such as ImageNet.
We propose a doubly CL strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between positive and negative groups.
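One way to read the doubly contrastive strategy, sketched under assumptions (the within-group weighting and InfoNCE form are illustrative, not the paper's exact objective):

```python
import torch
import torch.nn.functional as F

def group_contrast(anchor, pos_group, neg_group, temp=0.1):
    """Sketch of a doubly contrastive objective (assumed form): each
    positive is weighted by its agreement within the positive group,
    each negative by its agreement within the negative group, then the
    anchor is contrasted against the two re-weighted groups.
    """
    a = F.normalize(anchor, dim=-1)          # (d,)
    p = F.normalize(pos_group, dim=-1)       # (P, d)
    n = F.normalize(neg_group, dim=-1)       # (N, d)

    # Within-group comparison: samples typical of their own group
    # receive larger weights (softmax over mean intra-group similarity).
    w_pos = (p @ p.t()).mean(1).softmax(0)   # (P,)
    w_neg = (n @ n.t()).mean(1).softmax(0)   # (N,)

    # Between-group contrast: weighted positive agreement vs. weighted
    # negative agreement, in InfoNCE form.
    pos_term = (w_pos * (p @ a / temp).exp()).sum()
    neg_term = (w_neg * (n @ a / temp).exp()).sum()
    return -torch.log(pos_term / (pos_term + neg_term))
```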
arXiv Detail & Related papers (2021-05-08T17:25:08Z)
- Understanding Negative Sampling in Graph Representation Learning [87.35038268508414]
We show that negative sampling is as important as positive sampling in determining the optimization objective and the resulting variance.
We propose MCNS, which approximates the positive distribution with a self-contrast approximation and accelerates negative sampling via Metropolis-Hastings.
We evaluate our method on 5 datasets that cover extensive downstream graph learning tasks, including link prediction, node classification and personalized recommendation.
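A minimal sketch of Metropolis-Hastings negative sampling with a uniform (symmetric) proposal, in which case the acceptance ratio reduces to p(y')/p(y); `unnorm_p` stands in for the self-contrast approximation of the positive distribution.

```python
import torch

def mh_negative_sampler(unnorm_p, num_nodes, chain_len=100, init=0):
    """Sketch of Metropolis-Hastings negative sampling: walk a Markov
    chain whose stationary distribution is proportional to `unnorm_p`,
    so the positive distribution never needs to be normalized over all
    nodes. With a uniform proposal, the acceptance probability is
    min(1, p(y') / p(y)).
    """
    cur = init
    samples = []
    for _ in range(chain_len):
        proposal = torch.randint(num_nodes, (1,)).item()
        accept = min(1.0, unnorm_p(proposal) / unnorm_p(cur))
        if torch.rand(1).item() < accept:
            cur = proposal
        samples.append(cur)     # chain states serve as negatives
    return samples
```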
arXiv Detail & Related papers (2020-05-20T06:25:21Z)
- Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (KGPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
KGPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
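A hedged sketch of the rollout such an agent might perform; `neighbors` and `policy_scores` are hypothetical helpers standing in for the knowledge-aware components described above.

```python
import torch

def kg_policy_rollout(policy_scores, start_item, neighbors, hops=2):
    """Sketch of a KGPolicy-style rollout (assumed interface): starting
    from the positive item, the agent repeatedly scores the current
    node's KG neighbors with a learned policy and moves to a sampled
    neighbor; the final node is returned as the negative to train on.

    neighbors[v] -> LongTensor of KG neighbors of node v (assumed).
    policy_scores(v, cands) -> unnormalized scores (assumed).
    """
    cur = start_item
    for _ in range(hops):
        cands = neighbors[cur]
        probs = policy_scores(cur, cands).softmax(dim=0)
        cur = cands[torch.multinomial(probs, 1)].item()
    return cur
```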
arXiv Detail & Related papers (2020-03-12T12:44:30Z)