Negative Sampling for Recommendation
- URL: http://arxiv.org/abs/2204.06520v1
- Date: Sat, 2 Apr 2022 09:50:19 GMT
- Title: Negative Sampling for Recommendation
- Authors: Bin Liu and Bang Wang
- Abstract summary: How to effectively sample high-quality negative instances is important for training a recommendation model well.
We argue that a high-quality negative should be both informative and unbiased.
- Score: 7.758275614033198
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: How to effectively sample high-quality negative instances is important for
training a recommendation model well. We argue that a high-quality negative
should be both \textit{informative} and \textit{unbiased}. Although
previous studies have proposed approaches to address informativeness
in negative sampling, little has been done to discriminate false negatives from
true negatives for unbiased negative sampling, let alone taking both into
consideration. This paper first adopts a parameter-learning perspective to
analyze negative informativeness and unbiasedness in loss-gradient-based model
training. We argue that both negative sampling and collaborative filtering
include an implicit task of negative classification, from which we report an
insightful and beneficial finding about the order relation in predicted
negatives' scores. Based on this finding, and by regarding negatives as random
variables, we derive the class-conditional density of true negatives and
that of false negatives. We also design a Bayesian classifier for negative
classification, from which we define a quantitative unbiasedness measure for
negatives. Finally, we propose to use the harmonic mean of informativeness and
unbiasedness to sample high-quality negatives. Experimental studies validate
the superiority of our negative sampling algorithm over its peers in terms of
better sampling quality and better recommendation performance.
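To make the abstract's sampling rule concrete, here is a minimal sketch (not the authors' code): it treats a candidate negative's predicted score as an informativeness proxy and a classifier's posterior probability of being a true negative as an unbiasedness proxy, then samples in proportion to their harmonic mean. All function and variable names are illustrative assumptions.
```python
import numpy as np

def harmonic_quality(informativeness, unbiasedness, eps=1e-12):
    """Harmonic mean of the two criteria; it is high only when both are high."""
    return 2.0 * informativeness * unbiasedness / (informativeness + unbiasedness + eps)

def sample_negative(scores, p_true_negative, rng=None):
    """Pick one candidate negative.

    scores          : model-predicted scores of candidate negatives
                      (higher = harder = more informative), assumed in [0, 1]
    p_true_negative : posterior P(true negative | score) from some classifier,
                      standing in for the paper's Bayesian unbiasedness measure
    """
    rng = rng or np.random.default_rng()
    quality = harmonic_quality(np.asarray(scores), np.asarray(p_true_negative))
    probs = quality / quality.sum()
    return rng.choice(len(scores), p=probs)

# toy usage: 5 candidate negatives
scores = [0.9, 0.7, 0.4, 0.2, 0.1]      # informativeness proxies
p_tn   = [0.3, 0.8, 0.9, 0.95, 0.99]    # unbiasedness proxies
idx = sample_negative(scores, p_tn)
```
The harmonic mean matches the motivation above: a sampled negative should be hard for the current model and, at the same time, likely to be a true negative.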
Related papers
- Contrastive Learning with Negative Sampling Correction [52.990001829393506]
We propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL).
PUCL treats the generated negative samples as unlabeled samples and uses information from positive samples to correct bias in contrastive loss.
PUCL can be applied to general contrastive learning problems and outperforms state-of-the-art methods on various image and graph classification tasks.
arXiv Detail & Related papers (2024-01-13T11:18:18Z)
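A hedged sketch of the idea in the PUCL summary above: treat sampled negatives as unlabeled and subtract an estimated positive contamination from the negative term of InfoNCE. This follows a generic positive-unlabeled correction, not PUCL's exact estimator; the class prior `class_prior` and the clamping constant are assumptions.
```python
import math
import torch
import torch.nn.functional as F

def pu_corrected_infonce(anchor, positive, unlabeled, tau=0.1, class_prior=0.1):
    """InfoNCE whose negative term gets a positive-unlabeled correction.

    anchor, positive : (d,) embeddings
    unlabeled        : (K, d) embeddings of sampled "negatives", treated as unlabeled
    class_prior      : assumed probability that an unlabeled sample is actually positive
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    unlabeled = F.normalize(unlabeled, dim=-1)

    pos = torch.exp(anchor @ positive / tau)    # positive similarity term
    unl = torch.exp(unlabeled @ anchor / tau)   # (K,) unlabeled similarity terms

    # subtract the estimated positive contamination from the unlabeled sum,
    # then clamp so the corrected negative term stays positive
    k = unlabeled.shape[0]
    neg = (unl.sum() - class_prior * k * pos) / (1.0 - class_prior)
    neg = torch.clamp(neg, min=k * math.exp(-1.0 / tau))

    return -torch.log(pos / (pos + neg))
```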
- Your Negative May not Be True Negative: Boosting Image-Text Matching with False Negative Elimination [62.18768931714238]
We propose a novel False Negative Elimination (FNE) strategy to select negatives via sampling.
The results demonstrate the superiority of our proposed false negative elimination strategy.
arXiv Detail & Related papers (2023-08-08T16:31:43Z)
- SimANS: Simple Ambiguous Negatives Sampling for Dense Text Retrieval [126.22182758461244]
We show that according to the measured relevance scores, the negatives ranked around the positives are generally more informative and less likely to be false negatives.
We propose a simple ambiguous negatives sampling method, SimANS, which incorporates a new sampling probability distribution to sample more ambiguous negatives.
arXiv Detail & Related papers (2022-10-21T07:18:05Z)
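One plausible reading of the SimANS summary above, as a sketch: sample negatives with probability peaked around the positive's relevance score, so that very easy negatives and very hard (likely false) negatives are both down-weighted. The hyper-parameters `a` and `b` are illustrative assumptions.
```python
import numpy as np

def ambiguous_negative_probs(neg_scores, pos_score, a=1.0, b=0.0):
    """Put more probability mass on negatives whose relevance score is close to
    the positive's (ambiguous negatives), less on very easy or very hard ones."""
    neg_scores = np.asarray(neg_scores, dtype=float)
    weights = np.exp(-a * (neg_scores - pos_score - b) ** 2)
    return weights / weights.sum()

# toy usage: negatives scored by a retriever, positive scored 0.8
probs = ambiguous_negative_probs([0.95, 0.82, 0.78, 0.40, 0.10], pos_score=0.8)
neg_idx = np.random.default_rng().choice(5, p=probs)
```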
- Entity Aware Negative Sampling with Auxiliary Loss of False Negative Prediction for Knowledge Graph Embedding [0.0]
We propose a novel method called Entity Aware Negative Sampling (EANS).
EANS samples negative entities that resemble the positive one by adopting a Gaussian distribution over the aligned entity index space.
The proposed method can generate high-quality negative samples regardless of negative sample size and effectively mitigate the influence of false negative samples.
arXiv Detail & Related papers (2022-10-12T14:27:51Z)
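A hedged sketch of the EANS summary above: assuming entities have already been aligned so that similar entities receive nearby indices, a negative is drawn from a Gaussian centred on the positive entity's index. The alignment step and the spread `sigma` are assumptions, not the paper's exact procedure.
```python
import numpy as np

def sample_similar_entity(positive_index, num_entities, sigma=50.0, rng=None):
    """Sample a negative entity index from a Gaussian centred on the positive
    entity's position in an index space aligned by embedding similarity,
    so nearby indices correspond to similar entities."""
    rng = rng or np.random.default_rng()
    while True:
        idx = int(round(rng.normal(loc=positive_index, scale=sigma))) % num_entities
        if idx != positive_index:   # never return the positive itself
            return idx

neg = sample_similar_entity(positive_index=1234, num_entities=40_000)
```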
- Rethinking InfoNCE: How Many Negative Samples Do You Need? [54.146208195806636]
We study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework.
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.
arXiv Detail & Related papers (2021-05-27T08:38:29Z)
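For reference, a minimal InfoNCE loss with K explicit negatives; the paper's training effectiveness function for choosing K is not reproduced here, only the loss whose dependence on K it analyzes.
```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, tau=0.07):
    """InfoNCE loss with K explicit negatives.

    anchor, positive : (d,) embeddings
    negatives        : (K, d) embeddings; K = negatives.shape[0] is the
                       sampling ratio whose optimal value the paper studies
    """
    anchor = F.normalize(anchor, dim=-1)
    cands = F.normalize(torch.cat([positive.unsqueeze(0), negatives], dim=0), dim=-1)
    logits = cands @ anchor / tau   # (K+1,), index 0 is the positive
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```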
- Understanding Negative Sampling in Graph Representation Learning [87.35038268508414]
We show that negative sampling is as important as positive sampling in determining the optimization objective and the resulting variance.
We propose MCNS, which approximates the positive distribution with a self-contrast approximation and accelerates negative sampling via Metropolis-Hastings.
We evaluate our method on 5 datasets that cover extensive downstream graph learning tasks, including link prediction, node classification and personalized recommendation.
arXiv Detail & Related papers (2020-05-20T06:25:21Z)
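A generic Metropolis-Hastings sketch consistent with the MCNS summary above, not the paper's exact procedure: the chain's target density is taken proportional to the model's own score (a self-contrast style target) and a uniform proposal is used. `score_fn`, the proposal, and the chain length are assumptions.
```python
import numpy as np

def mh_negative_sampler(score_fn, num_items, steps=10, rng=None):
    """Metropolis-Hastings chain whose stationary distribution is proportional
    to the model's own (unnormalised) score for an item, with a uniform
    proposal over the item catalogue.

    score_fn(item) should return a positive density value, e.g. exp of the
    model's predicted score for the current user.
    """
    rng = rng or np.random.default_rng()
    current = rng.integers(num_items)
    for _ in range(steps):
        proposal = rng.integers(num_items)
        # uniform proposal => acceptance ratio reduces to a density ratio
        accept = min(1.0, score_fn(proposal) / max(score_fn(current), 1e-12))
        if rng.random() < accept:
            current = proposal
    return current

# toy usage with a made-up score function
scores = np.random.rand(1000)
neg_item = mh_negative_sampler(lambda i: np.exp(scores[i]), num_items=1000)
```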
- Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (kgPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
kgPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
arXiv Detail & Related papers (2020-03-12T12:44:30Z)