Generating Negative Samples for Sequential Recommendation
- URL: http://arxiv.org/abs/2208.03645v1
- Date: Sun, 7 Aug 2022 05:44:13 GMT
- Title: Generating Negative Samples for Sequential Recommendation
- Authors: Yongjun Chen, Jia Li, Zhiwei Liu, Nitish Shirish Keskar, Huan Wang,
Julian McAuley, Caiming Xiong
- Abstract summary: We propose to Generate Negative Samples (items) for Sequential Recommendation (SR).
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
- Score: 83.60655196391855
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To make Sequential Recommendation (SR) successful, recent works focus on
designing effective sequential encoders, fusing side information, and mining
extra positive self-supervision signals. The strategy of sampling negative
items at each time step is less explored. Due to the dynamics of users'
interests and model updates during training, considering randomly sampled items
from a user's non-interacted item set as negatives can be uninformative. As a
result, the model will inaccurately learn user preferences toward items.
Identifying informative negatives is challenging because informative negative
items are tied with both dynamically changed interests and model parameters
(and the sampling process should also be efficient). To this end, we propose to
Generate Negative Samples (items) for SR (GenNi). A negative item is sampled at
each time step based on the current SR model's learned user preferences toward
items. An efficient implementation is proposed to further accelerate the
generation process, making it scalable to large-scale recommendation tasks.
Extensive experiments on four public datasets verify the importance of
providing high-quality negative samples for SR and demonstrate the
effectiveness and efficiency of GenNi.
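The core idea, sampling a negative at each time step in proportion to the model's current predicted preference scores, can be sketched as follows. This is a minimal illustration, not the paper's accelerated implementation; the softmax weighting and function signature are assumptions.

```python
import numpy as np

def sample_negatives(scores, interacted, num_items, rng=None):
    """Sample one negative item per time step, weighted by the model's
    current predicted preference scores (a sketch of the idea behind
    GenNi; the softmax weighting here is an assumption).

    scores:     (T, num_items) array of model scores at each time step
    interacted: set of item ids the user has already interacted with
    """
    rng = rng if rng is not None else np.random.default_rng()
    negatives = []
    for t in range(scores.shape[0]):
        probs = np.exp(scores[t] - scores[t].max())  # numerically stable softmax
        probs[list(interacted)] = 0.0                # never sample interacted items
        probs /= probs.sum()
        negatives.append(int(rng.choice(num_items, p=probs)))
    return negatives
```

Because the weights are recomputed from the current model scores, harder (higher-scored) non-interacted items are sampled more often as training progresses.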
Related papers
- Curriculum Negative Mining For Temporal Networks [33.70909189731187]
Temporal networks are effective in capturing the evolving interactions of networks over time.
CurNM is a model-aware curriculum learning framework that adaptively adjusts the difficulty of negative samples.
Our method outperforms baseline methods by a significant margin.
arXiv Detail & Related papers (2024-07-24T07:55:49Z)
- Better Sampling of Negatives for Distantly Supervised Named Entity Recognition [39.264878763160766]
We propose a simple and straightforward approach for selecting the top negative samples that have high similarities with all the positive samples for training.
Our method achieves consistent performance improvements on four distantly supervised NER datasets.
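A minimal sketch of this selection criterion, assuming cosine similarity over embeddings (the paper's exact scoring may differ):

```python
import numpy as np

def top_negatives(neg_embs, pos_embs, k):
    """Select the k candidate negatives most similar, on average, to all
    positive samples. Cosine similarity over embedding rows is an
    assumption made for this sketch, not a detail from the paper.
    """
    neg = neg_embs / np.linalg.norm(neg_embs, axis=1, keepdims=True)
    pos = pos_embs / np.linalg.norm(pos_embs, axis=1, keepdims=True)
    avg_sim = (neg @ pos.T).mean(axis=1)   # mean similarity to the positives
    return np.argsort(-avg_sim)[:k]        # indices of the top-k negatives
```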
arXiv Detail & Related papers (2023-05-22T15:35:39Z)
- SimANS: Simple Ambiguous Negatives Sampling for Dense Text Retrieval [126.22182758461244]
We show that according to the measured relevance scores, the negatives ranked around the positives are generally more informative and less likely to be false negatives.
We propose a simple ambiguous negatives sampling method, SimANS, which incorporates a new sampling probability distribution to sample more ambiguous negatives.
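A sampling distribution of this shape, peaked at negatives scored near the positive, can be sketched as follows; the Gaussian-shaped form and the hyperparameters `a` and `b` follow the SimANS idea, but treat this as an illustration rather than the reference implementation.

```python
import numpy as np

def simans_weights(neg_scores, pos_score, a=1.0, b=0.0):
    """Sampling weights peaked at 'ambiguous' negatives whose relevance
    scores lie near the positive's: p_i proportional to
    exp(-a * (s_i - s_pos - b)^2). a and b are tunable hyperparameters.
    """
    s = np.asarray(neg_scores, dtype=float)
    w = np.exp(-a * (s - pos_score - b) ** 2)
    return w / w.sum()
```

Negatives scored far below the positive get vanishing weight (likely uninformative), as do those scored far above it (likely false negatives).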
arXiv Detail & Related papers (2022-10-21T07:18:05Z)
- ELECRec: Training Sequential Recommenders as Discriminators [94.93227906678285]
Sequential recommendation is often considered as a generative task, i.e., training a sequential encoder to generate the next item of a user's interests.
We propose to train the sequential recommenders as discriminators rather than generators.
Our method trains a discriminator to distinguish whether a sampled item is a 'real' target item or not.
arXiv Detail & Related papers (2022-04-05T06:19:45Z)
- Rethinking InfoNCE: How Many Negative Samples Do You Need? [54.146208195806636]
We study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework.
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.
arXiv Detail & Related papers (2021-05-27T08:38:29Z)
- Negative Data Augmentation [127.28042046152954]
We show that negative data augmentation samples provide information on the support of the data distribution.
We introduce a new GAN training objective where we use NDA as an additional source of synthetic data for the discriminator.
Empirically, models trained with our method achieve improved conditional/unconditional image generation along with improved anomaly detection capabilities.
arXiv Detail & Related papers (2021-02-09T20:28:35Z)
- Sampler Design for Implicit Feedback Data by Noisy-label Robust Learning [32.76804332450971]
We design an adaptive sampler based on noisy-label robust learning for implicit feedback data.
We predict users' preferences with the model and learn it by maximizing likelihood of observed data labels.
We then consider the risk of these noisy labels, and propose a Noisy-label Robust BPO.
arXiv Detail & Related papers (2020-06-28T05:31:53Z)
- Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (kgPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
kgPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
arXiv Detail & Related papers (2020-03-12T12:44:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.