Diverse Negative Sampling for Implicit Collaborative Filtering
- URL: http://arxiv.org/abs/2508.14468v1
- Date: Wed, 20 Aug 2025 06:48:54 GMT
- Title: Diverse Negative Sampling for Implicit Collaborative Filtering
- Authors: Yueqing Xuan, Kacper Sokol, Mark Sanderson, Jeffrey Chan
- Abstract summary: We propose Diverse Negative Sampling (DivNS) to account for diversity in negative training data. DivNS finds hard negative items with large preference scores and constructs user-specific caches. It then selects a diverse subset of negative items from the cache while ensuring dissimilarity from the user's hard negatives. The resulting synthetic negatives are both informative and diverse, enabling recommenders to learn a broader item space and improve their generalisability.
- Score: 26.57812122315108
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implicit collaborative filtering recommenders are usually trained to learn user positive preferences. Negative sampling, which selects informative negative items to form negative training data, plays a crucial role in this process. Since items are often clustered in the latent space, existing negative sampling strategies normally oversample negative items from the dense regions. This leads to homogeneous negative data and limited model expressiveness. In this paper, we propose Diverse Negative Sampling (DivNS), a novel approach that explicitly accounts for diversity in negative training data during the negative sampling process. DivNS first finds hard negative items with large preference scores and constructs user-specific caches that store unused but highly informative negative samples. Then, its diversity-augmented sampler selects a diverse subset of negative items from the cache while ensuring dissimilarity from the user's hard negatives. Finally, a synthetic negatives generator combines the selected diverse negatives with hard negatives to form more effective training data. The resulting synthetic negatives are both informative and diverse, enabling recommenders to learn a broader item space and improve their generalisability. Extensive experiments on four public datasets demonstrate the effectiveness of DivNS in improving recommendation quality while maintaining computational efficiency.
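The abstract sketches a three-stage pipeline: mine hard negatives, draw a diverse subset from a per-user cache, then combine the two into synthetic negatives. The NumPy sketch below is a minimal illustration of that flow; the greedy farthest-point selection, the cache size, and the mixing weight `alpha` are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def mine_hard_negatives(user_vec, item_embs, candidates, k):
    """Rank candidates by preference score (dot product); the top-k become
    hard negatives, and the next 3k feed the user-specific cache
    (the cache size is an assumption)."""
    scores = item_embs[candidates] @ user_vec
    order = np.argsort(-scores)
    return candidates[order[:k]], candidates[order[k : 4 * k]]

def select_diverse(cache, hard, item_embs, m):
    """Greedy farthest-point selection: repeatedly pick the cached item with
    the lowest maximum dot-product similarity to the hard negatives and to
    items already chosen. An assumed stand-in for the diversity-augmented
    sampler."""
    chosen, ref = [], list(hard)
    for _ in range(min(m, len(cache))):
        sims = item_embs[cache] @ item_embs[ref].T   # (|cache|, |ref|)
        best = cache[np.argmin(sims.max(axis=1))]
        chosen.append(best)
        ref.append(best)
        cache = cache[cache != best]
    return np.array(chosen)

def synthesize(hard, diverse, item_embs, alpha=0.5):
    """Mix hard and diverse negatives in embedding space; `alpha` is a
    hypothetical mixing weight, not a value reported in the paper."""
    n = min(len(hard), len(diverse))
    return alpha * item_embs[hard[:n]] + (1 - alpha) * item_embs[diverse[:n]]

# Toy usage: one user, 100 items, 16-d embeddings.
items = rng.normal(size=(100, 16))
user = rng.normal(size=16)
hard, cache = mine_hard_negatives(user, items, np.arange(100), k=5)
diverse = select_diverse(cache, hard, items, m=5)
neg_vecs = synthesize(hard, diverse, items)
print(neg_vecs.shape)  # (5, 16)
```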
Related papers
- SyNeg: LLM-Driven Synthetic Hard-Negatives for Dense Retrieval [45.971786380884126]
The performance of dense retrieval (DR) is significantly influenced by the quality of negative sampling. Recent advancements in large language models (LLMs) offer an innovative solution by generating contextually rich and diverse negative samples. In this work, we present a framework that harnesses LLMs to synthesize high-quality hard negative samples.
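A schematic of LLM-driven hard-negative synthesis might look as follows; `call_llm` is a hypothetical stand-in for whatever completion API is used, and the prompt wording is illustrative rather than SyNeg's actual template.

```python
def call_llm(prompt: str) -> str:
    """Hypothetical LLM client; replace with an actual completion API."""
    raise NotImplementedError

def synthesize_hard_negative(query: str, positive_doc: str) -> str:
    """Ask the LLM for a passage that is topically close to the positive
    document but does not answer the query, i.e. a hard negative."""
    prompt = (
        f"Query: {query}\n"
        f"Relevant passage: {positive_doc}\n"
        "Write a passage on the same topic that looks relevant "
        "but does NOT answer the query."
    )
    return call_llm(prompt)
```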
arXiv Detail & Related papers (2024-12-23T03:49:00Z)
- Contrastive Learning with Negative Sampling Correction [52.990001829393506]
We propose a novel contrastive learning method named Positive-Unlabeled Contrastive Learning (PUCL).
PUCL treats the generated negative samples as unlabeled samples and uses information from positive samples to correct bias in contrastive loss.
PUCL can be applied to general contrastive learning problems and outperforms state-of-the-art methods on various image and graph classification tasks.
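PUCL's core move, per the summary, is to treat sampled negatives as unlabeled and use positive information to debias the loss. A minimal sketch of that positive-unlabeled correction is below, using an assumed class prior `tau`; it follows the generic debiased-contrastive recipe rather than PUCL's exact estimator.

```python
import torch
import torch.nn.functional as F

def pu_corrected_loss(anchor, positive, unlabeled, tau=0.1, t=0.5):
    """Contrastive loss treating sampled 'negatives' as unlabeled and
    subtracting the expected positive contamination (class prior `tau`).
    A sketch of the PU idea, not PUCL's exact correction."""
    a = F.normalize(anchor, dim=-1)                            # (d,)
    pos = torch.exp(F.normalize(positive, dim=-1) @ a / t)
    unl = torch.exp(F.normalize(unlabeled, dim=-1) @ a / t)    # (n,)
    # Estimated mean true-negative term, floored to stay positive.
    neg = (unl.mean() - tau * pos) / (1.0 - tau)
    neg = torch.maximum(neg, torch.exp(torch.tensor(-1.0 / t)))
    return -torch.log(pos / (pos + unl.numel() * neg))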
arXiv Detail & Related papers (2024-01-13T11:18:18Z)
- Your Negative May not Be True Negative: Boosting Image-Text Matching with False Negative Elimination [62.18768931714238]
We propose a novel False Negative Elimination (FNE) strategy to select negatives via sampling.
The results demonstrate the superiority of our proposed false negative elimination strategy.
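A toy version of similarity-aware negative sampling in this spirit: downweight the most similar candidates (likely false negatives) while still preferring informative, moderately similar ones. The Gaussian weighting and its parameters are assumptions for illustration, not FNE's actual probability model.

```python
import numpy as np

def sample_negatives(sims, n, rng, mu=0.4, sigma=0.15):
    """Sample negatives with weights that decay for the most similar
    candidates (likely false negatives) and for the least similar
    (uninformative) ones; mu and sigma are illustrative."""
    w = np.exp(-((sims - mu) ** 2) / (2 * sigma**2))
    w /= w.sum()
    return rng.choice(len(sims), size=n, replace=False, p=w)

rng = np.random.default_rng(0)
sims = rng.uniform(0, 1, size=200)  # candidate-anchor similarities
idx = sample_negatives(sims, n=10, rng=rng)
```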
arXiv Detail & Related papers (2023-08-08T16:31:43Z)
- Better Sampling of Negatives for Distantly Supervised Named Entity Recognition [39.264878763160766]
We propose a simple and straightforward approach for selecting the top negative samples that have high similarities with all the positive samples for training.
Our method achieves consistent performance improvements on four distantly supervised NER datasets.
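A minimal sketch of that selection rule, assuming span embeddings and a mean-cosine criterion (both assumptions layered on the summary above):

```python
import numpy as np

def top_negative_spans(neg_embs, pos_embs, k):
    """Rank negative-span embeddings by mean cosine similarity to all
    positive spans and keep the top-k for training."""
    n = neg_embs / np.linalg.norm(neg_embs, axis=1, keepdims=True)
    p = pos_embs / np.linalg.norm(pos_embs, axis=1, keepdims=True)
    mean_sim = (n @ p.T).mean(axis=1)
    return np.argsort(-mean_sim)[:k]
```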
arXiv Detail & Related papers (2023-05-22T15:35:39Z)
- SimANS: Simple Ambiguous Negatives Sampling for Dense Text Retrieval [126.22182758461244]
We show that according to the measured relevance scores, the negatives ranked around the positives are generally more informative and less likely to be false negatives.
We propose a simple ambiguous negatives sampling method, SimANS, which incorporates a new sampling probability distribution to sample more ambiguous negatives.
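The SimANS distribution concentrates sampling mass on candidates whose relevance scores sit near the positive's, of the form p_i ∝ exp(-a(s_i - s_pos - b)^2); a sketch of that weighting is below, with `a` and `b` as the method's hyper-parameters.

```python
import numpy as np

def simans_weights(neg_scores, pos_score, a=1.0, b=0.0):
    """Weights peaked at negatives whose score is close to the positive's:
    p_i ∝ exp(-a * (s_i - s_pos - b)^2)."""
    w = np.exp(-a * (neg_scores - pos_score - b) ** 2)
    return w / w.sum()

rng = np.random.default_rng(0)
scores = rng.normal(size=500)  # retriever scores for candidate negatives
p = simans_weights(scores, pos_score=1.2)
sampled = rng.choice(500, size=8, replace=False, p=p)
```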
arXiv Detail & Related papers (2022-10-21T07:18:05Z)
- Generating Negative Samples for Sequential Recommendation [83.60655196391855]
We propose to Generate Negative Samples (items) for Sequential Recommendation (SR).
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
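A minimal sketch of per-step negative sampling driven by the current model's preference scores; the softmax form and temperature `gamma` are illustrative assumptions, not the paper's exact sampler.

```python
import numpy as np

def sample_step_negative(pref_scores, interacted, rng, gamma=1.0):
    """At one time step, sample a negative item with probability
    proportional to the model's current preference score (sharpened by
    `gamma`), excluding items the user has interacted with."""
    probs = np.exp(gamma * pref_scores)
    probs[list(interacted)] = 0.0
    probs /= probs.sum()
    return rng.choice(len(pref_scores), p=probs)
```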
arXiv Detail & Related papers (2022-08-07T05:44:13Z)
- Negative Sampling for Recommendation [7.758275614033198]
How to effectively sample high-quality negative instances is important for training a recommendation model well.
We argue that a high-quality negative should be both informative and unbiased.
arXiv Detail & Related papers (2022-04-02T09:50:19Z)
- Rethinking InfoNCE: How Many Negative Samples Do You Need? [54.146208195806636]
We study how many negative samples are optimal for InfoNCE in different scenarios via a semi-quantitative theoretical framework.
We estimate the optimal negative sampling ratio using the $K$ value that maximizes the training effectiveness function.
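For reference, InfoNCE with K sampled negatives, the quantity whose optimal size the paper analyses:

```python
import torch
import torch.nn.functional as F

def info_nce(anchor, positive, negatives, t=0.1):
    """InfoNCE with K negatives:
    -log( exp(s+/t) / (exp(s+/t) + sum_k exp(s_k/t)) )."""
    a = F.normalize(anchor, dim=-1)
    pos = torch.exp(F.normalize(positive, dim=-1) @ a / t)
    neg = torch.exp(F.normalize(negatives, dim=-1) @ a / t).sum()
    return -torch.log(pos / (pos + neg))

loss = info_nce(torch.randn(16), torch.randn(16), torch.randn(32, 16))
```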
arXiv Detail & Related papers (2021-05-27T08:38:29Z)
- Contrastive Learning with Hard Negative Samples [80.12117639845678]
We develop a new family of unsupervised sampling methods for selecting hard negative samples.
A limiting case of this sampling results in a representation that tightly clusters each class, and pushes different classes as far apart as possible.
The proposed method improves downstream performance across multiple modalities, requires only a few additional lines of code to implement, and introduces no computational overhead.
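The hardness weighting underlying this family of samplers can be written as an importance weight that grows with anchor-negative similarity; a sketch is below (the published method additionally debiases for false negatives, which is omitted here for brevity).

```python
import torch
import torch.nn.functional as F

def hard_negative_weights(anchor, negatives, beta=1.0, t=0.5):
    """Importance weights concentrating mass on negatives close to the
    anchor: w_i ∝ exp(beta * sim_i / t), with `beta` controlling hardness."""
    a = F.normalize(anchor, dim=-1)
    sims = F.normalize(negatives, dim=-1) @ a
    w = torch.exp(beta * sims / t)
    return w / w.sum()
```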
arXiv Detail & Related papers (2020-10-09T14:18:53Z)
- Simplify and Robustify Negative Sampling for Implicit Collaborative Filtering [42.832851785261894]
In this paper, we first provide a novel understanding of negative instances by empirically observing that only a few instances are potentially important for model learning.
We then tackle the untouched false negative problem by favouring high-variance samples stored in memory.
Empirical results on two synthetic datasets and three real-world datasets demonstrate both the robustness and the superiority of our negative sampling method.
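A compact sketch of variance-favouring selection from a candidate memory: combine the mean score (hardness) with the score's standard deviation across recent training steps, since a stably high score tends to signal a false negative while true hard negatives fluctuate. The additive form and `alpha` are illustrative assumptions.

```python
import numpy as np

def variance_favouring_select(score_history, n, alpha=1.0):
    """Select candidates that are both high-scoring (hard) and
    high-variance across recent steps (less likely false negatives).
    `score_history` has shape (steps, n_candidates)."""
    mean = score_history.mean(axis=0)
    std = score_history.std(axis=0)
    return np.argsort(-(mean + alpha * std))[:n]

hist = np.random.default_rng(0).normal(size=(5, 100))  # 5 steps x 100 items
picked = variance_favouring_select(hist, n=8)
```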
arXiv Detail & Related papers (2020-09-07T19:08:26Z)