Towards Reliable Negative Sampling for Recommendation with Implicit Feedback via In-Community Popularity
- URL: http://arxiv.org/abs/2602.18759v1
- Date: Sat, 21 Feb 2026 08:53:10 GMT
- Title: Towards Reliable Negative Sampling for Recommendation with Implicit Feedback via In-Community Popularity
- Authors: Chen Chen, Haobo Lin, Yuanbo Xu
- Abstract summary: We propose ICPNS (In-Community Popularity Negative Sampling) to identify reliable and informative negative samples. Our approach is grounded in the insight that item exposure is driven by latent user communities. ICPNS yields consistent improvements on graph-based recommenders and competitive performance on MF-based models.
- Score: 8.257297407777555
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Learning from implicit feedback is a fundamental problem in modern recommender systems, where only positive interactions are observed and explicit negative signals are unavailable. In such settings, negative sampling plays a critical role in model training by constructing negative items that enable effective preference learning and ranking optimization. However, designing reliable negative sampling strategies remains challenging, as they must simultaneously ensure realness, hardness, and interpretability. To this end, we propose ICPNS (In-Community Popularity Negative Sampling), a novel framework that leverages user community structure to identify reliable and informative negative samples. Our approach is grounded in the insight that item exposure is driven by latent user communities. By identifying these communities and utilizing in-community popularity, ICPNS effectively approximates the probability of item exposure. Consequently, items that are popular within a user's community but remain unclicked are identified as more reliable true negatives. Extensive experiments on four benchmark datasets demonstrate that ICPNS yields consistent improvements on graph-based recommenders and competitive performance on MF-based models, outperforming representative negative sampling strategies under a unified evaluation protocol.
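The core sampling step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the community assignments, in-community click counts, and clicked-item sets are assumed to be precomputed (e.g. by clustering the user-item interaction graph), and all interface names are hypothetical.

```python
import numpy as np

def icpns_sample(user, user_community, comm_item_pop, clicked, n_neg, rng):
    """Draw negatives that are popular in the user's community but unclicked.

    Hypothetical interfaces: `user_community` maps user -> community id,
    `comm_item_pop[c]` holds per-item in-community click counts, and
    `clicked[user]` is the user's set of positive items.
    """
    pop = comm_item_pop[user_community[user]].astype(float).copy()
    pop[list(clicked[user])] = 0.0       # clicked items are positives, never negatives
    probs = pop / pop.sum()              # exposure proxy: in-community popularity
    return rng.choice(len(pop), size=n_neg, replace=False, p=probs)

# Toy usage: one community, four items; user 0 clicked item 0.
rng = np.random.default_rng(0)
negs = icpns_sample(0, {0: 0}, {0: np.array([5, 0, 3, 1])}, {0: {0}}, 2, rng)
```

In the toy call, item 0 is excluded as a positive and item 1 has zero in-community popularity (never exposed to this community), so the sampled negatives are items 2 and 3, weighted by how often the community clicked them.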
Related papers
- A Topology-Aware Positive Sample Set Construction and Feature Optimization Method in Implicit Collaborative Filtering [40.89512526196666]
Negative sampling strategies are widely used in implicit collaborative filtering to address issues like data sparsity and class imbalance. These strategies often introduce false negatives, hindering the model's ability to accurately learn users' latent preferences. We propose a Topology-aware Positive Sample Set Construction and Feature Optimization method (TPSC-FO).
arXiv Detail & Related papers (2026-02-20T15:35:48Z) - A Simple yet Effective Negative Sampling Plugin for Constructing Positive Sample Pairs in Implicit Collaborative Filtering [40.89512526196666]
PSP-NS is a negative sampling plugin for collaborative filtering. It builds a user-item bipartite graph with edge weights indicating interaction confidence. It generates positive sample pairs via replication-based reweighting to strengthen positive signals. PSP-NS boosts Recall@30 and Precision@30 by 32.11% and 22.90% on Yelp over the strongest baselines.
arXiv Detail & Related papers (2026-02-20T13:34:43Z) - Improving LLM-based Recommendation with Self-Hard Negatives from Intermediate Layers [80.55429742713623]
ILRec is a novel preference fine-tuning framework for LLM-based recommender systems. We introduce a lightweight collaborative filtering model to assign token-level rewards for negative signals. Experiments on three datasets demonstrate ILRec's effectiveness in enhancing the performance of LLM-based recommender systems.
arXiv Detail & Related papers (2026-02-19T14:37:43Z) - CoNRec: Context-Discerning Negative Recommendation with LLMs [5.832474387562381]
Research into users' negative preferences has gained increasing importance in modern recommendation systems. Most existing approaches primarily use negative feedback as an auxiliary signal to enhance positive recommendations. We propose the first large language model framework for negative feedback modeling with specially designed context-discerning modules.
arXiv Detail & Related papers (2026-01-22T07:46:18Z) - Rethinking Sample Polarity in Reinforcement Learning with Verifiable Rewards [57.11130904745293]
We investigate how sample polarities affect RLVR training dynamics and behaviors. We find that positive samples sharpen existing correct reasoning patterns, while negative samples encourage exploration of new reasoning paths. We propose an Adaptive and Asymmetric token-level Advantage shaping method for Policy Optimization.
arXiv Detail & Related papers (2025-12-25T11:15:46Z) - Mitigating Pooling Bias in E-commerce Search via False Negative Estimation [25.40402675846542]
Bias-mitigating Hard Negative Sampling is a novel negative sampling strategy tailored to identify and adjust for false negatives.
Our experiments in the search setting confirm BHNS as effective for practical e-commerce use.
arXiv Detail & Related papers (2023-11-11T00:22:57Z) - Learning from Negative User Feedback and Measuring Responsiveness for Sequential Recommenders [13.762960304406016]
We introduce explicit and implicit negative user feedback into the training objective of sequential recommenders.
We demonstrate the effectiveness of this approach using live experiments on a large-scale industrial recommender system.
arXiv Detail & Related papers (2023-08-23T17:16:07Z) - Generating Negative Samples for Sequential Recommendation [83.60655196391855]
We propose to Generate Negative Samples (items) for Sequential Recommendation (SR).
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
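The per-step sampling idea in this entry — drawing a negative item according to the SR model's current preference scores — can be sketched as below. This is a generic illustration, not the paper's exact sampler; `step_scores` stands in for the model's per-item logits at one time step, and the temperature parameter is an assumption.

```python
import numpy as np

def sample_hard_negative(step_scores, positive_item, rng, temperature=1.0):
    """Sample one negative item in proportion to the model's current scores,
    excluding the observed positive item at this time step (a sketch)."""
    logits = np.asarray(step_scores, dtype=float) / temperature
    logits[positive_item] = -np.inf           # never propose the ground-truth item
    p = np.exp(logits - logits.max())         # numerically stable softmax
    p /= p.sum()
    return int(rng.choice(len(p), p=p))

# Usage: the positive at this step is item 1; item 2 is the hardest candidate.
rng = np.random.default_rng(0)
neg = sample_hard_negative([0.1, 4.0, 2.0, 0.1], positive_item=1, rng=rng)
```

Because the positive item's probability is zeroed out, the sampler concentrates on high-scoring non-interacted items, which is what makes the resulting negatives "hard" for the current model.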
arXiv Detail & Related papers (2022-08-07T05:44:13Z) - FedCL: Federated Contrastive Learning for Privacy-Preserving Recommendation [98.5705258907774]
FedCL can exploit high-quality negative samples for effective model training with privacy well protected.
We first infer user embeddings from local user data through the local model on each client, and then perturb them with local differential privacy (LDP). Since each individual user embedding contains heavy noise due to LDP, we propose clustering user embeddings on the server to mitigate the influence of the noise.
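The client-side perturbation step can be illustrated with a generic LDP mechanism. This is a simplified sketch under stated assumptions, not FedCL's exact scheme: the embedding is L1-clipped to bound sensitivity and perturbed with Laplace noise, and the noise scale formula is a standard simplification.

```python
import numpy as np

def ldp_perturb(embedding, epsilon, clip_norm, rng):
    """Clip a local user embedding and add Laplace noise before upload.
    Generic LDP sketch (assumption), not FedCL's exact mechanism."""
    v = np.asarray(embedding, dtype=float)
    norm = np.linalg.norm(v, ord=1)
    if norm > clip_norm:
        v = v * (clip_norm / norm)            # bound L1 sensitivity
    scale = 2.0 * clip_norm / epsilon         # Laplace mechanism noise scale
    return v + rng.laplace(0.0, scale, size=v.shape)

rng = np.random.default_rng(0)
noisy = ldp_perturb([0.5, -0.5, 1.0], epsilon=1.0, clip_norm=1.0, rng=rng)
```

Smaller `epsilon` means stronger privacy and heavier noise, which is exactly why the paper clusters the noisy embeddings on the server before using them as negatives.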
arXiv Detail & Related papers (2022-04-21T02:37:10Z) - Understanding Negative Sampling in Graph Representation Learning [87.35038268508414]
We show that negative sampling is as important as positive sampling in determining the optimization objective and the resulting variance.
We propose MCNS, which approximates the positive distribution with self-contrast approximation and accelerates negative sampling via Metropolis-Hastings.
We evaluate our method on 5 datasets that cover extensive downstream graph learning tasks, including link prediction, node classification and personalized recommendation.
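The Metropolis-Hastings acceleration mentioned in this entry can be sketched as a random walk over items whose stationary distribution is proportional to an (illustrative) score-based target density. This is a generic MH sampler in the spirit of MCNS, not the paper's implementation; the uniform proposal and the target density are assumptions.

```python
import numpy as np

def mh_negative_chain(score, n_items, steps, rng, start=0):
    """Metropolis-Hastings walk over items; stationary distribution is
    proportional to exp(score(i)). Sketch with a uniform proposal."""
    current = start
    samples = []
    for _ in range(steps):
        proposal = int(rng.integers(n_items))                      # uniform proposal
        accept = min(1.0, float(np.exp(score(proposal) - score(current))))
        if rng.random() < accept:                                  # MH accept/reject
            current = proposal
        samples.append(current)
    return samples

# Toy target: item 3 dominates; the chain should spend most steps there.
rng = np.random.default_rng(0)
chain = mh_negative_chain(lambda i: 10.0 if i == 3 else 0.0,
                          n_items=5, steps=500, rng=rng)
```

Because each step only compares the scores of two items, the chain avoids normalizing over the whole catalog, which is the source of the speedup.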
arXiv Detail & Related papers (2020-05-20T06:25:21Z) - Reinforced Negative Sampling over Knowledge Graph for Recommendation [106.07209348727564]
We develop a new negative sampling model, Knowledge Graph Policy Network (kgPolicy), which works as a reinforcement learning agent to explore high-quality negatives.
kgPolicy navigates from the target positive interaction, adaptively receives knowledge-aware negative signals, and ultimately yields a potential negative item to train the recommender.
arXiv Detail & Related papers (2020-03-12T12:44:30Z) - Binary Classification from Positive Data with Skewed Confidence [85.18941440826309]
Positive-confidence (Pconf) classification is a promising weakly-supervised learning method.
In practice, the confidence may be skewed by bias arising in an annotation process.
We introduce a parameterized model of the skewed confidence and propose a method for selecting the hyperparameter.
arXiv Detail & Related papers (2020-01-29T00:04:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.