Benefiting from Negative yet Informative Feedback by Contrasting Opposing Sequential Patterns
- URL: http://arxiv.org/abs/2508.14786v1
- Date: Wed, 20 Aug 2025 15:32:16 GMT
- Title: Benefiting from Negative yet Informative Feedback by Contrasting Opposing Sequential Patterns
- Authors: Veronika Ivanova, Evgeny Frolov, Alexey Vasilev
- Abstract summary: We consider the task of learning from both positive and negative feedback in a sequential recommendation scenario. In this work, we propose to train two transformer encoders on separate positive and negative interaction sequences. We demonstrate the effectiveness of this approach in terms of increasing true-positive metrics compared to state-of-the-art sequential recommendation methods.
- Score: 1.6044444452278062
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the task of learning from both positive and negative feedback in a sequential recommendation scenario, as both types of feedback are often present in user interactions. Meanwhile, conventional sequential learning models usually focus on modeling and predicting positive interactions, ignoring that reducing the number of negatively rated items in recommendations improves user satisfaction with the service. Moreover, negative feedback can potentially provide a useful signal for more accurate identification of true user interests. In this work, we propose to train two transformer encoders on separate positive and negative interaction sequences. We incorporate both types of feedback into the training objective of the sequential recommender via a composite loss function that includes positive and negative cross-entropy terms as well as a carefully crafted contrastive term that helps better model opposing patterns. We demonstrate the effectiveness of this approach in terms of increasing true-positive metrics compared to state-of-the-art sequential recommendation methods, while reducing the number of wrongly promoted negative items.
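The abstract describes a composite objective (positive cross-entropy + negative cross-entropy + a contrastive term over the two encoders' outputs) but gives no formula. The sketch below is only an illustration of that structure under stated assumptions: the hypothetical `composite_loss` helper, the weight `lam`, the temperature `tau`, and the softplus form of the contrastive penalty are guesses for exposition, not details from the paper.

```python
import numpy as np

def softmax_ce(logits, target):
    # Cross-entropy of a single next-item prediction over the item catalog.
    z = logits - logits.max()                     # numerical stability
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

def composite_loss(pos_logits, pos_target, neg_logits, neg_target,
                   h_pos, h_neg, lam=0.1, tau=1.0):
    """Illustrative composite objective: positive CE + negative CE
    plus a contrastive term that penalizes similarity between the
    positive-sequence encoding h_pos and the negative-sequence
    encoding h_neg (pushing the two representations apart).
    All hyperparameters here are assumptions, not from the paper."""
    ce_pos = softmax_ce(pos_logits, pos_target)
    ce_neg = softmax_ce(neg_logits, neg_target)
    # Cosine similarity between the two encoder outputs.
    sim = h_pos @ h_neg / (np.linalg.norm(h_pos) * np.linalg.norm(h_neg))
    contrastive = np.log1p(np.exp(sim / tau))     # softplus of similarity
    return ce_pos + ce_neg + lam * contrastive
```

Under this sketch, a pair of dissimilar (e.g. orthogonal) encoder outputs yields a lower total loss than identical ones, which is the intended "opposing patterns" pressure.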
Related papers
- CoNRec: Context-Discerning Negative Recommendation with LLMs [5.832474387562381]
Research into users' negative preferences has gained increasing importance in modern recommendation systems. Most existing approaches primarily use negative feedback as an auxiliary signal to enhance positive recommendations. We propose the first large language model framework for negative feedback modeling with specially designed context-discerning modules.
arXiv Detail & Related papers (2026-01-22T07:46:18Z)
- Correct and Weight: A Simple Yet Effective Loss for Implicit Feedback Recommendation [36.820719132176315]
This paper introduces a novel and principled loss function, named Corrected and Weighted (CW) loss. CW loss systematically corrects for the impact of false negatives within the training objective. Experiments conducted on four large-scale, sparse benchmark datasets demonstrate the superiority of the proposed loss.
arXiv Detail & Related papers (2026-01-07T15:20:27Z)
- User Hesitation and Negative Transfer in Multi-Behavior Recommendation [55.78729938627577]
We propose a recommendation framework focused on weak signal learning, termed HNT. By learning the characteristics of auxiliary behaviors that lead to target behaviors, HNT identifies similar auxiliary behaviors that did not trigger the target behavior. Experiments on three real-world datasets demonstrate that HNT improves HR@10 and NDCG@10 by 12.57% and 14.37%, respectively.
arXiv Detail & Related papers (2025-11-08T02:45:32Z)
- ReNeg: Learning Negative Embedding with Reward Guidance [69.81219455975477]
In text-to-image (T2I) generation applications, negative embeddings have proven to be a simple yet effective approach for enhancing generation quality. We introduce ReNeg, an end-to-end method designed to learn improved Negative embeddings guided by a Reward model.
arXiv Detail & Related papers (2024-12-27T13:31:55Z)
- Learning Recommender Systems with Soft Target: A Decoupled Perspective [49.83787742587449]
We propose a novel decoupled soft label optimization framework to consider the objectives as two aspects by leveraging soft labels.
We present a soft-label generation algorithm based on label propagation that explores users' latent interests in unobserved feedback via neighbors.
arXiv Detail & Related papers (2024-10-09T04:20:15Z)
- Learning from negative feedback, or positive feedback or both [21.95277469346728]
We introduce a novel approach that decouples learning from positive and negative feedback. A key contribution is demonstrating stable learning from negative feedback alone.
arXiv Detail & Related papers (2024-10-05T14:04:03Z)
- Negative Sampling in Recommendation: A Survey and Future Directions [43.11318243903388]
Recommender systems (RS) aim to capture personalized preferences from massive user behaviors. Negative sampling is proficient at revealing the genuine negative aspect inherent in user behaviors.
arXiv Detail & Related papers (2024-09-11T12:48:52Z)
- Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation [67.88747330066049]
Fine-grained feedback captures nuanced distinctions in image quality and prompt-alignment.
We show that demonstrating its superiority to coarse-grained feedback is not automatic.
We identify key challenges in eliciting and utilizing fine-grained feedback.
arXiv Detail & Related papers (2024-06-24T17:19:34Z)
- Towards Unified Modeling for Positive and Negative Preferences in Sign-Aware Recommendation [13.300975621769396]
We propose a novel Light Signed Graph Convolution Network specifically for Recommendation (LSGRec).
For the negative preferences within high-order heterogeneous interactions, first-order negative preferences are captured by the negative links.
Recommendation results are generated based on positive preferences and optimized with negative ones.
arXiv Detail & Related papers (2024-03-13T05:00:42Z)
- Learning from Negative User Feedback and Measuring Responsiveness for Sequential Recommenders [13.762960304406016]
We introduce explicit and implicit negative user feedback into the training objective of sequential recommenders.
We demonstrate the effectiveness of this approach using live experiments on a large-scale industrial recommender system.
arXiv Detail & Related papers (2023-08-23T17:16:07Z)
- Generating Negative Samples for Sequential Recommendation [83.60655196391855]
We propose to Generate Negative Samples (items) for Sequential Recommendation (SR).
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
arXiv Detail & Related papers (2022-08-07T05:44:13Z)
- Learning Robust Recommender from Noisy Implicit Feedback [140.7090392887355]
We propose a new training strategy named Adaptive Denoising Training (ADT).
ADT adaptively prunes noisy interactions via two paradigms (i.e., Truncated Loss and Reweighted Loss).
We consider extra feedback (e.g., rating) as auxiliary signal and propose three strategies to incorporate extra feedback into ADT.
arXiv Detail & Related papers (2021-12-02T12:12:02Z)
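The Adaptive Denoising Training entry above mentions a Truncated Loss paradigm for pruning likely-noisy interactions. A minimal sketch of that idea, assuming a simple drop-the-largest-losses rule; the `truncated_ce` helper and the `drop_rate` hyperparameter are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def truncated_ce(losses, drop_rate=0.2):
    """Sketch of a truncated-loss paradigm: treat the examples with
    the largest per-example losses as likely noisy interactions and
    drop them from the batch objective before averaging."""
    losses = np.asarray(losses, dtype=float)
    k = int(len(losses) * (1.0 - drop_rate))  # number of examples kept
    kept = np.sort(losses)[:k]                # keep the smallest losses
    return kept.mean()
```

For example, with per-example losses `[0.1, 0.2, 0.3, 10.0]` and `drop_rate=0.25`, the outlier `10.0` is discarded and the batch loss is the mean of the remaining three values.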
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.