Learning from Negative User Feedback and Measuring Responsiveness for
Sequential Recommenders
- URL: http://arxiv.org/abs/2308.12256v1
- Date: Wed, 23 Aug 2023 17:16:07 GMT
- Authors: Yueqi Wang, Yoni Halpern, Shuo Chang, Jingchen Feng, Elaine Ya Le,
Longfei Li, Xujian Liang, Min-Cheng Huang, Shane Li, Alex Beutel, Yaping
Zhang, Shuchao Bi
- Abstract summary: We introduce explicit and implicit negative user feedback into the training objective of sequential recommenders.
We demonstrate the effectiveness of this approach using live experiments on a large-scale industrial recommender system.
- Score: 13.762960304406016
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Sequential recommenders have been widely used in industry due to their
strength in modeling user preferences. While these models excel at learning a
user's positive interests, less attention has been paid to learning from
negative user feedback. Negative user feedback is an important lever of user
control, and comes with an expectation that recommenders should respond quickly
and reduce similar recommendations to the user. However, negative feedback
signals are often ignored in the training objective of sequential retrieval
models, which primarily aim at predicting positive user interactions. In this
work, we incorporate explicit and implicit negative user feedback into the
training objective of sequential recommenders in the retrieval stage using a
"not-to-recommend" loss function that optimizes for the log-likelihood of not
recommending items with negative feedback. We demonstrate the effectiveness of
this approach using live experiments on a large-scale industrial recommender
system. Furthermore, we address a challenge in measuring recommender
responsiveness to negative feedback by developing a counterfactual simulation
framework to compare recommender responses between different user actions,
showing improved responsiveness from the modeling change.
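The "not-to-recommend" objective described above can be sketched as a per-example loss that adds, to the usual softmax log-likelihood of the positively engaged item, the log-likelihood of *not* recommending the negative-feedback item. The snippet below is a minimal NumPy illustration under assumed names (`not_to_recommend_loss`, one positive and one negative item per example), not the paper's actual implementation:

```python
import numpy as np

def not_to_recommend_loss(logits, pos_item, neg_item):
    """Per-example sketch of a "not-to-recommend" objective:
    -log p(pos_item) - log(1 - p(neg_item)), where p is the softmax
    over the item vocabulary produced by the retrieval model."""
    z = logits - logits.max()          # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[pos_item]) - np.log1p(-p[neg_item])
```

Raising the model's score for the disliked item increases the loss, so gradient descent pushes negative-feedback items down the ranking at retrieval time.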
Related papers
- Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation [67.88747330066049]
Fine-grained feedback captures nuanced distinctions in image quality and prompt-alignment.
We show that its superiority over coarse-grained feedback is not automatic.
We identify key challenges in eliciting and utilizing fine-grained feedback.
arXiv Detail & Related papers (2024-06-24T17:19:34Z)
- Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on their interaction history on the platform.
Most sequential recommenders however lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z)
- Recommendation with User Active Disclosing Willingness [20.306413327597603]
We study a novel recommendation paradigm in which users can indicate their "willingness" to disclose different behaviors.
We conduct extensive experiments to demonstrate the effectiveness of our model in balancing recommendation quality against user disclosing willingness.
arXiv Detail & Related papers (2022-10-25T04:43:40Z)
- Generating Negative Samples for Sequential Recommendation [83.60655196391855]
We propose to Generate Negative Samples (items) for Sequential Recommendation (SR).
A negative item is sampled at each time step based on the current SR model's learned user preferences toward items.
Experiments on four public datasets verify the importance of providing high-quality negative samples for SR.
arXiv Detail & Related papers (2022-08-07T05:44:13Z)
- Breaking Feedback Loops in Recommender Systems with Causal Inference [99.22185950608838]
Recent work has shown that feedback loops may compromise recommendation quality and homogenize user behavior.
We propose the Causal Adjustment for Feedback Loops (CAFL), an algorithm that provably breaks feedback loops using causal inference.
We show that CAFL improves recommendation quality when compared to prior correction methods.
arXiv Detail & Related papers (2022-07-04T17:58:39Z)
- Sparsity Regularization For Cold-Start Recommendation [7.848143873095096]
We introduce a novel representation for user-vectors by combining user demographics and user preferences.
We develop a novel sparse adversarial model, SRLGAN, for Cold-Start Recommendation, leveraging sparse user-purchase behavior.
We evaluate the SRLGAN on two popular datasets and demonstrate state-of-the-art results.
arXiv Detail & Related papers (2022-01-26T02:28:08Z)
- Correcting the User Feedback-Loop Bias for Recommendation Systems [34.44834423714441]
We propose a systematic and dynamic way to correct user feedback-loop bias in recommendation systems.
Our method includes a deep-learning component to learn each user's dynamic rating history embedding.
We empirically validate the existence of such user feedback-loop bias in real-world recommendation systems.
arXiv Detail & Related papers (2021-09-13T15:02:55Z)
- PURS: Personalized Unexpected Recommender System for Improving User Satisfaction [76.98616102965023]
We describe a novel Personalized Unexpected Recommender System (PURS) model that incorporates unexpectedness into the recommendation process.
Extensive offline experiments on three real-world datasets illustrate that the proposed PURS model significantly outperforms the state-of-the-art baseline approaches.
arXiv Detail & Related papers (2021-06-05T01:33:21Z)
- Measuring Recommender System Effects with Simulated Users [19.09065424910035]
Popularity bias and filter bubbles are two of the most well-studied recommender system biases.
We offer a simulation framework for measuring the impact of a recommender system under different types of user behavior.
arXiv Detail & Related papers (2021-01-12T14:51:11Z)
- Reward Constrained Interactive Recommendation with Natural Language Feedback [158.8095688415973]
We propose a novel constraint-augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time.
Specifically, we leverage a discriminator to detect recommendations violating user historical preference.
Our proposed framework is general and is further extended to the task of constrained text generation.
arXiv Detail & Related papers (2020-05-04T16:23:34Z)
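The counterfactual responsiveness measurement from the headline abstract can be illustrated with a toy simulation: score a target item under two otherwise identical histories that differ only in whether one item received positive or negative feedback. Everything below (the dot-product `score_items` model and the `responsiveness` helper) is a hypothetical stand-in for the paper's framework, not its implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def score_items(history, item_vecs):
    """Toy stand-in for a trained sequential recommender: each catalog
    item is scored against a user vector built from the history, with
    disliked items contributing negatively. (Hypothetical model.)"""
    signs = np.array([1.0 if liked else -1.0 for _, liked in history])
    hist_vecs = item_vecs[[item for item, _ in history]]
    user_vec = (signs[:, None] * hist_vecs).mean(axis=0)
    return softmax(item_vecs @ user_vec)

def responsiveness(history, target, item_vecs, flipped_item):
    """Counterfactual comparison: how much does p(recommend target)
    drop when flipped_item gets negative instead of positive feedback?"""
    p_liked = score_items(history + [(flipped_item, True)], item_vecs)
    p_disliked = score_items(history + [(flipped_item, False)], item_vecs)
    return p_liked[target] - p_disliked[target]
```

A responsive recommender yields a clearly positive gap for items similar to the disliked one, since only the feedback signal, not the rest of the history, changed between the two simulated runs.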
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.