Correcting the User Feedback-Loop Bias for Recommendation Systems
- URL: http://arxiv.org/abs/2109.06037v1
- Date: Mon, 13 Sep 2021 15:02:55 GMT
- Title: Correcting the User Feedback-Loop Bias for Recommendation Systems
- Authors: Weishen Pan, Sen Cui, Hongyi Wen, Kun Chen, Changshui Zhang, Fei Wang
- Abstract summary: We propose a systematic and dynamic way to correct user feedback-loop bias in recommendation systems.
Our method includes a deep-learning component to learn each user's dynamic rating history embedding.
We empirically validated the existence of such user feedback-loop bias in real-world recommendation systems.
- Score: 34.44834423714441
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Selection bias is prevalent in the data for training and evaluating
recommendation systems with explicit feedback. For example, users tend to rate
items they like. However, when predicting a specific user's rating for an item, most
recommendation algorithms rely too heavily on that user's rating (feedback)
history. This introduces an implicit bias into the recommendation system,
which is referred to as user feedback-loop bias in this paper. We propose a
systematic and dynamic way to correct such bias and to obtain more diverse and
objective recommendations by utilizing temporal rating information.
Specifically, our method includes a deep-learning component to learn each
user's dynamic rating history embedding for the estimation of the probability
distribution of the items that the user rates sequentially. These estimated
dynamic exposure probabilities are then used as propensity scores to train an
inverse-propensity-scoring (IPS) rating predictor. We empirically validated the
existence of such user feedback-loop bias in real-world recommendation systems
and compared the performance of our method with the baseline models that are
either without de-biasing or with propensity scores estimated by other methods.
The results show the superiority of our approach.
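The abstract describes a two-stage pipeline: a sequence model estimates, from each user's dynamic rating-history embedding, the probability distribution over which item the user rates next, and these estimated exposure probabilities are then used as propensity scores to train an IPS rating predictor. The snippet below is a minimal, hypothetical sketch of that pipeline in PyTorch, not the authors' implementation; the GRU encoder, the softmax next-item head, the clipping threshold, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DynamicExposureModel(nn.Module):
    """Toy sequential model of a user's rating history (illustrative only).

    Embeds the items a user has already rated, runs a GRU over the sequence,
    and outputs a probability distribution over which item is rated next.
    The probability assigned to the item actually rated next serves as a
    dynamic propensity score.
    """

    def __init__(self, n_items, dim=32):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_items)

    def forward(self, history):                  # history: (batch, seq_len) item ids
        h, _ = self.gru(self.item_emb(history))  # (batch, seq_len, dim)
        return torch.softmax(self.out(h[:, -1]), dim=-1)  # next-item distribution


def ips_rating_loss(pred_rating, true_rating, propensity, clip=0.05):
    """Inverse-propensity-scored squared error; clipping bounds the weights."""
    weights = 1.0 / propensity.clamp(min=clip)
    return (weights * (pred_rating - true_rating) ** 2).mean()


# Toy usage: propensities come from the exposure model and reweight the
# rating-prediction loss for the observed (user, item, rating) triples.
n_items = 100
exposure_model = DynamicExposureModel(n_items)
history = torch.randint(0, n_items, (4, 6))    # 4 users, 6 past ratings each
next_item = torch.randint(0, n_items, (4,))    # item each user rates next
propensity = exposure_model(history).gather(1, next_item[:, None]).squeeze(1)

pred_rating = torch.rand(4) * 5                # stand-in for any rating predictor
true_rating = torch.randint(1, 6, (4,)).float()
loss = ips_rating_loss(pred_rating, true_rating, propensity.detach())
```

In practice both models would be trained on the full sequential rating log; clipping (or self-normalized IPS) is the usual way to keep inverse-propensity weights from inflating the variance of the estimator.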
Related papers
- Correcting for Popularity Bias in Recommender Systems via Item Loss Equalization [1.7771454131646311]
A small set of popular items dominates the recommendation results due to their high interaction rates.
This phenomenon disproportionately benefits users with mainstream tastes while neglecting those with niche interests.
We propose an in-processing approach to address this issue by intervening in the training process of recommendation models.
arXiv Detail & Related papers (2024-10-07T08:34:18Z)
- Treatment Effect Estimation for User Interest Exploration on Recommender Systems [10.05609996672672]
We propose an Uplift model-based Recommender framework, which regards top-N recommendation as a treatment optimization problem.
UpliftRec estimates the treatment effects, i.e., the click-through rate (CTR) under different category exposure ratios, by using observational user feedback.
UpliftRec calculates group-level treatment effects to discover users' hidden interests with high CTR rewards.
arXiv Detail & Related papers (2024-05-14T13:22:33Z)
- Going Beyond Popularity and Positivity Bias: Correcting for Multifactorial Bias in Recommender Systems [74.47680026838128]
Two typical forms of bias in user interaction data with recommender systems (RSs) are popularity bias and positivity bias.
We consider multifactorial selection bias affected by both item and rating value factors.
We propose smoothing and alternating gradient descent techniques to reduce variance and improve the robustness of its optimization.
arXiv Detail & Related papers (2024-04-29T12:18:21Z)
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversation has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, provided through the follow-up utterance of the turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z)
- Unbiased Learning to Rank with Biased Continuous Feedback [5.561943356123711]
Unbiased learning-to-rank (LTR) algorithms have been verified to model relative relevance accurately from noisy feedback.
To provide personalized, high-quality recommendation results, recommender systems need to model both categorical and continuous biased feedback.
We introduce the pairwise trust bias to separate the position bias, trust bias, and user relevance explicitly.
Experiment results on public benchmark datasets and internal live traffic of a large-scale recommender system at Tencent News show superior results for continuous labels.
arXiv Detail & Related papers (2023-03-08T02:14:08Z)
- Breaking Feedback Loops in Recommender Systems with Causal Inference [99.22185950608838]
Recent work has shown that feedback loops may compromise recommendation quality and homogenize user behavior.
We propose the Causal Adjustment for Feedback Loops (CAFL), an algorithm that provably breaks feedback loops using causal inference.
We show that CAFL improves recommendation quality when compared to prior correction methods.
arXiv Detail & Related papers (2022-07-04T17:58:39Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- Unbiased Pairwise Learning to Rank in Recommender Systems [4.058828240864671]
Unbiased learning to rank algorithms are appealing candidates and have already been applied in many applications with single categorical labels.
We propose a novel unbiased LTR algorithm to tackle these challenges, which innovatively models position bias in a pairwise fashion.
Experiment results on public benchmark datasets and internal live traffic show the superior results of the proposed method for both categorical and continuous labels.
arXiv Detail & Related papers (2021-11-25T06:04:59Z)
- PURS: Personalized Unexpected Recommender System for Improving User Satisfaction [76.98616102965023]
We describe a novel Personalized Unexpected Recommender System (PURS) model that incorporates unexpectedness into the recommendation process.
Extensive offline experiments on three real-world datasets illustrate that the proposed PURS model significantly outperforms the state-of-the-art baseline approaches.
arXiv Detail & Related papers (2021-06-05T01:33:21Z)
- Measuring Recommender System Effects with Simulated Users [19.09065424910035]
Popularity bias and filter bubbles are two of the most well-studied recommender system biases.
We offer a simulation framework for measuring the impact of a recommender system under different types of user behavior.
arXiv Detail & Related papers (2021-01-12T14:51:11Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations due to insufficient training data for them.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.