Breaking Feedback Loops in Recommender Systems with Causal Inference
- URL: http://arxiv.org/abs/2207.01616v1
- Date: Mon, 4 Jul 2022 17:58:39 GMT
- Title: Breaking Feedback Loops in Recommender Systems with Causal Inference
- Authors: Karl Krauth, Yixin Wang, Michael I. Jordan
- Abstract summary: Recent work has shown that feedback loops may compromise recommendation quality and homogenize user behavior.
We propose the Causal Adjustment for Feedback Loops (CAFL), an algorithm that provably breaks feedback loops using causal inference.
We show that CAFL improves recommendation quality when compared to prior correction methods.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recommender systems play a key role in shaping modern web ecosystems. These
systems alternate between (1) making recommendations, (2) collecting user
responses to these recommendations, and (3) retraining the recommendation
algorithm based on this feedback. During this process the recommender system
influences the user behavioral data that is subsequently used to update it,
thus creating a feedback loop. Recent work has shown that feedback loops may
compromise recommendation quality and homogenize user behavior, raising ethical
and performance concerns when deploying recommender systems. To address these
issues, we propose the Causal Adjustment for Feedback Loops (CAFL), an
algorithm that provably breaks feedback loops using causal inference and can be
applied to any recommendation algorithm that optimizes a training loss. Our
main observation is that a recommender system does not suffer from feedback
loops if it reasons about causal quantities, namely the intervention
distributions of recommendations on user ratings. Moreover, we can calculate
this intervention distribution from observational data by adjusting for the
recommender system's predictions of user preferences. Using simulated
environments, we demonstrate that CAFL improves recommendation quality when
compared to prior correction methods.
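The abstract's key computation, estimating the intervention distribution of recommendations on ratings by adjusting for the recommender's own preference predictions, can be sketched as a reweighted training loss. This is a minimal inverse-propensity-style illustration, not the authors' exact CAFL estimator; the softmax exposure model, clipping, and function names are assumptions.

```python
import numpy as np

def propensities_from_predictions(pred_scores, temperature=1.0):
    """Softmax over predicted preferences, used here as a stand-in for the
    probability that the previous recommender exposed each item to each user."""
    z = pred_scores / temperature
    z = z - z.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def adjusted_loss(ratings, preds, mask, propensity, clip=10.0):
    """Squared-error loss reweighted by inverse exposure propensity, so the
    objective approximates the error under do(recommendation) rather than
    under the logging policy's exposure distribution. `mask` marks which
    (user, item) entries were actually observed."""
    w = np.minimum(1.0 / np.maximum(propensity, 1e-8), clip)  # clipped IPS weights
    err = (ratings - preds) ** 2
    return float((mask * w * err).sum() / mask.sum())
```

Any recommendation algorithm that optimizes a training loss could, in this spirit, swap its plain empirical loss for the adjusted one.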
Related papers
- Algorithmic Drift: A Simulation Framework to Study the Effects of Recommender Systems on User Preferences
We propose a simulation framework that mimics user-recommender system interactions in a long-term scenario.
We introduce two novel metrics for quantifying the algorithm's impact on user preferences, specifically in terms of drift over time.
arXiv Detail & Related papers (2024-09-24T21:54:22Z)
- Source Echo Chamber: Exploring the Escalation of Source Bias in User, Data, and Recommender System Feedback Loop
We investigate the impact of source bias on the realm of recommender systems.
We show the prevalence of source bias and reveal a potential digital echo chamber with source bias amplification.
We introduce a black-box debiasing method that maintains model impartiality towards both HGC and AIGC.
arXiv Detail & Related papers (2024-05-28T09:34:50Z)
- AdaRec: Adaptive Sequential Recommendation for Reinforcing Long-term User Engagement
We introduce a novel paradigm called Adaptive Sequential Recommendation (AdaRec) to address this issue.
AdaRec proposes a new distance-based representation loss to extract latent information from users' interaction trajectories.
We conduct extensive empirical analyses in both simulator-based and live sequential recommendation tasks.
arXiv Detail & Related papers (2023-10-06T02:45:21Z)
- Learning from Negative User Feedback and Measuring Responsiveness for Sequential Recommenders
We introduce explicit and implicit negative user feedback into the training objective of sequential recommenders.
We demonstrate the effectiveness of this approach using live experiments on a large-scale industrial recommender system.
arXiv Detail & Related papers (2023-08-23T17:16:07Z)
- User-Controllable Recommendation via Counterfactual Retrospective and Prospective Explanations
We present a user-controllable recommender system that seamlessly integrates explainability and controllability.
By providing both retrospective and prospective explanations through counterfactual reasoning, users can customize their control over the system.
arXiv Detail & Related papers (2023-08-02T01:13:36Z)
- Learning Robust Recommender from Noisy Implicit Feedback
We propose a new training strategy named Adaptive Denoising Training (ADT).
ADT adaptively prunes noisy interactions via two paradigms (i.e., Truncated Loss and Reweighted Loss).
We consider extra feedback (e.g., rating) as auxiliary signal and propose three strategies to incorporate extra feedback into ADT.
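The two ADT paradigms above can be sketched as follows. This is a minimal illustration over plain per-interaction losses; the exponential weighting and the fixed drop rate are illustrative choices (ADT's drop rate typically grows during training), not necessarily the paper's exact formulations.

```python
import numpy as np

def truncated_loss(losses, drop_rate):
    """Truncated Loss: treat the largest-loss fraction of interactions as
    noisy and exclude them from the update."""
    k = int(len(losses) * (1.0 - drop_rate))
    keep = np.argsort(losses)[:k]  # indices of the k smallest losses
    return float(losses[keep].mean())

def reweighted_loss(losses, beta=1.0):
    """Reweighted Loss: softly down-weight high-loss interactions instead of
    hard-dropping them (exponential down-weighting is an assumed form)."""
    w = np.exp(-beta * losses)
    return float((w * losses).sum() / w.sum())
```

Both variants reduce the influence of suspiciously hard examples, which is the denoising effect the summary describes.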
arXiv Detail & Related papers (2021-12-02T12:12:02Z)
- Correcting the User Feedback-Loop Bias for Recommendation Systems
We propose a systematic and dynamic way to correct user feedback-loop bias in recommendation systems.
Our method includes a deep-learning component to learn each user's dynamic rating history embedding.
We empirically validate the existence of such user feedback-loop bias in real-world recommendation systems.
arXiv Detail & Related papers (2021-09-13T15:02:55Z)
- Existence conditions for hidden feedback loops in online recommender systems
We study how uncertainty and noise in user interests influence the existence of feedback loops.
A non-zero probability of resetting user interests is sufficient to limit the feedback loop and estimate the size of the effect.
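The reset condition can be illustrated with a toy simulation: a system that always recommends the user's current top interest, where positive feedback reinforces that interest, collapses onto one item, while a non-zero reset probability keeps interests bounded away from full homogenization. The drift rule and reinforcement rate here are hypothetical stand-ins for the paper's model.

```python
import numpy as np

def simulate_drift(n_steps=2000, n_items=10, reset_prob=0.0, lr=0.05, seed=0):
    """Toy feedback loop: recommend the top interest, reinforce it, and with
    probability `reset_prob` redraw the user's interests from scratch.
    Returns the final maximum interest weight (1.0 ~ fully collapsed)."""
    rng = np.random.default_rng(seed)
    interests = rng.dirichlet(np.ones(n_items))
    for _ in range(n_steps):
        if rng.random() < reset_prob:
            interests = rng.dirichlet(np.ones(n_items))  # interest reset
        rec = int(interests.argmax())   # system recommends the top interest
        interests[rec] += lr            # feedback reinforces that interest
        interests /= interests.sum()
    return float(interests.max())
```

With `reset_prob=0.0` the loop drives one interest weight toward 1; even a small reset probability limits the effect, matching the existence condition the summary states.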
arXiv Detail & Related papers (2021-09-11T13:30:08Z)
- Knowledge Transfer via Pre-training for Recommendation: A Review and Prospect
We demonstrate the benefits of pre-training for recommender systems through experiments.
We discuss several promising directions for future research on recommender systems with pre-training.
arXiv Detail & Related papers (2020-09-19T13:06:27Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.