Existence conditions for hidden feedback loops in online recommender systems
- URL: http://arxiv.org/abs/2109.05278v1
- Date: Sat, 11 Sep 2021 13:30:08 GMT
- Title: Existence conditions for hidden feedback loops in online recommender systems
- Authors: Anton S. Khritankov and Anton A. Pilkevich
- Abstract summary: We study how uncertainty and noise in user interests influence the existence of feedback loops.
A non-zero probability of resetting user interests is sufficient to limit the feedback loop, and we estimate the size of the effect.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We explore a hidden feedback loop effect in online recommender systems.
Feedback loops cause online multi-armed bandit (MAB) recommendations to
degrade to a small subset of items, with a loss of coverage and novelty. We
study how uncertainty and noise in user interests influence the existence of
feedback loops. First, we show that unbiased additive random noise in user
interests does not prevent a feedback loop. Second, we demonstrate that a
non-zero probability of resetting user interests is sufficient to limit the
feedback loop, and we estimate the size of the effect. Our experiments confirm
the theoretical findings in a simulated environment for four bandit algorithms.
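The two claims can be illustrated with a toy simulation (a minimal sketch: the interest dynamics, the epsilon-greedy policy, and all parameter values are illustrative assumptions, not the paper's exact model):

```python
import math
import random
from collections import Counter

def top_item_share(n_items=10, n_rounds=3000, reset_prob=0.0,
                   noise=0.1, seed=0):
    """Share of the last 1000 recommendations taken by the single most
    recommended item (a value near 1.0 means the loop has collapsed
    onto one item)."""
    rng = random.Random(seed)
    interests = [rng.uniform(-1.0, 1.0) for _ in range(n_items)]
    clicks = [0] * n_items
    shows = [0] * n_items
    history = []
    for _ in range(n_rounds):
        # epsilon-greedy over empirical click-through rates
        if rng.random() < 0.1:
            arm = rng.randrange(n_items)
        else:
            arm = max(range(n_items), key=lambda i: clicks[i] / (shows[i] + 1))
        shows[arm] += 1
        # click probability from interest plus unbiased additive noise
        p_click = 1.0 / (1.0 + math.exp(-(interests[arm] + rng.gauss(0, noise))))
        if rng.random() < p_click:
            clicks[arm] += 1
            interests[arm] += 0.05  # positive feedback: clicks reinforce interest
        history.append(arm)
        # with some probability, reset the user's interests entirely
        if reset_prob > 0 and rng.random() < reset_prob:
            interests = [rng.uniform(-1.0, 1.0) for _ in range(n_items)]
    counts = Counter(history[-1000:])
    return max(counts.values()) / 1000.0

share_noise_only = top_item_share(reset_prob=0.0)   # noise alone: loop persists
share_with_reset = top_item_share(reset_prob=0.01)  # resets limit the loop
print(share_noise_only, share_with_reset)
```

Under this toy model, additive noise alone lets the top item's share keep growing, while even a small reset probability tends to keep recommendations spread over more items, mirroring the paper's finding.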
Related papers
- Algorithmic Drift: A Simulation Framework to Study the Effects of Recommender Systems on User Preferences [7.552217586057245]
We propose a simulation framework that mimics user-recommender system interactions in a long-term scenario.
We introduce two novel metrics for quantifying the algorithm's impact on user preferences, specifically in terms of drift over time.
arXiv Detail & Related papers (2024-09-24T21:54:22Z)
- Contextual Bandit with Herding Effects: Algorithms and Recommendation Applications [17.865143559133994]
"Herding effects" bias user feedback toward historical ratings, breaking down the assumption of unbiased feedback inherent in contextual bandits.
This paper develops a novel variant of the contextual bandit that is tailored to address the feedback bias caused by the herding effects.
We show that TS-Conf effectively mitigates the negative impact of herding effects, resulting in faster learning and improved recommendation accuracy.
arXiv Detail & Related papers (2024-08-26T17:20:34Z)
- Neural Dueling Bandits [58.90189511247936]
We use a neural network to estimate the reward function using preference feedback for the previously selected arms.
We then extend our theoretical results to contextual bandit problems with binary feedback, which is in itself a non-trivial contribution.
arXiv Detail & Related papers (2024-07-24T09:23:22Z)
- Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation [67.88747330066049]
Fine-grained feedback captures nuanced distinctions in image quality and prompt-alignment.
We show that its superiority over coarse-grained feedback is not automatic.
We identify key challenges in eliciting and utilizing fine-grained feedback.
arXiv Detail & Related papers (2024-06-24T17:19:34Z)
- Source Echo Chamber: Exploring the Escalation of Source Bias in User, Data, and Recommender System Feedback Loop [65.23044868332693]
We investigate the impact of source bias on the realm of recommender systems.
We show the prevalence of source bias and reveal a potential digital echo chamber with source bias amplification.
We introduce a black-box debiasing method that maintains model impartiality towards both HGC and AIGC.
arXiv Detail & Related papers (2024-05-28T09:34:50Z)
- DPR: An Algorithm Mitigate Bias Accumulation in Recommendation feedback loops [41.21024436158042]
We study the negative impact of feedback loops and unknown exposure mechanisms on recommendation quality and user experience.
We propose Dynamic Personalized Ranking (DPR), an unbiased algorithm that uses dynamic re-weighting to mitigate the cross-effects.
We show theoretically that our approach mitigates the negative effects of feedback loops and unknown exposure mechanisms.
arXiv Detail & Related papers (2023-11-10T04:36:00Z)
- Breaking Feedback Loops in Recommender Systems with Causal Inference [99.22185950608838]
Recent work has shown that feedback loops may compromise recommendation quality and homogenize user behavior.
We propose the Causal Adjustment for Feedback Loops (CAFL), an algorithm that provably breaks feedback loops using causal inference.
We show that CAFL improves recommendation quality when compared to prior correction methods.
arXiv Detail & Related papers (2022-07-04T17:58:39Z)
- Simulating Bandit Learning from User Feedback for Extractive Question Answering [51.97943858898579]
We study learning from user feedback for extractive question answering by simulating feedback using supervised data.
We show that systems initially trained on a small number of examples can dramatically improve given feedback from users on model-predicted answers.
arXiv Detail & Related papers (2022-03-18T17:47:58Z)
- Learning Robust Recommender from Noisy Implicit Feedback [140.7090392887355]
We propose a new training strategy named Adaptive Denoising Training (ADT).
ADT adaptively prunes noisy interactions via two paradigms (i.e., Truncated Loss and Reweighted Loss).
We consider extra feedback (e.g., rating) as auxiliary signal and propose three strategies to incorporate extra feedback into ADT.
arXiv Detail & Related papers (2021-12-02T12:12:02Z)
- Learning Multiclass Classifier Under Noisy Bandit Feedback [6.624726878647541]
We propose a novel approach to deal with noisy bandit feedback based on the unbiased estimator technique.
We show our approach's effectiveness using extensive experiments on several benchmark datasets.
arXiv Detail & Related papers (2020-06-05T16:31:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.