The Importance of Cognitive Biases in the Recommendation Ecosystem
- URL: http://arxiv.org/abs/2408.12492v2
- Date: Fri, 30 Aug 2024 07:58:46 GMT
- Title: The Importance of Cognitive Biases in the Recommendation Ecosystem
- Authors: Markus Schedl, Oleg Lesota, Stefan Brandl, Mohammad Lotfi, Gustavo Junior Escobedo Ticona, Shahed Masoudian,
- Abstract summary: We argue that cognitive biases also manifest in different parts of the recommendation ecosystem and at different stages of the recommendation process.
We provide empirical evidence that biases such as feature-positive effect, Ikea effect, and cultural homophily can be observed in various components of the recommendation pipeline.
We advocate for a prejudice-free consideration of cognitive biases to improve user and item models as well as recommendation algorithms.
- Score: 8.267786874280848
- Abstract: Cognitive biases have been studied in psychology, sociology, and behavioral economics for decades. Traditionally, they have been considered a negative human trait that leads to inferior decision-making or reinforcement of stereotypes, or that can be exploited to manipulate consumers. We argue that cognitive biases also manifest in different parts of the recommendation ecosystem and at different stages of the recommendation process. More importantly, we contest this traditionally detrimental perspective on cognitive biases and claim that certain cognitive biases can be beneficial when accounted for by recommender systems. Concretely, we provide empirical evidence that biases such as the feature-positive effect, the Ikea effect, and cultural homophily can be observed in various components of the recommendation pipeline, including input data (such as ratings or side information), the recommendation algorithm or model (and consequently the recommended items), and user interactions with the system. In three small experiments covering the recruitment and entertainment domains, we study the pervasiveness of the aforementioned biases. We ultimately advocate for a prejudice-free consideration of cognitive biases to improve user and item models as well as recommendation algorithms.
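The cultural homophily mentioned in the abstract can be illustrated with a toy measurement; this is a minimal sketch, not the paper's actual protocol, and the country labels, catalog, and listening log below are invented for illustration:

```python
from collections import Counter

def homophily_lift(interactions, item_country, user_country, catalog):
    """Ratio of a user's observed same-country interaction share to the
    share expected under country-agnostic consumption of the catalog.
    Values well above 1 hint at cultural homophily in the input data."""
    same = sum(1 for item in interactions if item_country[item] == user_country)
    observed = same / len(interactions)
    counts = Counter(item_country[item] for item in catalog)
    expected = counts[user_country] / len(catalog)
    return observed / expected

# Hypothetical listening log: a US user, a catalog where 3 of 8 artists are US.
item_country = {"a": "US", "b": "US", "c": "SE", "d": "BR", "e": "KR",
                "f": "US", "g": "DE", "h": "JP"}
catalog = list(item_country)        # expected same-country share: 3/8 = 0.375
log = ["a", "b", "f", "a", "c"]     # observed same-country share: 4/5 = 0.8
print(homophily_lift(log, item_country, "US", catalog))  # ≈ 2.13
```

A lift near 1 would mean the user consumes same-country items roughly in proportion to their availability; the inflated value here mimics the homophilic interaction data the paper reports observing in the entertainment domain.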
Related papers
- Cognitive Biases in Large Language Models for News Recommendation [68.90354828533535]
This paper explores the potential impact of cognitive biases on large language models (LLMs) based news recommender systems.
We discuss strategies to mitigate these biases through data augmentation, prompt engineering and learning algorithms aspects.
arXiv Detail & Related papers (2024-10-03T18:42:07Z)
- Transparency, Privacy, and Fairness in Recommender Systems [0.19036571490366497]
This habilitation elaborates on aspects related to (i) transparency and cognitive models, (ii) privacy and limited preference information, and (iii) fairness and popularity bias in recommender systems.
arXiv Detail & Related papers (2024-06-17T08:37:14Z)
- Source Echo Chamber: Exploring the Escalation of Source Bias in User, Data, and Recommender System Feedback Loop [65.23044868332693]
We investigate the impact of source bias on the realm of recommender systems.
We show the prevalence of source bias and reveal a potential digital echo chamber with source bias amplification.
We introduce a black-box debiasing method that maintains model impartiality towards both human-generated content (HGC) and AI-generated content (AIGC).
arXiv Detail & Related papers (2024-05-28T09:34:50Z)
- A First Look at Selection Bias in Preference Elicitation for Recommendation [64.44255178199846]
We study the effect of selection bias in preference elicitation on the resulting recommendations.
A big hurdle is the lack of any publicly available dataset that has preference elicitation interactions.
We propose a simulation of a topic-based preference elicitation process.
arXiv Detail & Related papers (2024-05-01T14:56:56Z)
- Cognitive Bias in Decision-Making with LLMs [19.87475562475802]
Large language models (LLMs) offer significant potential as tools to support an expanding range of decision-making tasks.
LLMs have been shown to inherit societal biases against protected groups, as well as be subject to bias functionally resembling cognitive bias.
Our work introduces BiasBuster, a framework designed to uncover, evaluate, and mitigate cognitive bias in LLMs.
arXiv Detail & Related papers (2024-02-25T02:35:56Z)
- Personalized Detection of Cognitive Biases in Actions of Users from Their Logs: Anchoring and Recency Biases [9.445205340175555]
We focus on two cognitive biases - anchoring and recency.
Within computer science, the recognition of cognitive biases has so far been studied largely in information retrieval.
We offer a principled approach along with Machine Learning to detect these two cognitive biases from Web logs of users' actions.
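A recency bias like the one this entry targets can be illustrated with a toy heuristic over interaction logs; this is only a sketch under invented data, not the paper's principled machine-learning approach:

```python
def recency_score(sessions):
    """Mean normalized position (0 = shown earliest, 1 = shown latest) of the
    chosen item across sessions. A user choosing uniformly at random averages
    about 0.5; scores well above 0.5 hint at recency bias."""
    scores = []
    for shown, chosen in sessions:
        rank = shown.index(chosen)
        scores.append(rank / (len(shown) - 1))
    return sum(scores) / len(scores)

# Hypothetical logs: (items shown in order, item the user picked).
sessions = [(["x", "y", "z"], "z"),
            (["p", "q", "r", "s"], "s"),
            (["m", "n", "o"], "n")]
print(recency_score(sessions))  # ≈ 0.83, well above the 0.5 random baseline
```

A real detector would control for position effects in the interface and model each user individually, which is what distinguishes the paper's personalized ML approach from this aggregate heuristic.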
arXiv Detail & Related papers (2022-06-30T08:51:15Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove theoretically that this offsets the influence of user/item propensity on learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- Correcting the User Feedback-Loop Bias for Recommendation Systems [34.44834423714441]
We propose a systematic and dynamic way to correct user feedback-loop bias in recommendation systems.
Our method includes a deep-learning component to learn each user's dynamic rating history embedding.
We empirically validate the existence of such user feedback-loop bias in real-world recommendation systems.
arXiv Detail & Related papers (2021-09-13T15:02:55Z)
- Heterogeneous Demand Effects of Recommendation Strategies in a Mobile Application: Evidence from Econometric Models and Machine-Learning Instruments [73.7716728492574]
We study the effectiveness of various recommendation strategies in the mobile channel and their impact on consumers' utility and demand levels for individual products.
We find significant differences in effectiveness among various recommendation strategies.
We develop novel econometric instruments that capture product differentiation (isolation) based on deep-learning models of user-generated reviews.
arXiv Detail & Related papers (2021-02-20T22:58:54Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
- Modeling and Counteracting Exposure Bias in Recommender Systems [0.0]
We study the bias inherent in widely used recommendation strategies such as matrix factorization.
We propose new debiasing strategies for recommender systems.
Our results show that recommender systems are biased and depend on the prior exposure of the user.
arXiv Detail & Related papers (2020-01-01T00:12:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.