The Unfairness of Popularity Bias in Book Recommendation
- URL: http://arxiv.org/abs/2202.13446v1
- Date: Sun, 27 Feb 2022 20:21:46 GMT
- Title: The Unfairness of Popularity Bias in Book Recommendation
- Authors: Mohammadmehdi Naghiaei, Hossein A. Rahmani, Mahdi Dehghan
- Abstract summary: Popularity bias refers to the problem that popular items are recommended frequently while less popular items are recommended rarely or not at all.
We analyze the well-known Book-Crossing dataset and define three user groups based on their tendency towards popular items.
Our results indicate that most state-of-the-art recommendation algorithms suffer from popularity bias in the book domain.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent studies have shown that recommendation systems commonly suffer from
popularity bias. Popularity bias refers to the problem that popular items
(i.e., frequently rated items) are recommended frequently while less popular
items are recommended rarely or not at all. Researchers have adopted two
approaches to examining popularity bias: (i) from the users' perspective, by
analyzing how far a recommendation system deviates from users' expectations of
receiving popular items, and (ii) by analyzing the amount of exposure that long-tail
items receive, measured by overall catalog coverage and novelty. In this paper,
we examine the first point of view in the book domain, although the findings
may be applied to other domains as well. To this end, we analyze the well-known
Book-Crossing dataset and define three user groups based on their tendency
towards popular items (i.e., Niche, Diverse, Bestseller-focused). Further, we
evaluate the performance of nine state-of-the-art recommendation algorithms and
two baselines (i.e., Random, MostPop) from both the accuracy (e.g., NDCG,
Precision, Recall) and popularity bias perspectives. Our results indicate that
most state-of-the-art recommendation algorithms suffer from popularity bias in
the book domain, and fail to meet users' expectations with Niche and Diverse
tastes despite having a larger profile size. Conversely, Bestseller-focused
users are more likely to receive high-quality recommendations, both in terms of
fairness and personalization. Furthermore, our study shows a tradeoff between
personalization and unfairness of popularity bias in recommendation algorithms
for users belonging to the Diverse and Bestseller groups, that is, algorithms
with high capability of personalization suffer from the unfairness of
popularity bias.
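The grouping described above — splitting users into Niche, Diverse, and Bestseller-focused segments by their tendency towards popular items — can be sketched as follows. This is a minimal illustration, not the authors' exact procedure: the head-item fraction (`top_frac`) and the tercile cut points are illustrative assumptions.

```python
# Sketch: partition users by the fraction of their rating profile that
# falls in the "short head" of most-rated items. Thresholds are assumptions.
from collections import Counter

def split_users(ratings, top_frac=0.2, niche_q=1/3, diverse_q=2/3):
    """ratings: list of (user, item) interactions."""
    item_pop = Counter(item for _, item in ratings)
    # "Popular" items: the top `top_frac` most-rated items (the short head).
    n_head = max(1, int(top_frac * len(item_pop)))
    head = {item for item, _ in item_pop.most_common(n_head)}

    profiles = {}
    for user, item in ratings:
        profiles.setdefault(user, []).append(item)

    # Each user's tendency = fraction of their profile inside the short head.
    tendency = {u: sum(i in head for i in items) / len(items)
                for u, items in profiles.items()}

    # Rank users by tendency and cut at the chosen quantile boundaries:
    # lowest third -> Niche, middle -> Diverse, highest -> Bestseller.
    ranked = sorted(tendency, key=tendency.get)
    n = len(ranked)
    return {"Niche": ranked[: int(n * niche_q)],
            "Diverse": ranked[int(n * niche_q): int(n * diverse_q)],
            "Bestseller": ranked[int(n * diverse_q):]}, tendency
```

With user tendencies in hand, accuracy metrics such as NDCG, Precision, and Recall can then be reported per group, which is how the deviation between a group's expected and delivered popularity becomes visible.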
Related papers
- Large Language Models as Recommender Systems: A Study of Popularity Bias [46.17953988777199]
Popular items are disproportionately recommended, overshadowing less popular but potentially relevant items.
Recent advancements have seen the integration of general-purpose Large Language Models into recommender systems.
Our study explores whether LLMs contribute to or can alleviate popularity bias in recommender systems.
arXiv Detail & Related papers (2024-06-03T12:53:37Z) - GPTBIAS: A Comprehensive Framework for Evaluating Bias in Large Language
Models [83.30078426829627]
Large language models (LLMs) have gained popularity and are being widely adopted by a large user community.
The existing evaluation methods have many constraints, and their results exhibit a limited degree of interpretability.
We propose a bias evaluation framework named GPTBIAS that leverages the high performance of LLMs to assess bias in models.
arXiv Detail & Related papers (2023-12-11T12:02:14Z) - Fairness Through Domain Awareness: Mitigating Popularity Bias For Music
Discovery [56.77435520571752]
We explore the intrinsic relationship between music discovery and popularity bias.
We propose a domain-aware, individual fairness-based approach which addresses popularity bias in graph neural network (GNN)-based recommender systems.
Our approach uses individual fairness to reflect a ground truth listening experience, i.e., if two songs sound similar, this similarity should be reflected in their representations.
arXiv Detail & Related papers (2023-08-28T14:12:25Z) - A Survey on Popularity Bias in Recommender Systems [5.952279576277445]
We discuss the potential reasons for popularity bias and review existing approaches to detect, mitigate and quantify popularity bias in recommender systems.
We critically discuss today's literature, where we observe that the research is almost entirely based on computational experiments and on certain assumptions regarding the practical effects of including long-tail items in the recommendations.
arXiv Detail & Related papers (2023-08-02T12:58:11Z) - Ranking with Popularity Bias: User Welfare under Self-Amplification
Dynamics [19.59766711993837]
We propose and theoretically analyze a general mechanism by which item popularity, item quality, and position bias jointly impact user choice.
We show that naive popularity-biased recommenders induce linear regret by conflating item quality and popularity.
arXiv Detail & Related papers (2023-05-24T22:38:19Z) - Hidden Author Bias in Book Recommendation [4.2628421392139]
Collaborative filtering algorithms have the advantage of not requiring sensitive user or item information to provide recommendations.
We argue that popularity bias often leads to other biases that are not obvious when additional user or item information is not provided to the researcher.
arXiv Detail & Related papers (2022-09-01T11:30:22Z) - Reconciling the Quality vs Popularity Dichotomy in Online Cultural
Markets [62.146882023375746]
We propose a model of an idealized online cultural market in which $N$ items, endowed with a hidden quality metric, are recommended to users by a ranking algorithm possibly biased by the current items' popularity.
Our goal is to better understand the underlying mechanisms of the well-known fact that popularity bias can prevent higher-quality items from becoming more popular than lower-quality items, producing an undesirable misalignment between quality and popularity rankings.
arXiv Detail & Related papers (2022-04-28T14:36:11Z) - Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR)
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z) - An Adaptive Boosting Technique to Mitigate Popularity Bias in
Recommender System [1.5800354337004194]
A typical accuracy measure is biased towards popular items, i.e., it promotes better accuracy for popular items compared to non-popular items.
This paper considers a metric that measures the popularity bias as the difference in error on popular items and non-popular items.
Motivated by the fair boosting algorithm on classification, we propose an algorithm that reduces the popularity bias present in the data.
arXiv Detail & Related papers (2021-09-13T03:04:55Z) - User-centered Evaluation of Popularity Bias in Recommender Systems [4.30484058393522]
Recommendation and ranking systems suffer from popularity bias; the tendency of the algorithm to favor a few popular items while under-representing the majority of other items.
In this paper, we show the limitations of the existing metrics to evaluate popularity bias mitigation when we want to assess these algorithms from the users' perspective.
We present an effective approach that mitigates popularity bias from the user-centered point of view.
arXiv Detail & Related papers (2021-03-10T22:12:51Z) - SetRank: A Setwise Bayesian Approach for Collaborative Ranking from
Implicit Feedback [50.13745601531148]
We propose a novel setwise Bayesian approach for collaborative ranking, namely SetRank, to accommodate the characteristics of implicit feedback in recommender system.
Specifically, SetRank aims at maximizing the posterior probability of novel setwise preference comparisons.
We also present the theoretical analysis of SetRank to show that the bound of excess risk can be proportional to $\sqrt{M/N}$.
arXiv Detail & Related papers (2020-02-23T06:40:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.