Hidden Author Bias in Book Recommendation
- URL: http://arxiv.org/abs/2209.00371v1
- Date: Thu, 1 Sep 2022 11:30:22 GMT
- Title: Hidden Author Bias in Book Recommendation
- Authors: Savvina Daniil, Mirjam Cuper, Cynthia C.S. Liem, Jacco van
Ossenbruggen, Laura Hollink
- Abstract summary: Collaborative filtering algorithms have the advantage of not requiring sensitive user or item information to provide recommendations.
We argue that popularity bias often leads to other biases that are not obvious when additional user or item information is not provided to the researcher.
- Score: 4.2628421392139
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Collaborative filtering algorithms have the advantage of not requiring
sensitive user or item information to provide recommendations. However, they
still suffer from fairness related issues, like popularity bias. In this work,
we argue that popularity bias often leads to other biases that are not obvious
when additional user or item information is not provided to the researcher. We
examine our hypothesis in the book recommendation case on a commonly used
dataset with book ratings. We enrich it with author information using publicly
available external sources. We find that popular books are mainly written by US
citizens in the dataset, and that these books tend to be recommended
disproportionately by popular collaborative filtering algorithms compared to the
users' profiles. We conclude that the societal implications of popularity bias
should be further examined by the scholarly community.
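The kind of analysis the abstract describes can be illustrated with a minimal sketch: compare the average popularity of items in each user's profile against the average popularity of the items a collaborative filtering model recommends to them. All names and data below are illustrative assumptions, not the paper's code or dataset.

```python
from collections import Counter

# Toy interaction data: user -> set of rated books (hypothetical ids).
profiles = {
    "u1": {"b1", "b2", "b3"},
    "u2": {"b1", "b4"},
    "u3": {"b1", "b2", "b5"},
}

# Item popularity = number of users who interacted with the item.
popularity = Counter(item for items in profiles.values() for item in items)

def mean_popularity(items):
    """Average popularity of a collection of items."""
    return sum(popularity[i] for i in items) / len(items)

# Hypothetical top-k recommendations produced by some CF model.
recommendations = {
    "u1": ["b1", "b2"],
    "u2": ["b2", "b1"],
    "u3": ["b1", "b2"],
}

# Per-user gap: recommendation popularity minus profile popularity.
# A consistently positive gap indicates popularity bias.
for user in profiles:
    gap = mean_popularity(recommendations[user]) - mean_popularity(profiles[user])
    print(f"{user}: popularity gap = {gap:+.2f}")
```

In the paper's setting, the same comparison is enriched with author metadata, so the gap can be broken down by author attributes such as citizenship.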
Related papers
- From Lists to Emojis: How Format Bias Affects Model Alignment [67.08430328350327]
We study format biases in reinforcement learning from human feedback.
Many widely-used preference models, including human evaluators, exhibit strong biases towards specific format patterns.
We show that with a small amount of biased data, we can inject significant bias into the reward model.
arXiv Detail & Related papers (2024-09-18T05:13:18Z)
- Large Language Models as Recommender Systems: A Study of Popularity Bias [46.17953988777199]
Popular items are disproportionately recommended, overshadowing less popular but potentially relevant items.
Recent advancements have seen the integration of general-purpose Large Language Models into recommender systems.
Our study explores whether LLMs contribute to or can alleviate popularity bias in recommender systems.
arXiv Detail & Related papers (2024-06-03T12:53:37Z)
- Metrics for popularity bias in dynamic recommender systems [0.0]
Biased recommendations may lead to decisions that can potentially have adverse effects on individuals, sensitive user groups, and society.
This paper focuses on quantifying popularity bias that stems directly from the output of RecSys models.
We propose four metrics to quantify popularity bias in RecSys over time, in a dynamic setting, across different sensitive user groups.
arXiv Detail & Related papers (2023-10-12T16:15:30Z)
- A Survey on Popularity Bias in Recommender Systems [5.952279576277445]
We discuss the potential reasons for popularity bias and review existing approaches to detect, mitigate and quantify popularity bias in recommender systems.
We critically discuss today's literature, observing that the research is almost entirely based on computational experiments and on certain assumptions regarding the practical effects of including long-tail items in the recommendations.
arXiv Detail & Related papers (2023-08-02T12:58:11Z)
- Whole Page Unbiased Learning to Rank [59.52040055543542]
Unbiased Learning to Rank (ULTR) algorithms are proposed to learn an unbiased ranking model from biased click data.
We propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, to automatically find the user behavior model.
Experimental results on a real-world dataset verify the effectiveness of BAL.
arXiv Detail & Related papers (2022-10-19T16:53:08Z)
- Tag-Aware Document Representation for Research Paper Recommendation [68.8204255655161]
We propose a hybrid approach that leverages deep semantic representation of research papers based on social tags assigned by users.
The proposed model is effective in recommending research papers even when the rating data is very sparse.
arXiv Detail & Related papers (2022-09-08T09:13:07Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- The SAME score: Improved cosine based bias score for word embeddings [49.75878234192369]
We introduce SAME, a novel bias score for semantic bias in embeddings.
We show that SAME is capable of measuring semantic bias and identify potential causes for social bias in downstream tasks.
arXiv Detail & Related papers (2022-03-28T09:28:13Z)
- The Unfairness of Popularity Bias in Book Recommendation [0.0]
Popularity bias refers to the problem that popular items are recommended frequently while less popular items are recommended rarely or not at all.
We analyze the well-known Book-Crossing dataset and define three user groups based on their tendency towards popular items.
Our results indicate that most state-of-the-art recommendation algorithms suffer from popularity bias in the book domain.
arXiv Detail & Related papers (2022-02-27T20:21:46Z)
- An Adaptive Boosting Technique to Mitigate Popularity Bias in Recommender System [1.5800354337004194]
A typical accuracy measure is biased towards popular items, i.e., it promotes better accuracy for popular items compared to non-popular items.
This paper considers a metric that measures the popularity bias as the difference in error on popular items and non-popular items.
Motivated by the fair boosting algorithm on classification, we propose an algorithm that reduces the popularity bias present in the data.
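The metric described above, the difference in prediction error between popular and non-popular items, can be sketched as follows. This is an illustrative implementation under assumed names and data, not the paper's code; the paper itself pairs this metric with a boosting algorithm to reduce the bias.

```python
def popularity_bias(errors, is_popular):
    """Mean error on non-popular items minus mean error on popular items.

    errors: per-rating absolute errors
    is_popular: parallel booleans marking whether the rated item is popular
    A positive value means the model fits popular items better.
    """
    pop = [e for e, p in zip(errors, is_popular) if p]
    non = [e for e, p in zip(errors, is_popular) if not p]
    return sum(non) / len(non) - sum(pop) / len(pop)

# Hypothetical example: lower error on popular items yields a positive bias.
errors = [0.25, 0.25, 1.0, 1.0]
flags = [True, True, False, False]
print(popularity_bias(errors, flags))  # prints 0.75
```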
arXiv Detail & Related papers (2021-09-13T03:04:55Z)
- Correcting Exposure Bias for Link Recommendation [31.799185352323807]
Exposure bias can arise when users are systematically underexposed to certain relevant items.
We propose estimators that leverage known exposure probabilities to mitigate this bias.
Our methods lead to greater diversity in the recommended papers' fields of study.
arXiv Detail & Related papers (2021-06-13T16:51:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.