Metrics for popularity bias in dynamic recommender systems
- URL: http://arxiv.org/abs/2310.08455v1
- Date: Thu, 12 Oct 2023 16:15:30 GMT
- Title: Metrics for popularity bias in dynamic recommender systems
- Authors: Valentijn Braun, Debarati Bhaumik, and Diptish Dey
- Abstract summary: Biased recommendations may lead to decisions that can potentially have adverse effects on individuals, sensitive user groups, and society.
This paper focuses on quantifying popularity bias that stems directly from the output of RecSys models.
Four metrics are proposed to quantify popularity bias in RecSys over time, in a dynamic setting, across different sensitive user groups.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Albeit the widespread application of recommender systems (RecSys) in our
daily lives, rather limited research has been done on quantifying unfairness
and biases present in such systems. Prior work largely focuses on determining
whether a RecSys is discriminating or not but does not compute the amount of
bias present in these systems. Biased recommendations may lead to decisions
that can potentially have adverse effects on individuals, sensitive user
groups, and society. Hence, it is important to quantify these biases for fair
and safe commercial applications of these systems. This paper focuses on
quantifying popularity bias that stems directly from the output of RecSys
models, leading to over-recommendation of popular items that are likely to be
misaligned with user preferences. Four metrics are proposed to quantify
popularity bias in RecSys over time, in a dynamic setting, across different
sensitive user groups. These metrics have been demonstrated for four collaborative
filtering based RecSys algorithms trained on two commonly used benchmark
datasets in the literature. Results obtained show that the metrics proposed
provide a comprehensive understanding of growing disparities in treatment
between sensitive groups over time when used conjointly.
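The four metrics themselves are not spelled out in this summary. As a purely illustrative sketch (function and variable names are assumed, not taken from the paper), one widely used popularity-bias quantity of this kind is average recommendation popularity (ARP), which can be tracked per sensitive user group across time steps to surface growing disparities:

```python
def average_recommendation_popularity(recs_by_user, item_popularity):
    """Mean popularity of recommended items, averaged over users.

    recs_by_user: {user_id: [item_id, ...]} recommendation lists
    item_popularity: {item_id: interaction count in the training data}
    """
    per_user = [
        sum(item_popularity.get(i, 0) for i in items) / len(items)
        for items in recs_by_user.values() if items
    ]
    return sum(per_user) / len(per_user) if per_user else 0.0

def group_arp_over_time(snapshots, item_popularity, user_group):
    """Track ARP per sensitive group across time steps.

    snapshots: list of {user_id: [item_id, ...]} dicts, one per time step
    user_group: {user_id: group_label}
    Returns {group_label: [ARP at t0, ARP at t1, ...]}.
    """
    groups = set(user_group.values())
    series = {g: [] for g in groups}
    for recs in snapshots:
        for g in groups:
            # Restrict the snapshot to users belonging to group g.
            sub = {u: r for u, r in recs.items() if user_group.get(u) == g}
            series[g].append(average_recommendation_popularity(sub, item_popularity))
    return series
```

A widening gap between the per-group ARP series over successive snapshots is the kind of dynamic disparity the proposed metrics are designed to expose.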
Related papers
- Measuring and Addressing Indexical Bias in Information Retrieval [69.7897730778898]
PAIR framework supports automatic bias audits for ranked documents or entire IR systems.
After introducing DUO, we run an extensive evaluation of 8 IR systems on a new corpus of 32k synthetic and 4.7k natural documents.
A human behavioral study validates our approach, showing that our bias metric can help predict when and how indexical bias will shift a reader's opinion.
arXiv Detail & Related papers (2024-06-06T17:42:37Z) - Going Beyond Popularity and Positivity Bias: Correcting for Multifactorial Bias in Recommender Systems [74.47680026838128]
Two typical forms of bias in user interaction data with recommender systems (RSs) are popularity bias and positivity bias.
We consider multifactorial selection bias affected by both item and rating value factors.
We propose smoothing and alternating gradient descent techniques to reduce variance and improve the robustness of its optimization.
arXiv Detail & Related papers (2024-04-29T12:18:21Z) - GPTBIAS: A Comprehensive Framework for Evaluating Bias in Large Language Models [83.30078426829627]
Large language models (LLMs) have gained popularity and are being widely adopted by a large user community.
The existing evaluation methods have many constraints, and their results exhibit a limited degree of interpretability.
We propose a bias evaluation framework named GPTBIAS that leverages the high performance of LLMs to assess bias in models.
arXiv Detail & Related papers (2023-12-11T12:02:14Z) - Unbiased Learning to Rank with Biased Continuous Feedback [5.561943356123711]
Unbiased learning-to-rank (LTR) algorithms are verified to model the relative relevance accurately based on noisy feedback.
To provide personalized high-quality recommendation results, recommender systems need to model both categorical and continuous biased feedback.
We introduce the pairwise trust bias to separate the position bias, trust bias, and user relevance explicitly.
Experiment results on public benchmark datasets and internal live traffic of a large-scale recommender system at Tencent News show superior results for continuous labels.
arXiv Detail & Related papers (2023-03-08T02:14:08Z) - Managing multi-facet bias in collaborative filtering recommender systems [0.0]
Biased recommendations across groups of items can endanger the interests of item providers along with causing user dissatisfaction with the system.
This study aims to manage a new type of intersectional bias regarding the geographical origin and popularity of items in the output of state-of-the-art collaborative filtering recommender algorithms.
Extensive experiments on two real-world datasets of movies and books, enriched with the items' continents of production, show that the proposed algorithm strikes a reasonable balance between accuracy and both types of the mentioned biases.
arXiv Detail & Related papers (2023-02-21T10:06:01Z) - Joint Multisided Exposure Fairness for Recommendation [76.75990595228666]
This paper formalizes a family of exposure fairness metrics that model the problem jointly from the perspective of both the consumers and producers.
Specifically, we consider group attributes for both types of stakeholders to identify and mitigate fairness concerns that go beyond individual users and items towards more systemic biases in recommendation.
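Exposure-fairness metrics of this kind typically start from a position-based browsing model, in which an item's exposure decays with its rank, and then aggregate exposure over producer (item) groups. The sketch below is a hypothetical illustration of that building block (the decay model and all names are assumptions, not the paper's definitions):

```python
import math

def exposure_at_rank(k):
    # Position-based model: exposure decays logarithmically with rank k (1-indexed).
    return 1.0 / math.log2(k + 1)

def group_exposure(rankings, item_group):
    """Total exposure each producer (item) group receives across rankings.

    rankings: list of ranked item-id lists (one per user or query)
    item_group: {item_id: producer group label}
    """
    exp = {}
    for ranking in rankings:
        for k, item in enumerate(ranking, start=1):
            g = item_group.get(item)
            exp[g] = exp.get(g, 0.0) + exposure_at_rank(k)
    return exp
```

Comparing the aggregate exposure received by different item groups (and, symmetrically, delivered to different consumer groups) is what allows such metrics to detect systemic rather than per-user disparities.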
arXiv Detail & Related papers (2022-04-29T19:13:23Z) - Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z) - Unbiased Pairwise Learning to Rank in Recommender Systems [4.058828240864671]
Unbiased learning to rank algorithms are appealing candidates and have already been applied in many applications with single categorical labels.
We propose a novel unbiased LTR algorithm to tackle the challenges, which innovatively models position bias in the pairwise fashion.
Experiment results on public benchmark datasets and internal live traffic show the superior results of the proposed method for both categorical and continuous labels.
arXiv Detail & Related papers (2021-11-25T06:04:59Z) - Correcting the User Feedback-Loop Bias for Recommendation Systems [34.44834423714441]
We propose a systematic and dynamic way to correct user feedback-loop bias in recommendation systems.
Our method includes a deep-learning component to learn each user's dynamic rating history embedding.
We empirically validated the existence of such user feedback-loop bias in real world recommendation systems.
arXiv Detail & Related papers (2021-09-13T15:02:55Z) - Estimation of Fair Ranking Metrics with Incomplete Judgments [70.37717864975387]
We propose a sampling strategy and estimation technique for four fair ranking metrics.
We formulate a robust and unbiased estimator which can operate even with very limited number of labeled items.
arXiv Detail & Related papers (2021-08-11T10:57:00Z) - User-centered Evaluation of Popularity Bias in Recommender Systems [4.30484058393522]
Recommendation and ranking systems suffer from popularity bias: the tendency of the algorithm to favor a few popular items while under-representing the majority of other items.
In this paper, we show the limitations of the existing metrics to evaluate popularity bias mitigation when we want to assess these algorithms from the users' perspective.
We present an effective approach that mitigates popularity bias from the user-centered point of view.
arXiv Detail & Related papers (2021-03-10T22:12:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.