Bias vs Bias -- Dawn of Justice: A Fair Fight in Recommendation Systems
- URL: http://arxiv.org/abs/2506.18327v1
- Date: Mon, 23 Jun 2025 06:19:02 GMT
- Title: Bias vs Bias -- Dawn of Justice: A Fair Fight in Recommendation Systems
- Authors: Tahsin Alamgir Kheya, Mohamed Reda Bouadjenek, Sunil Aryal
- Abstract summary: We propose a fairness-aware re-ranking approach that helps mitigate bias in different categories of items. We show how our approach can mitigate bias on multiple sensitive attributes, including gender, age, and occupation. Our results show how this approach helps mitigate social bias with little to no degradation in performance.
- Score: 2.124791625488617
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recommendation systems play a crucial role in our daily lives by shaping user experience across various domains, including e-commerce, job advertisements, and entertainment. Given the vital role of such systems in our lives, practitioners must ensure they do not produce unfair and imbalanced recommendations. Previous work addressing bias in recommendations has overlooked bias in certain item categories, potentially leaving some biases unaddressed. Additionally, most previous work on fair re-ranking has focused on binary sensitive attributes. In this paper, we address these issues by proposing a fairness-aware re-ranking approach that helps mitigate bias in different categories of items. This re-ranking approach leverages existing biases to correct disparities in recommendations across various demographic groups. We show how our approach can mitigate bias on multiple sensitive attributes, including gender, age, and occupation. We experiment on three real-world datasets to evaluate the effectiveness of our re-ranking scheme in mitigating bias in recommendations. Our results show that this approach mitigates social bias with little to no degradation in performance.
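The abstract does not include code, but a greedy category-aware re-ranker gives a feel for the idea: rebuild the top-k list from a base recommender's scores while penalising item categories that exceed a target exposure share. The sketch below is a minimal illustration under those assumptions; the penalty form, `lam`, and `target_share` are illustrative, not the authors' algorithm.

```python
from typing import Dict, List, Tuple

def fairness_aware_rerank(
    candidates: List[Tuple[str, float]],  # (item_id, base_score), best first
    item_category: Dict[str, str],        # item_id -> category label
    target_share: Dict[str, float],       # desired exposure share per category
    k: int = 10,
    lam: float = 0.5,                     # trade-off: relevance vs. balance
) -> List[str]:
    """Greedily build a top-k list, penalising items whose category is
    already over-represented relative to its target share.
    Assumes every category returned by item_category appears in target_share."""
    chosen: List[str] = []
    counts = {c: 0 for c in target_share}
    remaining = dict(candidates)
    for _ in range(min(k, len(remaining))):
        best_item, best_util = None, float("-inf")
        for item, score in remaining.items():
            cat = item_category[item]
            # category share of the list if this item were added
            new_share = (counts[cat] + 1) / (len(chosen) + 1)
            over = max(0.0, new_share - target_share[cat])
            util = score - lam * over  # relevance minus over-exposure penalty
            if util > best_util:
                best_item, best_util = item, util
        chosen.append(best_item)
        counts[item_category[best_item]] += 1
        del remaining[best_item]
    return chosen
```

For instance, passing `target_share={"male-dominated": 0.3, "female-dominated": 0.3, "neutral": 0.4}` would trade a small amount of relevance for more balanced exposure across item categories.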
Related papers
- Why Multi-Interest Fairness Matters: Hypergraph Contrastive Multi-Interest Learning for Fair Conversational Recommender System [55.39026603611269]
We propose a novel framework, Hypergraph Contrastive Multi-Interest Learning for Fair Conversational Recommender System (HyFairCRS). HyFairCRS aims to promote multi-interest diversity fairness in dynamic and interactive Conversational Recommender Systems (CRSs). Experiments on two CRS-based datasets show that HyFairCRS achieves new state-of-the-art performance while effectively alleviating unfairness.
arXiv Detail & Related papers (2025-07-01T11:39:42Z)
- Unmasking Gender Bias in Recommendation Systems and Enhancing Category-Aware Fairness [2.124791625488617]
We introduce a set of comprehensive metrics for gender bias in recommendations. We show the importance of evaluating fairness at a more granular level. We show that employing a category-aware fairness metric as a regularization term alongside the main recommendation loss during training can effectively minimize bias in the models' output.
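A minimal sketch of what such a composite objective can look like is given below; the particular gap measure, the binary group encoding, and the weight `lam` are illustrative assumptions, not the paper's exact category-aware metric.

```python
import torch

def category_fairness_gap(scores: torch.Tensor,
                          group: torch.Tensor,     # 0/1 per example (e.g. gender)
                          category: torch.Tensor,  # item-category id per example
                          n_categories: int) -> torch.Tensor:
    """Per-category absolute gap between the mean scores given to the two
    user groups, averaged over categories with members from both groups."""
    gaps = []
    for c in range(n_categories):
        in_cat = category == c
        g0 = scores[in_cat & (group == 0)]
        g1 = scores[in_cat & (group == 1)]
        if len(g0) > 0 and len(g1) > 0:
            gaps.append((g0.mean() - g1.mean()).abs())
    return torch.stack(gaps).mean() if gaps else scores.new_zeros(())

# During training, the regularized objective would then be:
#   loss = rec_loss + lam * category_fairness_gap(scores, group, category, C)
```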
arXiv Detail & Related papers (2025-02-25T07:37:28Z)
- Reducing Popularity Influence by Addressing Position Bias [0.0]
We show that position debiasing can effectively reduce the skew in item popularity induced by position bias through a feedback loop, and that it can significantly improve assortment utilization without any degradation in user engagement or financial metrics.
arXiv Detail & Related papers (2024-12-11T21:16:37Z)
- Unveiling and Mitigating Bias in Large Language Model Recommendations: A Path to Fairness [3.5297361401370044]
This study explores the interplay between bias and LLM-based recommendation systems, focusing on music, song, and book recommendations across diverse demographic and cultural groups. Our findings reveal that bias in these systems is deeply ingrained, yet even simple interventions like prompt engineering can significantly reduce it.
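A lightweight intervention of the kind the summary mentions might look like the following hypothetical prompt wrapper; the wording is an assumption, not one of the study's prompts.

```python
def debiased_prompt(user_request: str) -> str:
    """Wrap a recommendation request with an explicit fairness instruction,
    a simple prompt-engineering intervention of the kind the study evaluates."""
    return (
        "You are a recommendation assistant. Base your suggestions only on "
        "the stated preferences. Do not let the user's gender, age, "
        "nationality, or cultural background influence which items you "
        "recommend.\n\nRequest: " + user_request
    )
```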
arXiv Detail & Related papers (2024-09-17T01:37:57Z)
- Going Beyond Popularity and Positivity Bias: Correcting for Multifactorial Bias in Recommender Systems [74.47680026838128]
Two typical forms of bias in user interaction data with recommender systems (RSs) are popularity bias and positivity bias.
We consider multifactorial selection bias affected by both item and rating value factors.
We propose smoothing and alternating gradient descent techniques to reduce variance and improve the robustness of the resulting optimization.
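Corrections of this kind are commonly implemented as inverse-propensity-scored (IPS) objectives. The sketch below is an illustration in that spirit, not the paper's estimator: the multifactorial propensity (depending on both item and rating value) is assumed given, and a smoothing exponent stands in for the proposed variance-reduction techniques.

```python
import numpy as np

def smoothed_ips_loss(pred: np.ndarray,
                      rating: np.ndarray,
                      propensity: np.ndarray,
                      tau: float = 0.5) -> float:
    """Inverse-propensity-scored squared error.

    propensity[j] is an estimate of P(rating j is observed), assumed here to
    depend on both the item and the rating value (multifactorial selection
    bias). Raising it to tau < 1 smooths extreme weights to reduce variance."""
    weights = 1.0 / np.power(propensity, tau)
    return float(np.mean(weights * (pred - rating) ** 2))
```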
arXiv Detail & Related papers (2024-04-29T12:18:21Z)
- A Survey on Fairness-aware Recommender Systems [59.23208133653637]
We present concepts of fairness in different recommendation scenarios, comprehensively categorize current advances, and introduce typical methods to promote fairness in different stages of recommender systems.
Next, we delve into the significant influence that fairness-aware recommender systems exert on real-world industrial applications.
arXiv Detail & Related papers (2023-06-01T07:08:22Z)
- Consumer-side Fairness in Recommender Systems: A Systematic Survey of Methods and Evaluation [1.4123323039043334]
Growing awareness of discrimination in machine learning methods motivated both academia and industry to research how fairness can be ensured in recommender systems.
For recommender systems, such issues are well exemplified by occupation recommendation, where biases in historical data may lead to recommender systems relating one gender to lower wages or to the propagation of stereotypes.
This survey serves as a systematic overview and discussion of the current research on consumer-side fairness in recommender systems.
arXiv Detail & Related papers (2023-05-16T10:07:41Z)
- Whole Page Unbiased Learning to Rank [59.52040055543542]
Unbiased Learning to Rank (ULTR) algorithms are proposed to learn an unbiased ranking model from biased click data.
We propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, to automatically find the user behavior model.
Experimental results on a real-world dataset verify the effectiveness of BAL.
arXiv Detail & Related papers (2022-10-19T16:53:08Z)
- Optimising Equal Opportunity Fairness in Model Training [60.0947291284978]
Existing debiasing methods, such as adversarial training and removing protected information from representations, have been shown to reduce bias.
We propose two novel training objectives which directly optimise for the widely used criterion of equal opportunity, and show that they are effective in reducing bias while maintaining high performance over two classification tasks.
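A common way to optimise directly for equal opportunity is to penalise the gap in positive-class performance between protected groups. The differentiable surrogate below is a minimal sketch under that reading, assuming binary labels and two groups; it is not necessarily either of the paper's two objectives.

```python
import torch
import torch.nn.functional as F

def equal_opportunity_loss(logits: torch.Tensor,  # shape (N, 2), binary task
                           labels: torch.Tensor,  # shape (N,), values 0/1
                           group: torch.Tensor,   # shape (N,), values 0/1
                           lam: float = 1.0) -> torch.Tensor:
    """Cross-entropy plus a penalty on the gap, between the two groups, in
    the mean predicted positive probability on positive examples: a smooth
    stand-in for the true-positive-rate gap behind equal opportunity.
    Assumes each group contributes at least one positive example per batch."""
    ce = F.cross_entropy(logits, labels)
    p_pos = logits.softmax(dim=-1)[:, 1]  # P(y=1 | x)
    pos = labels == 1
    tpr0 = p_pos[pos & (group == 0)].mean()
    tpr1 = p_pos[pos & (group == 1)].mean()
    return ce + lam * (tpr0 - tpr1).abs()
```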
arXiv Detail & Related papers (2022-05-05T01:57:58Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this construction offsets the influence of user/item propensity on learning.
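Reading the summary, CPR contrasts two observed interactions (u1, i1) and (u2, i2) against their crossed user-item combinations (u1, i2) and (u2, i1), so that additive user and item propensity terms cancel in the score cycle. The sketch below follows that construction; the softplus surrogate and the sampling of interaction pairs are assumptions.

```python
import torch
import torch.nn.functional as F

def cpr_loss(s_u1_i1: torch.Tensor, s_u2_i2: torch.Tensor,
             s_u1_i2: torch.Tensor, s_u2_i1: torch.Tensor) -> torch.Tensor:
    """Cross-pairwise loss for a batch of sampled interaction pairs.

    (u1, i1) and (u2, i2) are observed interactions; the crossed
    combinations (u1, i2) and (u2, i1) act as contrastive negatives.
    If each score decomposes as relevance plus additive user and item
    propensity terms, those terms cancel in the cycle below, matching the
    abstract's claim of offsetting user/item propensity."""
    margin = (s_u1_i1 + s_u2_i2) - (s_u1_i2 + s_u2_i1)
    return F.softplus(-margin).mean()
```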
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- Correcting the User Feedback-Loop Bias for Recommendation Systems [34.44834423714441]
We propose a systematic and dynamic way to correct user feedback-loop bias in recommendation systems.
Our method includes a deep-learning component to learn each user's dynamic rating history embedding.
We empirically validate the existence of such user feedback-loop bias in real-world recommendation systems.
arXiv Detail & Related papers (2021-09-13T15:02:55Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)