The Unfairness of Multifactorial Bias in Recommendation
- URL: http://arxiv.org/abs/2601.12828v1
- Date: Mon, 19 Jan 2026 08:37:43 GMT
- Title: The Unfairness of Multifactorial Bias in Recommendation
- Authors: Masoud Mansoury, Jin Huang, Mykola Pechenizkiy, Herke van Hoof, Maarten de Rijke
- Abstract summary: Popularity bias and positivity bias are prominent sources of bias in recommender systems. In this work, we examine how multifactorial bias influences item-side fairness. We adapt a percentile-based rating transformation as a pre-processing strategy to mitigate multifactorial bias.
- Score: 68.35079031029616
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Popularity bias and positivity bias are two prominent sources of bias in recommender systems. Both arise from input data, propagate through recommendation models, and lead to unfair or suboptimal outcomes. Popularity bias occurs when a small subset of items receives most interactions, while positivity bias stems from the over-representation of high rating values. Although each bias has been studied independently, their combined effect, which we refer to as multifactorial bias, remains underexplored. In this work, we examine how multifactorial bias influences item-side fairness, focusing on exposure bias, which reflects the unequal visibility of items in recommendation outputs. Through simulation studies, we find that positivity bias is disproportionately concentrated on popular items, further amplifying their over-exposure. Motivated by this insight, we adapt a percentile-based rating transformation as a pre-processing strategy to mitigate multifactorial bias. Experiments using six recommendation algorithms across four public datasets show that this approach improves exposure fairness with negligible accuracy loss. We also demonstrate that integrating this pre-processing step into post-processing fairness pipelines enhances their effectiveness and efficiency, enabling comparable or better fairness with reduced computational cost. These findings highlight the importance of addressing multifactorial bias and demonstrate the practical value of simple, data-driven pre-processing methods for improving fairness in recommender systems.
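The abstract's core idea of a percentile-based rating transformation can be illustrated with a minimal sketch. This is a hedged reconstruction, not the paper's exact formulation: the per-user grouping, the mid-rank handling of ties, and the `(user, item, value)` input format are assumptions made for illustration. The intent is that each raw rating is replaced by its percentile rank among that user's own ratings, flattening the over-representation of high rating values (positivity bias) on a per-user basis.

```python
from collections import defaultdict

def percentile_transform(ratings):
    """Replace each rating with its within-user percentile rank in [0, 1].

    `ratings` is a list of (user, item, value) triples. Ties are handled
    with a mid-rank rule: a value's percentile is the fraction of that
    user's ratings strictly below it, plus half the fraction equal to it.
    """
    by_user = defaultdict(list)
    for user, _item, value in ratings:
        by_user[user].append(value)

    transformed = []
    for user, item, value in ratings:
        values = by_user[user]
        n = len(values)
        below = sum(v < value for v in values)   # ratings strictly lower
        ties = sum(v == value for v in values)   # ratings equal (incl. self)
        pct = (below + 0.5 * ties) / n           # mid-rank percentile
        transformed.append((user, item, pct))
    return transformed

# Hypothetical toy data: user u1 rates mostly high, u2 more evenly.
data = [("u1", "i1", 5), ("u1", "i2", 5), ("u1", "i3", 3),
        ("u2", "i1", 2), ("u2", "i4", 4)]
print(percentile_transform(data))
```

Note how u1's two 5-star ratings both map to the same mid-range percentile rather than the top of the scale, which is the mechanism by which such a transformation can dampen the concentration of positivity bias on popular items.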
Related papers
- Bias vs Bias -- Dawn of Justice: A Fair Fight in Recommendation Systems [2.124791625488617]
We propose a fairness-aware re-ranking approach that helps mitigate bias in different categories of items. We show how our approach can mitigate bias on multiple sensitive attributes, including gender, age, and occupation. Our results show how this approach helps mitigate social bias with little to no degradation in performance.
arXiv Detail & Related papers (2025-06-23T06:19:02Z)
- Unbiased Collaborative Filtering with Fair Sampling [31.8123420283795]
We show that popularity bias arises from the influence of propensity factors during training. We propose a fair sampling (FS) method that ensures each user and each item has an equal likelihood of being selected as both positive and negative instances.
arXiv Detail & Related papers (2025-02-19T15:59:49Z)
- Correcting Popularity Bias in Recommender Systems via Item Loss Equalization [1.7771454131646311]
A small set of popular items dominate the recommendation results due to their high interaction rates. This phenomenon disproportionately benefits users with mainstream tastes while neglecting those with niche interests. We propose an in-processing approach to address this issue by intervening in the training process of recommendation models.
arXiv Detail & Related papers (2024-10-07T08:34:18Z)
- Be Aware of the Neighborhood Effect: Modeling Selection Bias under Interference [50.95521705711802]
Previous studies have focused on addressing selection bias to achieve unbiased learning of the prediction model.
This paper formally formulates the neighborhood effect as an interference problem from the perspective of causal inference.
We propose a novel ideal loss that can be used to deal with selection bias in the presence of neighborhood effect.
arXiv Detail & Related papers (2024-04-30T15:20:41Z)
- Going Beyond Popularity and Positivity Bias: Correcting for Multifactorial Bias in Recommender Systems [74.47680026838128]
Two typical forms of bias in user interaction data with recommender systems (RSs) are popularity bias and positivity bias.
We consider multifactorial selection bias affected by both item and rating value factors.
We propose smoothing and alternating gradient descent techniques to reduce variance and improve the robustness of its optimization.
arXiv Detail & Related papers (2024-04-29T12:18:21Z)
- Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction [56.17020601803071]
Recent research shows that pre-trained language models (PLMs) suffer from "prompt bias" in factual knowledge extraction.
This paper aims to improve the reliability of existing benchmarks by thoroughly investigating and mitigating prompt bias.
arXiv Detail & Related papers (2024-03-15T02:04:35Z)
- On Comparing Fair Classifiers under Data Bias [42.43344286660331]
We study the effect of varying data biases on the accuracy and fairness of fair classifiers.
Our experiments show how to integrate a measure of data bias risk in the existing fairness dashboards for real-world deployments.
arXiv Detail & Related papers (2023-02-12T13:04:46Z)
- Cross Pairwise Ranking for Unbiased Item Recommendation [57.71258289870123]
We develop a new learning paradigm named Cross Pairwise Ranking (CPR).
CPR achieves unbiased recommendation without knowing the exposure mechanism.
We prove in theory that this way offsets the influence of user/item propensity on the learning.
arXiv Detail & Related papers (2022-04-26T09:20:27Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against the algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.