Addressing bias in Recommender Systems: A Case Study on Data Debiasing Techniques in Mobile Games
- URL: http://arxiv.org/abs/2411.18716v1
- Date: Wed, 27 Nov 2024 19:45:17 GMT
- Title: Addressing bias in Recommender Systems: A Case Study on Data Debiasing Techniques in Mobile Games
- Authors: Yixiong Wang, Maria Paskevich, Hui Wang
- Abstract summary: This case study aims to identify and categorize potential bias within datasets specific to model-based recommendations in mobile games. It reviews debiasing techniques in the existing literature and assesses their effectiveness on real-world data gathered through implicit feedback.
- Score: 3.18175475159604
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The mobile gaming industry, particularly the free-to-play sector, has been around for more than a decade, yet it still experiences rapid growth. The concept of games-as-service requires game developers to pay much more attention to recommendations of content in their games. With recommender systems (RS), the inevitable problem of bias in the data comes hand in hand. A lot of research has been done on the case of bias in RS for online retail or services, but much less is available for the specific case of the game industry. Also, in previous works, various debiasing techniques were tested on explicit feedback datasets, while it is much more common in mobile gaming data to only have implicit feedback. This case study aims to identify and categorize potential bias within datasets specific to model-based recommendations in mobile games, review debiasing techniques in the existing literature, and assess their effectiveness on real-world data gathered through implicit feedback. The effectiveness of these methods is then evaluated based on their debiasing quality, data requirements, and computational demands.
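The abstract reviews debiasing techniques for implicit-feedback data. As an illustration of what such a technique looks like, below is a minimal sketch of inverse propensity scoring (IPS), one standard approach to correcting popularity bias in implicit feedback. This is not the paper's implementation; the function names, propensity model, and hyperparameters are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_propensities(clicks, eta=0.5, floor=0.1):
    """Approximate each item's exposure propensity from its popularity,
    clipped at `floor` to bound the variance of the IPS weights."""
    popularity = clicks.sum(axis=0)
    p = (popularity / popularity.max()) ** eta
    return np.clip(p, floor, 1.0)

def ips_matrix_factorization(clicks, k=8, lr=0.01, reg=0.01, epochs=50):
    """Weighted matrix factorization: each observed click on item i is
    re-weighted by 1 / propensity(i), so interactions with rarely exposed
    items count more and popularity bias is reduced."""
    n_users, n_items = clicks.shape
    p = estimate_propensities(clicks)
    U = 0.1 * rng.standard_normal((n_users, k))
    V = 0.1 * rng.standard_normal((n_items, k))
    # IPS weights for observed clicks; unobserved entries act as weak negatives.
    weights = clicks / p + 0.1 * (1.0 - clicks)
    for _ in range(epochs):
        err = clicks - U @ V.T
        U += lr * ((weights * err) @ V - reg * U)
        V += lr * ((weights * err).T @ U - reg * V)
    return U, V

# Tiny synthetic check: 20 users, 10 items with popularity-skewed exposure.
clicks = (rng.random((20, 10)) < np.linspace(0.9, 0.1, 10)).astype(float)
U, V = ips_matrix_factorization(clicks)
scores = U @ V.T  # debiased relevance estimates, shape (20, 10)
```

The propensity floor is the usual practical compromise the abstract alludes to: lower floors remove more bias but inflate the variance of the weights, which is part of why the paper evaluates methods on debiasing quality and computational demands jointly.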
Related papers
- A Causal Information-Flow Framework for Unbiased Learning-to-Rank [52.54102347581931]
In web search and recommendation systems, user clicks are widely used to train ranking models.
We introduce a novel causal learning-based ranking framework that extends Unbiased Learning-to-Rank.
Our method consistently reduces measured bias leakage and improves ranking performance.
arXiv Detail & Related papers (2026-01-09T07:19:35Z) - Bias vs Bias -- Dawn of Justice: A Fair Fight in Recommendation Systems [2.124791625488617]
We propose a fairness-aware re-ranking approach that helps mitigate bias in different categories of items.
We show how our approach can mitigate bias on multiple sensitive attributes, including gender, age, and occupation.
Our results show how this approach helps mitigate social bias with little to no degradation in performance.
arXiv Detail & Related papers (2025-06-23T06:19:02Z) - Investigating Popularity Bias Amplification in Recommender Systems Employed in the Entertainment Domain [0.19036571490366497]
This work summarizes our research on investigating the amplification of popularity bias in recommender systems within the entertainment sector.
We demonstrate that an item's recommendation frequency is positively correlated with its popularity.
We aim to better understand the connection between recommendation accuracy, calibration quality of algorithms, and popularity bias amplification.
arXiv Detail & Related papers (2025-04-07T05:58:01Z) - Going Beyond Popularity and Positivity Bias: Correcting for Multifactorial Bias in Recommender Systems [74.47680026838128]
Two typical forms of bias in user interaction data with recommender systems (RSs) are popularity bias and positivity bias.
We consider multifactorial selection bias affected by both item and rating value factors.
We propose smoothing and alternating gradient descent techniques to reduce variance and improve the robustness of its optimization.
arXiv Detail & Related papers (2024-04-29T12:18:21Z) - DeNetDM: Debiasing by Network Depth Modulation [6.550893772143]
We present DeNetDM, a novel debiasing method that uses network depth modulation as a way of developing robustness to spurious correlations.
Our method requires no bias annotations or explicit data augmentation while performing on par with approaches that require either or both.
We demonstrate that DeNetDM outperforms existing debiasing techniques on both synthetic and real-world datasets by 5%.
arXiv Detail & Related papers (2024-03-28T22:17:19Z) - Take Care of Your Prompt Bias! Investigating and Mitigating Prompt Bias in Factual Knowledge Extraction [56.17020601803071]
Recent research shows that pre-trained language models (PLMs) suffer from "prompt bias" in factual knowledge extraction.
This paper aims to improve the reliability of existing benchmarks by thoroughly investigating and mitigating prompt bias.
arXiv Detail & Related papers (2024-03-15T02:04:35Z) - Targeted Data Augmentation for bias mitigation [0.0]
We introduce a novel and efficient approach for addressing biases called Targeted Data Augmentation (TDA).
Unlike the laborious task of removing biases, our method proposes to insert biases instead, resulting in improved performance.
To identify biases, we annotated two diverse datasets: a dataset of clinical skin lesions and a dataset of male and female faces.
arXiv Detail & Related papers (2023-08-22T12:25:49Z) - Data Bias Management [17.067962372238135]
We show how bias in data affects end users, where bias is originated, and provide a viewpoint about what we should do about it.
We argue that data bias is not something that should necessarily be removed in all cases, and that research attention should instead shift from bias removal to bias management.
arXiv Detail & Related papers (2023-05-15T10:07:27Z) - Whole Page Unbiased Learning to Rank [59.52040055543542]
Unbiased Learning to Rank (ULTR) algorithms are proposed to learn an unbiased ranking model with biased click data.
We propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, to automatically find the user behavior model.
Experimental results on a real-world dataset verify the effectiveness of BAL.
arXiv Detail & Related papers (2022-10-19T16:53:08Z) - Mitigating Representation Bias in Action Recognition: Algorithms and Benchmarks [76.35271072704384]
Deep learning models perform poorly when applied to videos with rare scenes or objects.
We tackle this problem from two different angles: algorithm and dataset.
We show that the debiased representation can generalize better when transferred to other datasets and tasks.
arXiv Detail & Related papers (2022-09-20T00:30:35Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Representation Bias in Data: A Survey on Identification and Resolution Techniques [26.142021257838564]
Data-driven algorithms are only as good as the data they work with, while data sets, especially social data, often fail to represent minorities adequately.
Representation Bias in data can happen due to various reasons ranging from historical discrimination to selection and sampling biases in the data acquisition and preparation methods.
This paper reviews the literature on identifying and resolving representation bias as a feature of a data set, independent of how the data is consumed later.
arXiv Detail & Related papers (2022-03-22T16:30:22Z) - Towards Measuring Bias in Image Classification [61.802949761385]
Convolutional Neural Networks (CNNs) have become state-of-the-art for the main computer vision tasks.
However, due to their complex structure, their decisions are hard to understand, which limits their use in some industrial contexts.
We present a systematic approach to uncover data bias by means of attribution maps.
arXiv Detail & Related papers (2021-07-01T10:50:39Z) - AutoDebias: Learning to Debias for Recommendation [43.84313723394282]
We propose AutoDebias, which leverages another (small) set of uniform data to optimize the debiasing parameters.
We derive the generalization bound for AutoDebias and prove its ability to acquire the appropriate debiasing strategy.
arXiv Detail & Related papers (2021-05-10T08:03:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.