Algorithms are not neutral: Bias in collaborative filtering
- URL: http://arxiv.org/abs/2105.01031v1
- Date: Mon, 3 May 2021 17:28:43 GMT
- Title: Algorithms are not neutral: Bias in collaborative filtering
- Authors: Catherine Stinson
- Abstract summary: Discussions of algorithmic bias tend to focus on examples where either the data or the people building the algorithms are biased.
This is illustrated with the example of collaborative filtering, which is known to suffer from popularity and homogenizing biases.
Popularity and homogenizing biases have the effect of further marginalizing the already marginal.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Discussions of algorithmic bias tend to focus on examples where either the
data or the people building the algorithms are biased. This gives the
impression that clean data and good intentions could eliminate bias. The
neutrality of the algorithms themselves is defended by prominent Artificial
Intelligence researchers. However, algorithms are not neutral. In addition to
biased data and biased algorithm makers, AI algorithms themselves can be
biased. This is illustrated with the example of collaborative filtering, which
is known to suffer from popularity and homogenizing biases. Iterative
information filtering algorithms in general create a selection bias in the
course of learning from user responses to documents that the algorithm
recommended. These are not merely biases in the statistical sense; these
statistical biases can cause discriminatory outcomes. Data points on the
margins of distributions of human data tend to correspond to marginalized
people. Popularity and homogenizing biases have the effect of further
marginalizing the already marginal. This source of bias warrants serious
attention given the ubiquity of algorithmic decision-making.
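As a rough illustration of the popularity bias the abstract describes (not code from the paper; all names, sizes, and parameters below are invented for the example), the following Python sketch simulates a toy item-based collaborative filter on long-tailed interaction data and measures how strongly recommendations concentrate on already-popular items.

```python
# Toy illustration (not from the paper) of popularity bias in item-based
# collaborative filtering: items with many interactions dominate the
# similarity scores and therefore the recommendation lists.
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items = 200, 50

# Long-tailed item popularity: a few "head" items are interacted with often,
# most "tail" items rarely (a common property of human preference data).
popularity = np.sort(rng.zipf(1.5, n_items))[::-1].astype(float)
popularity /= popularity.sum()

# Binary implicit-feedback matrix: each user interacts with ~10 items,
# sampled proportionally to item popularity.
interactions = np.zeros((n_users, n_items))
for u in range(n_users):
    chosen = rng.choice(n_items, size=10, replace=False, p=popularity)
    interactions[u, chosen] = 1.0

# Item-item cosine similarity from co-occurrence counts.
co = interactions.T @ interactions
norms = np.sqrt(np.diag(co))
sim = co / (np.outer(norms, norms) + 1e-9)
np.fill_diagonal(sim, 0.0)

# Score unseen items for each user by summing similarity to seen items,
# then count how often each item lands in a top-5 recommendation list.
rec_counts = np.zeros(n_items)
for u in range(n_users):
    scores = sim @ interactions[u]
    scores[interactions[u] > 0] = -np.inf   # do not re-recommend seen items
    rec_counts[np.argsort(scores)[-5:]] += 1

head = np.argsort(popularity)[-10:]          # the 10 most popular items
print("share of recommendation slots going to the 10 most popular items:",
      rec_counts[head].sum() / rec_counts.sum())
```

Running a sketch like this typically shows the head items taking a disproportionate share of the recommendation slots, even though every user also interacted with tail items. If the simulation were iterated, retraining on whichever recommended items users then engage with, the concentration would compound, which is the selection bias that the abstract attributes to iterative information filtering.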
Related papers
- Outlier Detection Bias Busted: Understanding Sources of Algorithmic Bias through Data-centric Factors [28.869581543676947]
Unsupervised outlier detection (OD) has numerous applications in finance, security, etc.
This work aims to shed light on the possible sources of unfairness in OD by auditing detection models under different data-centric factors.
We find that the OD algorithms under study all exhibit fairness pitfalls, although they differ in which types of data bias they are more susceptible to.
arXiv Detail & Related papers (2024-08-24T20:35:32Z)
- Mitigating Algorithmic Bias on Facial Expression Recognition [0.0]
Biased datasets are ubiquitous and present a challenge for machine learning.
The problem of biased datasets is especially sensitive when dealing with minority groups.
This work explores one way to mitigate bias using a debiasing variational autoencoder, with experiments on facial expression recognition.
arXiv Detail & Related papers (2023-12-23T17:41:30Z)
- Whole Page Unbiased Learning to Rank [59.52040055543542]
Unbiased Learning to Rank (ULTR) algorithms are proposed to learn an unbiased ranking model from biased click data.
We propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, to automatically find the user behavior model.
Experimental results on a real-world dataset verify the effectiveness of BAL.
arXiv Detail & Related papers (2022-10-19T16:53:08Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Choosing an algorithmic fairness metric for an online marketplace: Detecting and quantifying algorithmic bias on LinkedIn [0.21756081703275995]
We derive an algorithmic fairness metric from the fairness notion of equal opportunity for equally qualified candidates.
We use the proposed method to measure and quantify algorithmic bias with respect to gender in two algorithms used by LinkedIn.
arXiv Detail & Related papers (2022-02-15T10:33:30Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Information-Theoretic Bias Reduction via Causal View of Spurious Correlation [71.9123886505321]
We propose an information-theoretic bias measurement technique through a causal interpretation of spurious correlation.
We present a novel debiasing framework against algorithmic bias, which incorporates a bias regularization loss.
The proposed bias measurement and debiasing approaches are validated in diverse realistic scenarios.
arXiv Detail & Related papers (2022-01-10T01:19:31Z)
- Towards Measuring Bias in Image Classification [61.802949761385]
Convolutional Neural Networks (CNNs) have become state-of-the-art for the main computer vision tasks.
However, due to their complex structure, their decisions are hard to understand, which limits their use in some industrial contexts.
We present a systematic approach to uncover data bias by means of attribution maps.
arXiv Detail & Related papers (2021-07-01T10:50:39Z)
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating synthetic "transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
- Underestimation Bias and Underfitting in Machine Learning [2.639737913330821]
What is termed algorithmic bias in machine learning is often due to historic bias in the training data.
Sometimes the bias may be introduced (or at least exacerbated) by the algorithm itself.
In this paper we report on initial research to understand the factors that contribute to bias in classification algorithms.
arXiv Detail & Related papers (2020-05-18T20:01:56Z)