Fairness in Socio-technical Systems: a Case Study of Wikipedia
- URL: http://arxiv.org/abs/2302.07787v1
- Date: Wed, 15 Feb 2023 17:16:53 GMT
- Title: Fairness in Socio-technical Systems: a Case Study of Wikipedia
- Authors: Mir Saeed Damadi and Alan Davoust
- Abstract summary: We systematically review 75 papers describing different types of bias in Wikipedia, which we classify and relate to established notions of harm from algorithmic fairness research.
We identify the normative expectations of fairness associated with the different problems and discuss the applicability of existing criteria proposed for machine learning-driven decision systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Problems broadly known as algorithmic bias frequently occur in the context of
complex socio-technical systems (STS), where observed biases may not be
directly attributable to a single automated decision algorithm. As a first
investigation of fairness in STS, we focus on the case of Wikipedia. We
systematically review 75 papers describing different types of bias in
Wikipedia, which we classify and relate to established notions of harm from
algorithmic fairness research. By analysing causal relationships between the
observed phenomena, we demonstrate the complexity of the socio-technical
processes causing harm. Finally, we identify the normative expectations of
fairness associated with the different problems and discuss the applicability
of existing criteria proposed for machine learning-driven decision systems.
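To make the last point concrete, below is a minimal sketch of two standard group-fairness criteria proposed for machine learning-driven decision systems (demographic parity and equalized odds), which are among the "existing criteria" whose applicability the paper discusses. The function names and toy data are illustrative assumptions, not code from the paper; the paper's question is whether prediction-based criteria like these transfer to a socio-technical system such as Wikipedia, where observed biases are not produced by a single decision algorithm.
```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Gap in positive-decision rates across groups.
    Demographic parity asks this gap to be (near) zero."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true-positive or false-positive rate across groups.
    Equalized odds asks both gaps to be (near) zero."""
    tpr, fpr = [], []
    for g in np.unique(group):
        mask = group == g
        tpr.append(y_pred[mask & (y_true == 1)].mean())
        fpr.append(y_pred[mask & (y_true == 0)].mean())
    return max(max(tpr) - min(tpr), max(fpr) - min(fpr))

# Toy usage: binary decisions for two groups (hypothetical data).
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(demographic_parity_gap(y_pred, group))      # 0.0
print(equalized_odds_gap(y_true, y_pred, group))  # ~0.33
```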
Related papers
- Whither Bias Goes, I Will Go: An Integrative, Systematic Review of Algorithmic Bias Mitigation [1.0470286407954037]
Concerns have been raised that machine learning (ML) models may be biased and perpetuate or exacerbate inequality.
We present a four-stage model of developing ML assessments and applying bias mitigation methods.
arXiv Detail & Related papers (2024-10-21T02:32:14Z)
- Interactive System-wise Anomaly Detection [66.3766756452743]
Anomaly detection plays a fundamental role in various applications.
It is challenging for existing methods to handle the scenarios where the instances are systems whose characteristics are not readily observed as data.
We develop an end-to-end approach which includes an encoder-decoder module that learns system embeddings.
arXiv Detail & Related papers (2023-04-21T02:20:24Z)
- Fair Enough: Standardizing Evaluation and Model Selection for Fairness Research in NLP [64.45845091719002]
Modern NLP systems exhibit a range of biases, which a growing literature on model debiasing attempts to correct.
This paper seeks to clarify the current situation and plot a course for meaningful progress in fair learning.
arXiv Detail & Related papers (2023-02-11T14:54:00Z)
- Survey on Fairness Notions and Related Tensions [4.257210316104905]
Automated decision systems are increasingly used to take consequential decisions in problems such as job hiring and loan granting.
However, seemingly objective machine learning (ML) algorithms are prone to bias, which can result in unfair decisions.
This paper surveys the commonly used fairness notions and discusses the tensions among them with privacy and accuracy.
arXiv Detail & Related papers (2022-09-16T13:36:05Z)
- Investigating Bias with a Synthetic Data Generator: Empirical Evidence and Philosophical Interpretation [66.64736150040093]
Machine learning applications are becoming increasingly pervasive in our society.
The risk is that they will systematically spread the bias embedded in the data.
We propose to analyze biases by introducing a framework for generating synthetic data with specific types of bias and their combinations.
arXiv Detail & Related papers (2022-09-13T11:18:50Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Anatomizing Bias in Facial Analysis [86.79402670904338]
Existing facial analysis systems have been shown to yield biased results against certain demographic subgroups.
It has become imperative to ensure that these systems do not discriminate based on gender, identity, or skin tone of individuals.
This has led to research in the identification and mitigation of bias in AI systems.
arXiv Detail & Related papers (2021-12-13T09:51:13Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- Demographic Bias in Biometrics: A Survey on an Emerging Challenge [0.0]
Biometric systems rely on the uniqueness of certain biological or forensic characteristics of human beings.
There has been a wave of public and academic concerns regarding the existence of systemic bias in automated decision systems.
arXiv Detail & Related papers (2020-03-05T09:07:59Z)
- Algorithmic Fairness [11.650381752104298]
It is crucial to develop AI algorithms that are not only accurate but also objective and fair.
Recent studies have shown that algorithmic decision-making may be inherently prone to unfairness.
arXiv Detail & Related papers (2020-01-21T19:01:38Z)