Exposing Algorithmic Discrimination and Its Consequences in Modern
Society: Insights from a Scoping Study
- URL: http://arxiv.org/abs/2312.04832v2
- Date: Wed, 17 Jan 2024 02:11:37 GMT
- Title: Exposing Algorithmic Discrimination and Its Consequences in Modern
Society: Insights from a Scoping Study
- Authors: Ramandeep Singh Dehal, Mehak Sharma, Ronnie de Souza Santos
- Abstract summary: This study delves into various studies published over the years reporting algorithmic discrimination.
We aim to support software engineering researchers and practitioners in addressing this issue.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Algorithmic discrimination is a condition that arises when data-driven
software unfairly treats users based on attributes like ethnicity, race,
gender, sexual orientation, religion, age, disability, or other personal
characteristics. Nowadays, as machine learning gains popularity, cases of
algorithmic discrimination are increasingly being reported in several contexts.
This study delves into various studies published over the years reporting
algorithmic discrimination. We aim to support software engineering researchers
and practitioners in addressing this issue by discussing key characteristics of
the problem.
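
To make the notion concrete, the following is a minimal sketch, not taken from the paper, of one common way such discrimination is surfaced in practice: a group-disparity audit that compares a system's positive-decision rates across groups defined by a protected attribute. The data, group labels, and the four-fifths threshold below are illustrative assumptions.

```python
# Minimal sketch of a group-disparity audit, assuming we can observe the
# system's decisions and each user's protected attribute. The data and the
# 0.8 ("four-fifths rule") threshold are illustrative, not from the paper.
from collections import defaultdict

def positive_rates(decisions):
    """decisions: iterable of (group, accepted) pairs -> per-group acceptance rate."""
    totals, accepted = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        accepted[group] += int(ok)
    return {g: accepted[g] / totals[g] for g in totals}

def disparate_impact(decisions, threshold=0.8):
    """Flag groups whose acceptance rate falls below `threshold` times the best rate."""
    rates = positive_rates(decisions)
    best = max(rates.values())
    return {g: (rate, rate / best < threshold) for g, rate in rates.items()}

# Hypothetical audit log: (protected-group label, loan approved?)
log = [("A", True)] * 80 + [("A", False)] * 20 + [("B", True)] * 55 + [("B", False)] * 45
print(disparate_impact(log))
# {'A': (0.8, False), 'B': (0.55, True)}  -> group B is flagged
```

In this toy log, group B's approval rate falls below four fifths of group A's, so the audit flags it; real audits additionally need care about confounders and sample sizes.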
Related papers
- Testing software for non-discrimination: an updated and extended audit in the Italian car insurance domain [40.846175330181005]
Fairness in pricing algorithms grants equitable access to basic services without discriminating on the basis of protected attributes.
We replicate a previous empirical study that used black box testing to audit pricing algorithms used by Italian car insurance companies.
arXiv Detail & Related papers (2025-02-10T13:16:01Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a degree of consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Multi-dimensional discrimination in Law and Machine Learning -- A comparative overview [14.650860450187793]
The domain of fairness-aware machine learning focuses on methods and algorithms for understanding, mitigating, and accounting for bias in AI/ML models.
In reality, human identities are multi-dimensional, and discrimination can occur based on more than one protected characteristic.
Recent approaches in this direction mainly follow the so-called intersectional fairness definition from the legal domain.
arXiv Detail & Related papers (2023-02-12T20:41:58Z)
- Developing a Philosophical Framework for Fair Machine Learning: Lessons From The Case of Algorithmic Collusion [0.0]
As machine learning algorithms are applied in new contexts, the harms and injustices that result are qualitatively different.
The existing research paradigm in machine learning, which develops metrics and definitions of fairness, cannot account for these qualitatively different types of injustice.
I propose an ethical framework for researchers and practitioners in machine learning seeking to develop and apply fairness metrics.
arXiv Detail & Related papers (2022-07-05T16:21:56Z)
- Discrimination in machine learning algorithms [0.0]
Machine learning algorithms are routinely used for business decisions that may directly affect individuals, for example, because a credit scoring algorithm refuses them a loan.
It is therefore relevant, from an ethical (and legal) point of view, to ensure that these algorithms do not discriminate based on sensitive attributes (such as sex or race), something that may happen without the operator or the management being aware of it.
arXiv Detail & Related papers (2022-06-30T21:35:42Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and the agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Beyond Algorithmic Bias: A Socio-Computational Interrogation of the Google Search by Image Algorithm [0.799536002595393]
We audit the algorithm by presenting it with more than 40,000 faces of all ages and more than four races.
We find that the algorithm reproduces white male patriarchal structures, often simplifying, stereotyping, and discriminating against females and non-white individuals.
arXiv Detail & Related papers (2021-05-26T21:40:43Z)
- A Survey of Embedding Space Alignment Methods for Language and Knowledge Graphs [77.34726150561087]
We survey the current research landscape on word, sentence and knowledge graph embedding algorithms.
We provide a classification of the relevant alignment techniques and discuss benchmark datasets used in this field of research.
arXiv Detail & Related papers (2020-10-26T16:08:13Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
- FairCVtest Demo: Understanding Bias in Multimodal Learning with a Testbed in Fair Automatic Recruitment [79.23531577235887]
This demo shows the capacity of the Artificial Intelligence (AI) behind a recruitment tool to extract sensitive information from unstructured data.
Additionally, the demo includes a new algorithm for discrimination-aware learning which eliminates sensitive information in our multimodal AI framework.
arXiv Detail & Related papers (2020-09-12T17:45:09Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
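
For the "Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach" entry above, the sketch below illustrates the general idea only; it is not the authors' implementation, the data, model, and hyperparameters are hypothetical, and the differential-privacy component is omitted. A classifier is trained while a dual variable prices violations of a demographic-parity constraint, and the dual variable is raised by projected gradient ascent whenever the constraint is violated.

```python
# Minimal sketch of a Lagrangian-dual fairness constraint (hypothetical data
# and hyperparameters; the differential-privacy part of the paper is omitted).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 3 features plus a binary protected attribute `a` that leaks into the label.
n = 2000
a = rng.integers(0, 2, size=n)
X = np.c_[rng.normal(size=(n, 3)), a]
y = (X[:, 0] + 0.8 * a + rng.normal(scale=0.5, size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(X.shape[1])          # primal variables: logistic-regression weights
lam, eps = 0.0, 0.02              # dual variable and allowed demographic-parity gap
lr_w, lr_lam = 0.1, 0.5

for _ in range(2000):
    p = sigmoid(X @ w)
    gap = p[a == 1].mean() - p[a == 0].mean()           # signed parity gap
    grad_loss = X.T @ (p - y) / n                       # gradient of the logistic loss

    def group_grad(mask):
        pm = p[mask]
        return (X[mask] * (pm * (1 - pm))[:, None]).mean(axis=0)

    grad_gap = group_grad(a == 1) - group_grad(a == 0)  # gradient of the gap w.r.t. w
    # Primal descent on: loss + lam * (|gap| - eps)
    w -= lr_w * (grad_loss + lam * np.sign(gap) * grad_gap)
    # Projected dual ascent: raise lam while the fairness constraint is violated
    lam = max(0.0, lam + lr_lam * (abs(gap) - eps))

print(f"final parity gap: {gap:+.3f}  lambda: {lam:.2f}")
```

Here the dual variable grows only while the parity gap exceeds the tolerance, so the trade-off between accuracy and the fairness constraint is negotiated by dual ascent rather than by a hand-tuned penalty weight.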
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.