Fair Enough? A map of the current limitations of the requirements to
have "fair" algorithms
- URL: http://arxiv.org/abs/2311.12435v2
- Date: Sun, 17 Dec 2023 08:45:26 GMT
- Title: Fair Enough? A map of the current limitations of the requirements to
have "fair" algorithms
- Authors: Alessandro Castelnovo, Nicole Inverardi, Gabriele Nanino, Ilaria
Giuseppina Penco, Daniele Regoli
- Abstract summary: Automated Decision-Making systems can perpetuate or even amplify bias and unjust disparities.
This awareness has prompted more and more layers of society, including policy makers, to call for "fair" algorithms.
There is a hiatus between what society is demanding from Automated Decision-Making systems and what this demand actually means in real-world scenarios.
- Score: 46.20942922922006
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In recent years, the rise in the usage and efficiency of Artificial
Intelligence and, more generally, of Automated Decision-Making systems has
brought with it an increasing and welcome awareness of the risks associated
with such systems. One such risk is that of perpetuating or even amplifying
bias and unjust disparities present in the data from which many of these
systems learn to adjust and optimise their decisions. This awareness has, on
the one hand, encouraged several scientific communities to come up with more
and more appropriate ways and methods to assess, quantify, and possibly
mitigate such biases and disparities. On the other hand, it has prompted more
and more layers of society, including policy makers, to call for "fair"
algorithms. We believe that while a lot of excellent and multidisciplinary
research is currently being conducted, what is still fundamentally missing is
the awareness that having "fair" algorithms is per se a nearly meaningless
requirement, one that needs to be complemented with many additional societal
choices to become actionable. Namely, there is a hiatus between what society
is demanding from Automated Decision-Making systems and what this demand
actually means in real-world scenarios. In this work, we outline the key
features of this hiatus, and pinpoint a list of fundamental ambiguities and
attention points that we as a society must address in order to give concrete
meaning to the increasing demand for fairness in Automated Decision-Making
systems.
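The abstract's central point, that "be fair" is under-specified until society chooses which fairness criterion applies, can be made concrete with a small illustration (not taken from the paper; the loan-decision data and function names below are hypothetical). The same set of decisions can satisfy demographic parity exactly while violating equal opportunity, because the two criteria formalise "fairness" differently:

```python
# Toy sketch: two common fairness criteria disagree on the same decisions.
# All data below is invented for illustration; 1 = approve, groups are "a"/"b".

def demographic_parity_gap(decisions, groups):
    """Absolute difference in approval rates between groups 'a' and 'b'."""
    def rate(g):
        members = [d for d, grp in zip(decisions, groups) if grp == g]
        return sum(members) / len(members)
    return abs(rate("a") - rate("b"))

def equal_opportunity_gap(decisions, labels, groups):
    """Absolute difference in true-positive rates (approval rate among
    applicants who would in fact repay, label == 1) between groups."""
    def tpr(g):
        pos = [d for d, y, grp in zip(decisions, labels, groups)
               if grp == g and y == 1]
        return sum(pos) / len(pos)
    return abs(tpr("a") - tpr("b"))

groups    = ["a"] * 4 + ["b"] * 4
labels    = [1, 1, 1, 0,  1, 0, 0, 0]   # group "a" repays more often
decisions = [1, 1, 0, 0,  1, 1, 0, 0]   # both groups approved at rate 1/2

print(demographic_parity_gap(decisions, groups))         # 0.0: parity holds
print(equal_opportunity_gap(decisions, labels, groups))  # 1/3: opportunity gap
```

Equal approval rates (parity gap 0) coexist here with unequal treatment of qualified applicants (true-positive rates 2/3 vs 1), so declaring the system "fair" requires the kind of additional societal choice the paper argues is missing.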
Related papers
- The Problem of Algorithmic Collisions: Mitigating Unforeseen Risks in a Connected World [2.8775022881551666]
The increasing deployment of Artificial Intelligence (AI) and other autonomous algorithmic systems presents the world with new systemic risks.
Current governance frameworks are inadequate as they lack visibility into this complex ecosystem of interactions.
This paper outlines the nature of this challenge and proposes some initial policy suggestions centered on increasing transparency and accountability through phased system registration, a licensing framework for deployment, and enhanced monitoring capabilities.
arXiv Detail & Related papers (2025-05-26T16:22:18Z)
- Algorithmic Fairness: A Tolerance Perspective [31.882207568746168]
This survey delves into the existing literature on algorithmic fairness, specifically highlighting its multifaceted social consequences.
We introduce a novel taxonomy based on 'tolerance', a term we define as the degree to which variations in fairness outcomes are acceptable.
Our systematic review covers diverse industries, revealing critical insights into the balance between algorithmic decision making and social equity.
arXiv Detail & Related papers (2024-04-26T08:16:54Z)
- Adaptive reinforcement learning of multi-agent ethically-aligned behaviours: the QSOM and QDSOM algorithms [0.9238700679836853]
We present two algorithms, named QSOM and QDSOM, which are able to adapt to changes in the environment.
We evaluate them on a use-case of multi-agent energy repartition within a small Smart Grid neighborhood.
arXiv Detail & Related papers (2023-07-02T12:22:02Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Systemic Fairness [5.833272638548154]
This paper develops formalisms for firm versus systemic fairness in machine learning algorithms.
It calls for a greater focus in the algorithmic fairness literature on ecosystem-wide fairness in real-world contexts.
arXiv Detail & Related papers (2023-04-14T02:24:55Z)
- Harms from Increasingly Agentic Algorithmic Systems [21.613581713046464]
Research in Fairness, Accountability, Transparency, and Ethics (FATE) has established many sources and forms of algorithmic harm.
Despite ongoing harms, new systems are being developed and deployed which threaten the perpetuation of the same harms.
arXiv Detail & Related papers (2023-02-20T21:42:41Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Multiscale Governance [0.0]
Humandemics will propagate because of the pathways that connect the different systems.
The emerging fragility or robustness of the system will depend on how this complex network of systems is governed.
arXiv Detail & Related papers (2021-04-06T19:23:44Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
- Public Bayesian Persuasion: Being Almost Optimal and Almost Persuasive [57.47546090379434]
We study the public persuasion problem in the general setting with: (i) arbitrary state spaces; (ii) arbitrary action spaces; (iii) arbitrary sender's utility functions.
We provide a quasi-polynomial time bi-criteria approximation algorithm for arbitrary public persuasion problems that, in specific settings, yields a QPTAS.
arXiv Detail & Related papers (2020-02-12T18:59:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.