Fairness as Equality of Opportunity: Normative Guidance from Political
Philosophy
- URL: http://arxiv.org/abs/2106.08259v1
- Date: Tue, 15 Jun 2021 16:07:58 GMT
- Authors: Falaah Arif Khan, Eleni Manis, Julia Stoyanovich
- Abstract summary: We introduce a taxonomy of fairness ideals using doctrines of Equality of Opportunity (EOP) from political philosophy.
We clarify their conceptions in philosophy and the proposed codification in fair machine learning.
We use our fairness-as-EOP framework to re-interpret the impossibility results from a philosophical perspective.
- Score: 8.228275343025462
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent interest in codifying fairness in Automated Decision Systems (ADS) has
resulted in a wide range of formulations of what it means for an algorithmic
system to be fair. Most of these propositions are inspired by, but inadequately
grounded in, political philosophy scholarship. This paper aims to correct that
deficit. We introduce a taxonomy of fairness ideals using doctrines of Equality
of Opportunity (EOP) from political philosophy, clarifying their conceptions in
philosophy and the proposed codification in fair machine learning. We arrange
these fairness ideals onto an EOP spectrum, which serves as a useful frame to
guide the design of a fair ADS in a given context.
We use our fairness-as-EOP framework to re-interpret the impossibility
results from a philosophical perspective, as the incompatibility between
different value systems, and demonstrate the utility of the framework with
several real-world and hypothetical examples. Through our EOP framework we hope
to answer what it means for an ADS to be fair from a moral and political
philosophy standpoint, and to pave the way for similar scholarship from ethics
and legal experts.
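As a toy sketch (not from the paper, and with made-up data): the impossibility results the abstract mentions include the fact that, when base rates differ between groups, a classifier can satisfy equalized odds (equal true- and false-positive rates) while violating demographic parity (equal selection rates).

```python
# Toy illustration with fabricated data: a perfect classifier satisfies
# equalized odds across two groups, yet fails demographic parity because
# the groups have different base rates.

def group_rates(y_true, y_pred):
    """Return (TPR, FPR, selection rate) for one group."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    pos = sum(y_true)
    neg = len(y_true) - pos
    return tp / pos, fp / neg, sum(y_pred) / len(y_pred)

# Group A has base rate 0.6, group B has base rate 0.3; the classifier
# happens to predict every label perfectly (y_pred == y_true).
y_a = [1] * 6 + [0] * 4
y_b = [1] * 3 + [0] * 7
tpr_a, fpr_a, sel_a = group_rates(y_a, y_a)
tpr_b, fpr_b, sel_b = group_rates(y_b, y_b)

print(tpr_a == tpr_b, fpr_a == fpr_b)  # True True -> equalized odds holds
print(sel_a, sel_b)                    # 0.6 0.3   -> demographic parity fails
```

No classifier can satisfy both criteria here except a trivial one, which is why the choice between them is a choice between value systems rather than a technical trade-off.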
Related papers
- AI Fairness in Practice [0.46671368497079174]
There is a broad spectrum of views across society on what the concept of fairness means and how it should be put to practice.
This workbook explores how a context-based approach to understanding AI Fairness can help project teams better identify, mitigate, and manage the many ways that unfair bias and discrimination can crop up across the AI project workflow.
arXiv Detail & Related papers (2024-02-19T23:02:56Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical importance of addressing bias within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- When fairness is an abstraction: Equity and AI in Swedish compulsory education [0.23967405016776386]
Artificial intelligence experts often question whether AI is fair. They view fairness as a property of AI systems rather than of sociopolitical and economic systems.
This paper emphasizes the need to situate fairness within the social, political, and economic contexts in which an educational system operates and uses AI.
arXiv Detail & Related papers (2023-11-03T10:52:16Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pre-training (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Causal Fairness for Outcome Control [68.12191782657437]
We study a specific decision-making task called outcome control in which an automated system aims to optimize an outcome variable $Y$ while being fair and equitable.
In this paper, we first analyze through causal lenses the notion of benefit, which captures how much a specific individual would benefit from a positive decision.
We then note that the benefit itself may be influenced by the protected attribute, and propose causal tools which can be used to analyze this.
arXiv Detail & Related papers (2023-06-08T09:31:18Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach will be to link the quantification of the disparities present on the observed data with the underlying, and often unobserved, collection of causal mechanisms.
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Towards Substantive Conceptions of Algorithmic Fairness: Normative Guidance from Equal Opportunity Doctrines [6.751310968561177]
We use Equal Opportunity doctrines from political philosophy to make explicit the normative judgements embedded in different conceptions of algorithmic fairness.
We use this taxonomy to provide a moral interpretation of the impossibility results as the incompatibility between different conceptions of a fair contest.
arXiv Detail & Related papers (2022-07-06T18:37:37Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Getting Fairness Right: Towards a Toolbox for Practitioners [2.4364387374267427]
The potential risk of AI systems unintentionally embedding and reproducing bias has attracted the attention of machine learning practitioners and society at large.
This paper proposes a draft toolbox that helps practitioners ensure fair AI practices.
arXiv Detail & Related papers (2020-03-15T20:53:50Z)
- Algorithmic Fairness from a Non-ideal Perspective [26.13086713244309]
We argue that the increasingly apparent shortcomings of proposed fair machine learning algorithms reflect broader troubles faced by the ideal approach.
We conclude with a critical discussion of the harms of misguided solutions, a reinterpretation of impossibility results, and directions for future research.
arXiv Detail & Related papers (2020-01-08T18:44:41Z)
- On Consequentialism and Fairness [64.35872952140677]
We provide a consequentialist critique of common definitions of fairness within machine learning.
We conclude with a broader discussion of the issues of learning and randomization.
arXiv Detail & Related papers (2020-01-02T05:39:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.