Queering the ethics of AI
- URL: http://arxiv.org/abs/2308.13591v1
- Date: Fri, 25 Aug 2023 17:26:05 GMT
- Title: Queering the ethics of AI
- Authors: Eduard Fosch-Villaronga and Gianclaudio Malgieri
- Abstract summary: The chapter emphasizes the ethical concerns surrounding the potential for AI to perpetuate discrimination.
The chapter argues that a critical examination of the conception of equality that often underpins non-discrimination law is necessary.
- Score: 0.6993026261767287
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This book chapter delves into the pressing need to "queer" the ethics of AI
to challenge and re-evaluate the normative suppositions and values that
underlie AI systems. The chapter emphasizes the ethical concerns surrounding
the potential for AI to perpetuate discrimination, including binarism, and
amplify existing inequalities due to the lack of representative datasets and
the affordances and constraints that depend on technology readiness. The chapter
argues that a critical examination of the neoliberal conception of equality
that often underpins non-discrimination law is necessary, and stresses
the need to create alternative interdisciplinary approaches that consider the
complex and intersecting factors that shape individuals' experiences of
discrimination. By exploring such approaches centering on intersectionality and
vulnerability-informed design, the chapter contends that designers and
developers can create more ethical AI systems that are inclusive, equitable,
and responsive to the needs and experiences of all individuals and communities,
particularly those who are most vulnerable to discrimination and harm.
Related papers
- The Impossibility of Fair LLMs [59.424918263776284]
The need for fair AI is increasingly clear in the era of large language models (LLMs).
We review the technical frameworks that machine learning researchers have used to evaluate fairness.
We develop guidelines for the more realistic goal of achieving fairness in particular use cases.
arXiv Detail & Related papers (2024-05-28T04:36:15Z)
- AI Fairness in Practice [0.46671368497079174]
There is a broad spectrum of views across society on what the concept of fairness means and how it should be put to practice.
This workbook explores how a context-based approach to understanding AI Fairness can help project teams better identify, mitigate, and manage the many ways that unfair bias and discrimination can crop up across the AI project workflow.
arXiv Detail & Related papers (2024-02-19T23:02:56Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Beneficent Intelligence: A Capability Approach to Modeling Benefit, Assistance, and Associated Moral Failures through AI Systems [12.239090962956043]
The prevailing discourse around AI ethics lacks the language and formalism necessary to capture the diverse ethical concerns that emerge when AI systems interact with individuals.
We present a framework formalizing a network of ethical concepts and entitlements necessary for AI systems to confer meaningful benefit or assistance to stakeholders.
arXiv Detail & Related papers (2023-08-01T22:38:14Z)
- Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z)
- Causal Fairness Analysis [68.12191782657437]
We introduce a framework for understanding, modeling, and possibly solving issues of fairness in decision-making settings.
The main insight of our approach is to link the quantification of disparities present in the observed data with the underlying, and often unobserved, collection of causal mechanisms (a minimal sketch of quantifying such observed-data disparities follows this list).
Our effort culminates in the Fairness Map, which is the first systematic attempt to organize and explain the relationship between different criteria found in the literature.
arXiv Detail & Related papers (2022-07-23T01:06:34Z)
- Tackling Algorithmic Disability Discrimination in the Hiring Process: An Ethical, Legal and Technical Analysis [2.294014185517203]
We discuss concerns and opportunities raised by AI-driven hiring in relation to disability discrimination.
We establish some starting points and design a roadmap for ethicists, lawmakers, advocates, and AI practitioners alike.
arXiv Detail & Related papers (2022-06-13T13:32:37Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper presents a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Bias and Discrimination in AI: a cross-disciplinary perspective [5.190307793476366]
We show that finding solutions to bias and discrimination in AI requires robust cross-disciplinary collaborations.
We survey relevant literature about bias and discrimination in AI from an interdisciplinary perspective that embeds technical, legal, social and ethical dimensions.
arXiv Detail & Related papers (2020-08-11T10:02:04Z)
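As a purely illustrative aside on the disparity-quantification step mentioned in the Causal Fairness Analysis entry above, the sketch below computes one simple observed-data disparity: the demographic-parity gap between two groups' favourable-outcome rates. The records, group labels, and outcome coding are hypothetical, and this is a minimal sketch of a generic fairness measure, not an implementation of that paper's causal framework.

```python
from collections import defaultdict

# Hypothetical records of (group, favourable_outcome); all values are
# made up for illustration only.
records = [
    ("A", 1), ("A", 0), ("A", 1), ("A", 1),
    ("B", 0), ("B", 0), ("B", 1), ("B", 0),
]

totals = defaultdict(int)      # number of records per group
favourable = defaultdict(int)  # favourable outcomes per group
for group, outcome in records:
    totals[group] += 1
    favourable[group] += outcome

# Observed rate of favourable outcomes per group.
rates = {g: favourable[g] / totals[g] for g in totals}

# Demographic-parity gap: difference between the two groups' rates.
gap = rates["A"] - rates["B"]
print(f"rates: {rates}, parity gap: {gap:.2f}")
```

A causal analysis in the spirit of that paper would go on to ask which mechanisms, direct, indirect, or spurious, produce such a gap; the sketch above does not attempt that step.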
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy or quality of the information presented and is not responsible for any consequences arising from its use.