Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints
- URL: http://arxiv.org/abs/2402.08171v4
- Date: Wed, 17 Apr 2024 18:34:09 GMT
- Title: Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints
- Authors: David Gray Widder
- Abstract summary: This paper is based on 75 interviews with technologists including researchers, developers, open source contributors, and activists.
I show how some AI ethics practices have reached toward authority from automation and quantification, while those based on richly embodied and situated lived experience have not.
- Score: 0.7252027234425334
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: What counts as legitimate AI ethics labor, and consequently, what are the epistemic terms on which AI ethics claims are rendered legitimate? Based on 75 interviews with technologists including researchers, developers, open source contributors, and activists, this paper explores the various epistemic bases from which AI ethics is discussed and practiced. In the context of outside attacks on AI ethics as an impediment to "progress," I show how some AI ethics practices have reached toward authority from automation and quantification, and achieved some legitimacy as a result, while those based on richly embodied and situated lived experience have not. This paper draws together the work of feminist Anthropology and Science and Technology Studies scholars Diana Forsythe and Lucy Suchman with the works of postcolonial feminist theorist Sara Ahmed and Black feminist theorist Kristie Dotson to examine the implications of dominant AI ethics practices. By entrenching the epistemic power of quantification, dominant AI ethics practices -- employing Model Cards and similar interventions -- risk legitimizing AI ethics as a project in equal and opposite measure to which they marginalize embodied lived experience as a legitimate part of the same project. In response, I propose humble technical practices: quantified or technical practices which specifically seek to make their epistemic limits clear in order to flatten hierarchies of epistemic power.
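To make the abstract's closing proposal of "humble technical practices" more concrete, the sketch below gives one possible illustration. It is not from the paper: the `HumbleModelCard` structure, its field names, and the example values are all hypothetical. The idea it illustrates is a model-card-like artifact that pairs quantified metrics with an explicit statement of their epistemic limits and leaves room for situated, lived-experience reports alongside the numbers.

```python
# Hypothetical sketch (not from the paper): a model-card-style record that
# keeps quantified results and their epistemic limits side by side.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class HumbleModelCard:
    model_name: str
    # Quantified evaluation results, e.g. {"accuracy": 0.942}.
    metrics: Dict[str, float] = field(default_factory=dict)
    # What the numbers above cannot tell us: data provenance, populations
    # and harms not covered, known blind spots of the evaluation.
    epistemic_limits: List[str] = field(default_factory=list)
    # Situated, lived-experience reports, kept alongside (not subordinate
    # to) the quantified metrics.
    situated_reports: List[str] = field(default_factory=list)

    def render(self) -> str:
        """Render the card as plain Markdown text."""
        lines = [f"# Model card: {self.model_name}", "", "## Metrics"]
        lines += [f"- {k}: {v:.3f}" for k, v in self.metrics.items()]
        lines += ["", "## Epistemic limits of these metrics"]
        lines += [f"- {limit}" for limit in self.epistemic_limits]
        lines += ["", "## Situated reports"]
        lines += [f"- {report}" for report in self.situated_reports]
        return "\n".join(lines)


if __name__ == "__main__":
    card = HumbleModelCard(
        model_name="example-classifier",
        metrics={"accuracy": 0.942},
        epistemic_limits=[
            "Accuracy was measured on crowdsourced English-language data only.",
            "No evaluation of harms reported by affected communities.",
        ],
        situated_reports=[
            "Moderators report the model misreads reclaimed in-group language.",
        ],
    )
    print(card.render())
```

The design choice, under these assumptions, is simply that the limits section is a first-class field rather than optional prose, so a rendered card cannot present its metrics without also stating what they do not cover.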
Related papers
- Quelle éthique pour quelle IA ? (What Ethics for Which AI?) [0.0]
This study proposes an analysis of the different types of ethical approaches involved in the ethics of AI.
The author introduces the contemporary need for and meaning of ethics, distinguishes it from other registers of normativity, and underlines that it resists formalization.
The study concludes with a reflection on the reasons why a human ethics of AI based on a pragmatic practice of contextual ethics remains necessary and irreducible to any formalization or automated treatment of the ethical questions that arise for humans.
arXiv Detail & Related papers (2024-05-21T08:13:02Z) - Responsible AI Research Needs Impact Statements Too [51.37368267352821]
Like other research, development, and policy work, work in responsible artificial intelligence (RAI), ethical AI, or ethics in AI can have unintended, adverse consequences.
arXiv Detail & Related papers (2023-11-20T14:02:28Z) - Towards a Feminist Metaethics of AI [0.0]
I argue that insufficiencies in current approaches to AI ethics could be mitigated by developing a research agenda for a feminist metaethics of AI.
Applying this perspective to the context of AI, I suggest that a feminist metaethics of AI would examine: (i) the continuity between theory and action in AI ethics; (ii) the real-life effects of AI ethics; (iii) the role and profile of those involved in AI ethics; and (iv) the effects of AI on power relations through methods that pay attention to context, emotions and narrative.
arXiv Detail & Related papers (2023-11-10T13:26:45Z) - Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z) - Ethics in AI through the Practitioner's View: A Grounded Theory Literature Review [12.941478155592502]
In recent years, numerous incidents have raised the profile of ethical issues in AI development and led to public concerns about the proliferation of AI technology in our everyday lives.
We conducted a grounded theory literature review (GTLR) of 38 primary empirical studies that included AI practitioners' views on ethics in AI.
We present a taxonomy of ethics in AI from practitioners' viewpoints to assist AI practitioners in identifying and understanding the different aspects of AI ethics.
arXiv Detail & Related papers (2022-06-20T00:28:51Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play in making the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - Ethics as a service: a pragmatic operationalisation of AI Ethics [1.1083289076967895]
A gap exists between the theory of AI ethics principles and the practical design of AI systems. We seek to address this gap by exploring why principles and technical translational tools are still needed, even if they are limited.
arXiv Detail & Related papers (2021-02-11T21:29:25Z) - AI virtues -- The missing link in putting AI ethics into practice [0.0]
The paper defines four basic AI virtues, namely justice, honesty, responsibility and care.
It defines two second-order AI virtues, prudence and fortitude, that bolster achieving the basic virtues.
arXiv Detail & Related papers (2020-11-25T14:14:47Z) - On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)