Power and Play: Investigating "License to Critique" in Teams' AI Ethics Discussions
- URL: http://arxiv.org/abs/2403.19049v2
- Date: Mon, 8 Apr 2024 14:43:45 GMT
- Title: Power and Play: Investigating "License to Critique" in Teams' AI Ethics Discussions
- Authors: David Gray Widder, Laura Dabbish, James Herbsleb, Nikolas Martelaro
- Abstract summary: We investigate how standards enact discursive closure and how power relations affect whether and how people raise critique.
We report on how particular affordances of a game may influence discussion, and find that the hypothetical context is unlikely to be a viable mechanism for real world change.
We discuss our finding that games are unlikely to enable direct changes to products or practice, but may be more likely to allow members to find critically-aligned allies for future collective action.
- Score: 8.149400593848387
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Past work has sought to design AI ethics interventions--such as checklists or toolkits--to help practitioners design more ethical AI systems. However, other work demonstrates how these interventions may instead serve to limit critique to that addressed within the intervention, while rendering broader concerns illegitimate. In this paper, drawing on work examining how standards enact discursive closure and how power relations affect whether and how people raise critique, we recruit three corporate teams, and one activist team, each with prior context working with one another, to play a game designed to trigger broad discussion around AI ethics. We use this as a point of contrast to trigger reflection on their teams' past discussions, examining factors which may affect their "license to critique" in AI ethics discussions. We then report on how particular affordances of this game may influence discussion, and find that the hypothetical context created in the game is unlikely to be a viable mechanism for real world change. We discuss how power dynamics within a group and notions of "scope" affect whether people may be willing to raise critique in AI ethics discussions, and discuss our finding that games are unlikely to enable direct changes to products or practice, but may be more likely to allow members to find critically-aligned allies for future collective action.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - Epistemic Power in AI Ethics Labor: Legitimizing Located Complaints [0.7252027234425334]
This paper is based on 75 interviews with technologists including researchers, developers, open source contributors, and activists.
I show how some AI ethics practices have reached toward authority from automation and quantification, while those based on richly embodied and situated lived experience have not.
arXiv Detail & Related papers (2024-02-13T02:07:03Z) - Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z) - Towards a Feminist Metaethics of AI [0.0]
I argue that the insufficiencies of current AI ethics could be mitigated by developing a research agenda for a feminist metaethics of AI.
Applying this perspective to the context of AI, I suggest that a feminist metaethics of AI would examine: (i) the continuity between theory and action in AI ethics; (ii) the real-life effects of AI ethics; (iii) the role and profile of those involved in AI ethics; and (iv) the effects of AI on power relations through methods that pay attention to context, emotions and narrative.
arXiv Detail & Related papers (2023-11-10T13:26:45Z) - The Ethics of AI in Games [4.691076280925923]
Video games are one of the richest and most popular forms of human-computer interaction.
As artificial intelligence (AI) tools are gradually adopted by the game industry, a series of ethical concerns arises.
This paper calls for an open dialogue and action for the games of today and the virtual spaces of the future.
arXiv Detail & Related papers (2023-05-12T11:41:05Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI regulation should play to make the AI Act a success with respect to AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Prismal view of ethics [0.0]
We aim to connect ethics to games, discuss the performance of ethics, and introduce curiosity into the interplay between competition and coordination in well-performing ethics.
This analysis is the first step toward finding modeling aspects that might be used in AI ethics for integrating modern AI systems into human society.
arXiv Detail & Related papers (2022-05-26T13:52:34Z) - Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z) - A Word on Machine Ethics: A Response to Jiang et al. (2021) [36.955224006838584]
We focus on a single case study of the recently proposed Delphi model and offer a critique of the project's proposed method of automating morality judgments.
We conclude with a discussion of how machine ethics could usefully proceed, by focusing on current and near-future uses of technology.
arXiv Detail & Related papers (2021-11-07T19:31:51Z) - The Threat of Offensive AI to Organizations [52.011307264694665]
This survey explores the threat of offensive AI to organizations.
First, we discuss how AI changes the adversary's methods, strategies, goals, and overall attack model.
Then, through a literature review, we identify 33 offensive AI capabilities which adversaries can use to enhance their attacks.
arXiv Detail & Related papers (2021-06-30T01:03:28Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can help address this challenge by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.