Human participants in AI research: Ethics and transparency in practice
- URL: http://arxiv.org/abs/2311.01254v2
- Date: Sun, 21 Apr 2024 11:01:29 GMT
- Title: Human participants in AI research: Ethics and transparency in practice
- Authors: Kevin R. McKee
- Abstract summary: Research involving human participants has been critical to advances in AI and machine learning.
The paper aims to bridge the gap between AI research and related fields that involve human participants.
- Score: 0.9608936085613567
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, research involving human participants has been critical to advances in artificial intelligence (AI) and machine learning (ML), particularly in the areas of conversational, human-compatible, and cooperative AI. For example, around 12% and 6% of publications at recent AAAI and NeurIPS conferences indicate the collection of original human data, respectively. Yet AI and ML researchers lack guidelines for ethical, transparent research practices with human participants. Fewer than one out of every four of these AAAI and NeurIPS papers provide details of ethical review, the collection of informed consent, or participant compensation. This paper aims to bridge this gap by exploring normative similarities and differences between AI research and related fields that involve human participants. Though psychology, human-computer interaction, and other adjacent fields offer historic lessons and helpful insights, AI research raises several specific concerns, namely participatory design, crowdsourced dataset development, and an expansive role of corporations, that necessitate a contextual ethics framework. To address these concerns, this paper outlines a set of guidelines for ethical and transparent practice with human participants in AI and ML research. These guidelines can be found in Section 4 on pp. 4–7.
Related papers
- AI Ethics: A Bibliometric Analysis, Critical Issues, and Key Gaps [3.8214695776749013]
This study conducts a comprehensive bibliometric analysis of the AI ethics literature over the past two decades.
They present seven key AI ethics issues, encompassing the Collingridge dilemma, the AI status debate, challenges associated with AI transparency and explainability, privacy protection complications, considerations of justice and fairness, concerns about algocracy and human enfeeblement, and the issue of superintelligence.
arXiv Detail & Related papers (2024-03-12T21:43:21Z) - The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice [64.29355073494125]
This article aims to ground what we dub the "participatory turn" in AI design by synthesizing existing theoretical literature on participation.
We articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners.
arXiv Detail & Related papers (2023-10-02T05:30:42Z) - Human-AI Coevolution [48.74579595505374]
Human-AI coevolution is a process in which humans and AI algorithms continuously influence each other.
This paper introduces human-AI coevolution as the cornerstone of a new field of study at the intersection of AI and complexity science.
arXiv Detail & Related papers (2023-06-23T18:10:54Z) - Artificial intelligence adoption in the physical sciences, natural sciences, life sciences, social sciences and the arts and humanities: A bibliometric analysis of research publications from 1960-2021 [73.06361680847708]
In 1960 14% of 333 research fields were related to AI (many in computer science), but this increased to over half of all research fields by 1972, over 80% by 1986 and over 98% in current times.
We conclude that the context of the current surge appears different, and that interdisciplinary AI application is likely to be sustained.
arXiv Detail & Related papers (2023-06-15T14:08:07Z) - The ethical ambiguity of AI data enrichment: Measuring gaps in research ethics norms and practices [2.28438857884398]
This study explores how, and to what extent, comparable research ethics requirements and norms have developed for AI research and data enrichment.
Leading AI venues have begun to establish protocols for human data collection, but these are inconsistently followed by authors.
arXiv Detail & Related papers (2023-06-01T16:12:55Z) - Science in the Era of ChatGPT, Large Language Models and Generative AI: Challenges for Research Ethics and How to Respond [3.3504365823045044]
This paper reviews the challenges and the ethical and integrity risks to the conduct of science posed by the advent of generative AI.
The role of AI language models as a research instrument and subject is scrutinized along with ethical implications for scientists, participants and reviewers.
arXiv Detail & Related papers (2023-05-24T16:23:46Z) - Human-Centered Responsible Artificial Intelligence: Current & Future Trends [76.94037394832931]
In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence.
All of this work is aimed at developing AI that benefits humanity while being grounded in human rights and ethics, and reducing the potential harms of AI.
In this special interest group, we aim to bring together researchers from academia and industry interested in these topics to map current and future research trends.
arXiv Detail & Related papers (2023-02-16T08:59:42Z) - The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z) - Ethics in AI through the Practitioner's View: A Grounded Theory Literature Review [12.941478155592502]
In recent years, numerous incidents have raised the profile of ethical issues in AI development and led to public concerns about the proliferation of AI technology in our everyday lives.
We conducted a grounded theory literature review (GTLR) of 38 primary empirical studies that included AI practitioners' views on ethics in AI.
We present a taxonomy of ethics in AI from practitioners' viewpoints to assist AI practitioners in identifying and understanding the different aspects of AI ethics.
arXiv Detail & Related papers (2022-06-20T00:28:51Z) - Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir" [76.44130385507894]
This paper aims to ground what we dub a 'participatory turn' in AI design by synthesizing existing literature on participation and through empirical analysis of its current practices.
Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design.
arXiv Detail & Related papers (2021-11-01T17:57:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers or summaries (including all information) and is not responsible for any consequences.