ESR: Ethics and Society Review of Artificial Intelligence Research
- URL: http://arxiv.org/abs/2106.11521v2
- Date: Fri, 9 Jul 2021 23:08:22 GMT
- Title: ESR: Ethics and Society Review of Artificial Intelligence Research
- Authors: Michael S. Bernstein, Margaret Levi, David Magnus, Betsy Rajala, Debra Satz, Charla Waeiss
- Abstract summary: We have developed the Ethics and Society Review board (ESR), a feedback panel that works with researchers to mitigate negative ethical and societal aspects of AI research.
This article describes the ESR as we have designed and run it over its first year across 41 proposals.
We analyze aggregate ESR feedback on these proposals, finding that the panel most commonly identifies issues of harms to minority groups, inclusion of diverse stakeholders in the research plan, dual use, and representation in data.
- Score: 11.4503292891152
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial intelligence (AI) research is routinely criticized for its real
and potential impacts on society, and we lack adequate institutional responses
to this criticism and to the responsibility that it reflects. AI research often
falls outside the purview of existing feedback mechanisms such as the
Institutional Review Board (IRB), which are designed to evaluate harms to human
subjects rather than harms to human society. In response, we have developed the
Ethics and Society Review board (ESR), a feedback panel that works with
researchers to mitigate negative ethical and societal aspects of AI research.
The ESR's main insight is to serve as a requirement for funding: researchers
cannot receive grant funding from a major AI funding program at our university
until the researchers complete the ESR process for the proposal. In this
article, we describe the ESR as we have designed and run it over its first year
across 41 proposals. We analyze aggregate ESR feedback on these proposals,
finding that the panel most commonly identifies issues of harms to minority
groups, inclusion of diverse stakeholders in the research plan, dual use, and
representation in data. Surveys and interviews of researchers who interacted
with the ESR found that 58% felt that it had influenced the design of their
research project, 100% are willing to continue submitting future projects to
the ESR, and that they sought additional scaffolding for reasoning through
ethics and society issues.
Related papers
- Surveys Considered Harmful? Reflecting on the Use of Surveys in AI Research, Development, and Governance [21.430179253600308]
We find that public surveys on AI topics are vulnerable to specific Western knowledge, values, and assumptions in their design.
We distill provocations and questions for our community to recognize the limitations of surveys for meeting the goals of engagement.
arXiv Detail & Related papers (2024-07-26T22:10:49Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- A Safe Harbor for AI Evaluation and Red Teaming [124.89885800509505]
Some researchers fear that conducting independent evaluation and red-teaming research, or releasing their findings, will result in account suspensions or legal reprisal.
We propose that major AI developers commit to providing a legal and technical safe harbor.
We believe these commitments are a necessary step towards more inclusive and unimpeded community efforts to tackle the risks of generative AI.
arXiv Detail & Related papers (2024-03-07T20:55:08Z)
- Investigating Responsible AI for Scientific Research: An Empirical Study [4.597781832707524]
The push for Responsible AI (RAI) in scientific research institutions underscores the increasing emphasis on integrating ethical considerations within AI design and development.
This paper aims to assess the awareness and preparedness regarding the ethical risks inherent in AI design and development.
Our results reveal knowledge gaps concerning ethical, responsible, and inclusive AI, along with limited awareness of available AI ethics frameworks.
arXiv Detail & Related papers (2023-12-15T06:40:27Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- The Equitable AI Research Roundtable (EARR): Towards Community-Based Decision Making in Responsible AI Development [4.1986677342209004]
The paper reports on our initial evaluation of The Equitable AI Research Roundtable.
EARR was created through a collaboration among a large tech firm, nonprofits, NGO research institutions, and universities.
We outline three principles in practice of how EARR has operated thus far that are especially relevant to the concerns of the FAccT community.
arXiv Detail & Related papers (2023-03-14T18:57:20Z)
- Human-Centered Responsible Artificial Intelligence: Current & Future Trends [76.94037394832931]
In recent years, the CHI community has seen significant growth in research on Human-Centered Responsible Artificial Intelligence.
All of this work aims to develop AI that benefits humanity, is grounded in human rights and ethics, and reduces the potential harms of AI.
In this special interest group, we aim to bring together researchers from academia and industry interested in these topics to map current and future research trends.
arXiv Detail & Related papers (2023-02-16T08:59:42Z)
- Ethics for social robotics: A critical analysis [8.778914180886835]
The development of social robotics for care practices, together with European prospects for incorporating these AI-based systems into institutional healthcare contexts, calls for urgent ethical reflection.
Despite the growing attention to the ethical implications of social robotics, the current debate on one of its central branches, social assistive robotics (SAR), rests upon an impoverished ethical approach.
This paper presents and examines some tendencies of this prevailing approach, which have been identified as a result of a critical literature review.
arXiv Detail & Related papers (2022-07-25T22:18:00Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Stakeholder Participation in AI: Beyond "Add Diverse Stakeholders and Stir" [76.44130385507894]
This paper aims to ground what we dub a 'participatory turn' in AI design by synthesizing existing literature on participation and by empirically analyzing its current practices.
Based on our literature synthesis and empirical research, this paper presents a conceptual framework for analyzing participatory approaches to AI design.
arXiv Detail & Related papers (2021-11-01T17:57:04Z)
- Nose to Glass: Looking In to Get Beyond [0.0]
An increasing amount of research has been conducted under the banner of enhancing responsible artificial intelligence.
This research aims to address, alleviate, and eventually mitigate the harms brought on by the rollout of algorithmic systems.
However, implementation of such tools remains low.
arXiv Detail & Related papers (2020-11-26T06:51:45Z)