Institutionalising Ethics in AI through Broader Impact Requirements
- URL: http://arxiv.org/abs/2106.11039v1
- Date: Sun, 30 May 2021 12:36:43 GMT
- Title: Institutionalising Ethics in AI through Broader Impact Requirements
- Authors: Carina Prunkl, Carolyn Ashurst, Markus Anderljung, Helena Webb, Jan
Leike, Allan Dafoe
- Abstract summary: We reflect on a novel governance initiative by one of the world's largest AI conferences.
NeurIPS introduced a requirement for submitting authors to include a statement on the broader societal impacts of their research.
We investigate the risks, challenges and potential benefits of such an initiative.
- Score: 8.793651996676095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Turning principles into practice is one of the most pressing challenges of
artificial intelligence (AI) governance. In this article, we reflect on a novel
governance initiative by one of the world's largest AI conferences. In 2020,
the Conference on Neural Information Processing Systems (NeurIPS) introduced a
requirement for submitting authors to include a statement on the broader
societal impacts of their research. Drawing insights from similar governance
initiatives, including institutional review boards (IRBs) and impact
requirements for funding applications, we investigate the risks, challenges and
potential benefits of such an initiative. Among the challenges, we list a lack
of recognised best practice and procedural transparency, researcher opportunity
costs, institutional and social pressures, cognitive biases, and the inherently
difficult nature of the task. The potential benefits, on the other hand,
include improved anticipation and identification of impacts, better
communication with policy and governance experts, and a general strengthening
of the norms around responsible research. To maximise the chance of success, we
recommend measures to increase transparency, improve guidance, create
incentives to engage earnestly with the process, and facilitate public
deliberation on the requirement's merits and future. Perhaps the most important
contributions from this analysis are the insights we can gain regarding
effective community-based governance and the role and responsibility of the AI
research community more broadly.
Related papers
- AI and the Transformation of Accountability and Discretion in Urban Governance [1.9152655229960793]
The paper highlights AI's potential to reposition human discretion and reshape specific types of accountability.
It advances a framework for responsible AI adoption, ensuring that urban governance remains adaptive, transparent, and aligned with public values.
arXiv Detail & Related papers (2025-02-18T18:11:39Z)
- Safety is Essential for Responsible Open-Ended Systems [47.172735322186]
Open-Endedness is the ability of AI systems to continuously and autonomously generate novel and diverse artifacts or solutions.
This position paper argues that the inherently dynamic and self-propagating nature of Open-Ended AI introduces significant, underexplored risks.
arXiv Detail & Related papers (2025-02-06T21:32:07Z)
- Toward Ethical AI: A Qualitative Analysis of Stakeholder Perspectives [0.0]
This study explores stakeholder perspectives on privacy in AI systems, focusing on educators, parents, and AI professionals.
Using qualitative analysis of survey responses from 227 participants, the research identifies key privacy risks, including data breaches, ethical misuse, and excessive data collection.
The findings provide actionable insights into balancing the benefits of AI with robust privacy protections.
arXiv Detail & Related papers (2025-01-23T02:06:25Z)
- Responsible AI Governance: A Response to UN Interim Report on Governing AI for Humanity [15.434533537570614]
The report emphasizes the transformative potential of AI in achieving the Sustainable Development Goals.
It acknowledges the need for robust governance to mitigate associated risks.
The report concludes with actionable principles for fostering responsible AI governance.
arXiv Detail & Related papers (2024-11-29T18:57:24Z)
- Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure [4.578401882034969]
We focus on how model performance evaluation may inform or inhibit probing of model limitations, biases, and other risks.
Our findings can inform AI providers and legal scholars in designing interventions and policies that preserve open-source innovation while incentivizing ethical uptake.
arXiv Detail & Related papers (2024-09-27T19:09:40Z)
- Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z)
- ABI Approach: Automatic Bias Identification in Decision-Making Under Risk based in an Ontology of Behavioral Economics [46.57327530703435]
Risk-seeking preferences for losses, driven by biases such as loss aversion, pose challenges and can result in severe negative consequences.
This research introduces the ABI approach, a novel solution designed to support organizational decision-makers by automatically identifying and explaining risk-seeking preferences.
arXiv Detail & Related papers (2024-05-22T23:53:46Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- Enhanced well-being assessment as basis for the practical implementation of ethical and rights-based normative principles for AI [0.0]
We propose the practical application of an enhanced well-being impact assessment framework for Autonomous and Intelligent Systems.
This process could enable a human-centered, algorithmically supported approach to understanding the impacts of AI systems.
arXiv Detail & Related papers (2020-07-29T13:26:05Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.