Institutionalising Ethics in AI through Broader Impact Requirements
- URL: http://arxiv.org/abs/2106.11039v1
- Date: Sun, 30 May 2021 12:36:43 GMT
- Title: Institutionalising Ethics in AI through Broader Impact Requirements
- Authors: Carina Prunkl, Carolyn Ashurst, Markus Anderljung, Helena Webb, Jan
Leike, Allan Dafoe
- Abstract summary: We reflect on a novel governance initiative by one of the world's largest AI conferences.
NeurIPS introduced a requirement for submitting authors to include a statement on the broader societal impacts of their research.
We investigate the risks, challenges and potential benefits of such an initiative.
- Score: 8.793651996676095
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Turning principles into practice is one of the most pressing challenges of
artificial intelligence (AI) governance. In this article, we reflect on a novel
governance initiative by one of the world's largest AI conferences. In 2020,
the Conference on Neural Information Processing Systems (NeurIPS) introduced a
requirement for submitting authors to include a statement on the broader
societal impacts of their research. Drawing insights from similar governance
initiatives, including institutional review boards (IRBs) and impact
requirements for funding applications, we investigate the risks, challenges and
potential benefits of such an initiative. Among the challenges, we list a lack
of recognised best practice and procedural transparency, researcher opportunity
costs, institutional and social pressures, cognitive biases, and the inherently
difficult nature of the task. The potential benefits, on the other hand,
include improved anticipation and identification of impacts, better
communication with policy and governance experts, and a general strengthening
of the norms around responsible research. To maximise the chance of success, we
recommend measures to increase transparency, improve guidance, create
incentives to engage earnestly with the process, and facilitate public
deliberation on the requirement's merits and future. Perhaps the most important
contributions from this analysis are the insights we can gain regarding
effective community-based governance and the role and responsibility of the AI
research community more broadly.
Related papers
- Responsible AI in Open Ecosystems: Reconciling Innovation with Risk Assessment and Disclosure [4.578401882034969]
We focus on how model performance evaluation may inform or inhibit probing of model limitations, biases, and other risks.
Our findings can inform AI providers and legal scholars in designing interventions and policies that preserve open-source innovation while incentivizing ethical uptake.
arXiv Detail & Related papers (2024-09-27T19:09:40Z) - Open Problems in Technical AI Governance [93.89102632003996]
Technical AI governance refers to technical analysis and tools for supporting the effective governance of AI.
This paper is intended as a resource for technical researchers or research funders looking to contribute to AI governance.
arXiv Detail & Related papers (2024-07-20T21:13:56Z) - Challenges and Best Practices in Corporate AI Governance: Lessons from the Biopharmaceutical Industry [0.0]
We discuss challenges that any organization attempting to operationalize AI governance will have to face.
These include questions concerning how to define the material scope of AI governance.
We hope to provide project managers, AI practitioners, and data privacy officers responsible for designing and implementing AI governance frameworks with general best practices.
arXiv Detail & Related papers (2024-07-07T12:01:42Z) - ABI Approach: Automatic Bias Identification in Decision-Making Under Risk based in an Ontology of Behavioral Economics [46.57327530703435]
Risk-seeking preferences for losses, driven by biases such as loss aversion, pose challenges and can result in severe negative consequences.
This research introduces the ABI approach, a novel solution designed to support organizational decision-makers by automatically identifying and explaining risk-seeking preferences.
arXiv Detail & Related papers (2024-05-22T23:53:46Z) - Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z) - Responsible AI Governance: A Systematic Literature Review [8.318630741859113]
This paper aims to examine the existing literature on AI Governance.
The focus of this study is to analyse the literature to answer key questions: WHO is accountable for AI systems' governance, WHAT elements are being governed, WHEN governance occurs within the AI development life cycle, and HOW it is executed through various mechanisms like frameworks, tools, standards, policies, or models.
The findings of this study provide a foundational basis for future research and development of comprehensive governance models that align with RAI principles.
arXiv Detail & Related papers (2023-12-18T05:22:36Z) - Investigating Responsible AI for Scientific Research: An Empirical Study [4.597781832707524]
The push for Responsible AI (RAI) in such institutions underscores the increasing emphasis on integrating ethical considerations within AI design and development.
This paper aims to assess the awareness and preparedness regarding the ethical risks inherent in AI design and development.
Our results have revealed certain knowledge gaps concerning ethical, responsible, and inclusive AI, with limitations in awareness of the available AI ethics frameworks.
arXiv Detail & Related papers (2023-12-15T06:40:27Z) - Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z) - Fairness in Recommender Systems: Research Landscape and Future Directions [119.67643184567623]
We review the concepts and notions of fairness that were put forward in the area in the recent past.
We present an overview of how research in this field is currently operationalized.
Overall, our analysis of recent works points to certain research gaps.
arXiv Detail & Related papers (2022-05-23T08:34:25Z) - Enhanced well-being assessment as basis for the practical implementation of ethical and rights-based normative principles for AI [0.0]
We propose the practical application of an enhanced well-being impact assessment framework for Autonomous and Intelligent Systems.
This process could enable a human-centered algorithmically-supported approach to the understanding of the impacts of AI systems.
arXiv Detail & Related papers (2020-07-29T13:26:05Z) - Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable
Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.