Principles to Practices for Responsible AI: Closing the Gap
- URL: http://arxiv.org/abs/2006.04707v1
- Date: Mon, 8 Jun 2020 16:04:44 GMT
- Title: Principles to Practices for Responsible AI: Closing the Gap
- Authors: Daniel Schiff and Bogdana Rakova and Aladdin Ayesh and Anat Fanti and
Michael Lennon
- Abstract summary: We argue that an impact assessment framework is a promising approach to close the principles-to-practices gap.
We review a case study of AI's use in forest ecosystem restoration, demonstrating how an impact assessment framework can translate into effective and responsible AI practices.
- Score: 0.1749935196721634
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Companies have considered adoption of various high-level artificial
intelligence (AI) principles for responsible AI, but there is less clarity on
how to implement these principles as organizational practices. This paper
reviews the principles-to-practices gap. We outline five explanations for this
gap ranging from a disciplinary divide to an overabundance of tools. In turn,
we argue that an impact assessment framework which is broad, operationalizable,
flexible, iterative, guided, and participatory is a promising approach to close
the principles-to-practices gap. Finally, to help practitioners with applying
these recommendations, we review a case study of AI's use in forest ecosystem
restoration, demonstrating how an impact assessment framework can translate
into effective and responsible AI practices.
Related papers
- Crossing the principle-practice gap in AI ethics with ethical problem-solving [0.0]
How to bridge the principle-practice gap separating ethical discourse from the technical side of AI development remains an open problem.
Ethical problem-solving (EPS) is a methodology promoting responsible, human-centric, and value-oriented AI development.
We utilize EPS as a blueprint to propose the implementation of Ethics as a Service Platform.
arXiv Detail & Related papers (2024-04-16T14:35:13Z)
- AI Ethics and Governance in Practice: An Introduction [0.4091406230302996]
AI systems may have transformative and long-term effects on individuals and society.
To manage these impacts responsibly, considerations of AI ethics and governance must be a first priority.
We introduce and describe our PBG Framework, a multi-tiered governance model that enables project teams to integrate ethical values and practical principles into their innovation practices.
arXiv Detail & Related papers (2024-02-19T22:43:19Z)
- Understanding What Affects Generalization Gap in Visual Reinforcement Learning: Theory and Empirical Evidence [58.46374479945535]
This paper theoretically identifies the key factors that contribute to the generalization gap when the testing environment contains distractors.
Our theory indicates that minimizing the representation distance between training and testing environments, which aligns with human intuition, is the most critical factor in reducing the generalization gap.
arXiv Detail & Related papers (2024-02-05T03:27:52Z)
- Resolving Ethics Trade-offs in Implementing Responsible AI [18.894725256708128]
We cover five approaches, ranging from rudimentary to complex, for addressing tensions between ethics aspects via trade-offs.
None of the approaches is likely to be appropriate for all organisations, systems, or applications.
We propose a framework which consists of: (i) proactive identification of tensions, (ii) prioritisation and weighting of ethics aspects, and (iii) justification and documentation of trade-off decisions.
arXiv Detail & Related papers (2024-01-16T04:14:23Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the importance of addressing bias as part of developing a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Responsible AI Considerations in Text Summarization Research: A Review of Current Practices [89.85174013619883]
We focus on text summarization, a common NLP task largely overlooked by the responsible AI community.
We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020 and 2022.
We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals.
arXiv Detail & Related papers (2023-11-18T15:35:36Z)
- The Participatory Turn in AI Design: Theoretical Foundations and the Current State of Practice [64.29355073494125]
This article aims to ground what we dub the "participatory turn" in AI design by synthesizing existing theoretical literature on participation.
We articulate empirical findings concerning the current state of participatory practice in AI design based on an analysis of recently published research and semi-structured interviews with 12 AI researchers and practitioners.
arXiv Detail & Related papers (2023-10-02T05:30:42Z)
- AI Fairness: from Principles to Practice [0.0]
This paper summarizes and evaluates various approaches, methods, and techniques for pursuing fairness in AI systems.
It proposes practical guidelines for defining, measuring, and preventing bias in AI.
arXiv Detail & Related papers (2022-07-20T11:37:46Z)
- Case Study: Deontological Ethics in NLP [119.53038547411062]
We study one ethical theory, namely deontological ethics, from the perspective of NLP.
In particular, we focus on the generalization principle and the respect for autonomy through informed consent.
We provide four case studies to demonstrate how these principles can be used with NLP systems.
arXiv Detail & Related papers (2020-10-09T16:04:51Z)
- Enhanced well-being assessment as basis for the practical implementation of ethical and rights-based normative principles for AI [0.0]
We propose the practical application of an enhanced well-being impact assessment framework for Autonomous and Intelligent Systems.
This process could enable a human-centered, algorithmically supported approach to understanding the impacts of AI systems.
arXiv Detail & Related papers (2020-07-29T13:26:05Z)
- On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We emphasize concrete actions that practitioners can take to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.