A Deployment Model to Extend Ethically Aligned AI Implementation Method ECCOLA
- URL: http://arxiv.org/abs/2110.05933v1
- Date: Tue, 12 Oct 2021 12:22:34 GMT
- Title: A Deployment Model to Extend Ethically Aligned AI Implementation Method ECCOLA
- Authors: Jani Antikainen, Mamia Agbese, Hanna-Kaisa Alanen, Erika Halme, Hannakaisa Isomäki, Marianna Jantunen, Kai-Kristian Kemell, Rebekah Rousi, Heidi Vainio-Pekka, Ville Vakkuri
- Abstract summary: This study aims to extend ECCOLA with a deployment model to drive the adoption of ECCOLA.
The model includes simple metrics to facilitate the communication of ethical gaps or outcomes of ethical AI development.
- Score: 5.28595286827031
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial intelligence (AI) ethics struggles to gain ground in actionable methods and models that practitioners can use while developing and implementing ethically sound AI systems. AI ethics remains a vague concept, lacking a consensus definition or theoretical grounding and bearing little connection to practice. Practice involving primarily technical tasks, such as software development, is not aptly equipped to process and decide upon ethical considerations. Efforts to create tools and guidelines for people working on AI development have concentrated almost solely on the technical aspects of AI. A few exceptions exist, such as the ECCOLA method for creating ethically aligned AI systems. ECCOLA has proven results in terms of increased ethical consideration in AI systems development. Yet it is a novel innovation, and room for development remains. This study aims to extend ECCOLA with a deployment model to drive its adoption, as any method, no matter how good, is of no value without adoption and use. The model includes simple metrics to facilitate the communication of ethical gaps or outcomes of ethical AI development. It offers the opportunity to assess any AI system at any lifecycle phase, opening possibilities such as analyzing the ethicality of an AI system under acquisition.
Related papers
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z) - Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z) - RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems [4.268504966623082]
Complying with the EU AI Act (AIA) guidelines while developing and implementing AI systems will soon be mandatory within the EU.
Practitioners lack actionable instructions to operationalise ethics during AI systems development.
A literature review of different ethical guidelines revealed inconsistencies in the principles addressed and the terminology used to describe them.
arXiv Detail & Related papers (2023-05-29T11:57:07Z) - Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z) - Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow examining and testing of AI system predictions to establish a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z) - Governance of Ethical and Trustworthy AI Systems: Research Gaps in the ECCOLA Method [5.28595286827031]
This research analyzes the ECCOLA method for developing ethical and trustworthy AI systems.
The results demonstrate that while ECCOLA fully facilitates AI governance in corporate governance practices in all its processes, some of its practices do not fully foster data governance and information governance practices.
arXiv Detail & Related papers (2021-11-11T13:54:31Z) - Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Ethics as a service: a pragmatic operationalisation of AI Ethics [1.1083289076967895]
A gap exists between the theory of AI ethics principles and the practical design of AI systems.
This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited.
arXiv Detail & Related papers (2021-02-11T21:29:25Z) - Implementing AI Ethics in Practice: An Empirical Evaluation of the RESOLVEDD Strategy [6.7298812735467095]
We empirically evaluate an existing method from the field of business ethics, the RESOLVEDD strategy, in the context of ethical system development.
One of our key findings is that, even though the use of the ethical method was forced upon the participants, its utilization nonetheless facilitated ethical consideration in the projects.
arXiv Detail & Related papers (2020-04-21T17:58:53Z) - ECCOLA -- a Method for Implementing Ethically Aligned AI Systems [11.31664099885664]
We present a method for implementing AI ethics into practice.
The method, ECCOLA, has been iteratively developed using a cyclical action design research approach.
arXiv Detail & Related papers (2020-04-17T17:57:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.