Governance of Ethical and Trustworthy AI Systems: Research Gaps in the
ECCOLA Method
- URL: http://arxiv.org/abs/2111.06207v1
- Date: Thu, 11 Nov 2021 13:54:31 GMT
- Title: Governance of Ethical and Trustworthy AI Systems: Research Gaps in the
ECCOLA Method
- Authors: Mamia Agbese, Hanna-Kaisa Alanen, Jani Antikainen, Erika Halme,
Hannakaisa Isomäki, Marianna Jantunen, Kai-Kristian Kemell, Rebekah Rousi,
Heidi Vainio-Pekka and Ville Vakkuri
- Abstract summary: This research analyzes the ECCOLA method for developing ethical and trustworthy AI systems.
The results demonstrate that while ECCOLA fully facilitates AI governance with respect to corporate governance practices in all of its processes, some of its practices do not fully foster data governance and information governance practices.
- Score: 5.28595286827031
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Advances in machine learning (ML) technologies have greatly improved
Artificial Intelligence (AI) systems. As a result, AI systems have become
ubiquitous, with their application prevalent in virtually all sectors. However,
AI systems have prompted ethical concerns, especially as their usage crosses
boundaries in sensitive areas such as healthcare, transportation, and security.
Users are therefore calling for better AI governance practices in ethical AI
systems, and AI development methods are encouraged to foster these practices.
This research analyzes the ECCOLA method for developing ethical and
trustworthy AI systems to determine whether it enables AI governance in
development processes through ethical practices. The results demonstrate that
while ECCOLA fully facilitates AI governance with respect to corporate
governance practices in all of its processes, some of its practices do not
fully foster data governance and information governance practices. This
indicates that the method can be further improved.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It draws on insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General-purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- POLARIS: A framework to guide the development of Trustworthy AI systems [3.02243271391691]
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI, designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [73.78776682247187]
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z)
- Putting AI Ethics into Practice: The Hourglass Model of Organizational AI Governance [0.0]
We present an AI governance framework, which targets organizations that develop and use AI systems.
The framework is designed to help organizations deploying AI systems translate ethical AI principles into practice.
arXiv Detail & Related papers (2022-06-01T08:55:27Z)
- A Deployment Model to Extend Ethically Aligned AI Implementation Method ECCOLA [5.28595286827031]
This study aims to extend ECCOLA with a deployment model to drive the adoption of ECCOLA.
The model includes simple metrics to facilitate the communication of ethical gaps or outcomes of ethical AI development.
arXiv Detail & Related papers (2021-10-12T12:22:34Z)
- Trustworthy AI: From Principles to Practices [44.67324097900778]
Many current AI systems have been found to be vulnerable to imperceptible attacks, biased against underrepresented groups, and lacking in user privacy protection.
In this review, we strive to provide AI practitioners with a comprehensive guide to building trustworthy AI systems.
To unify the current fragmented approaches towards trustworthy AI, we propose a systematic approach that considers the entire lifecycle of AI systems.
arXiv Detail & Related papers (2021-10-04T03:20:39Z)
- AI Governance for Businesses [2.072259480917207]
AI governance aims at leveraging AI through the effective use of data and the minimization of AI-related costs and risks.
This work views AI products as systems, where key functionality is delivered by machine learning (ML) models leveraging (training) data.
Our framework decomposes AI governance into governance of data, (ML) models and (AI) systems along four dimensions.
arXiv Detail & Related papers (2020-11-20T22:31:37Z)
- ECCOLA -- a Method for Implementing Ethically Aligned AI Systems [11.31664099885664]
We present a method for implementing AI ethics into practice.
The method, ECCOLA, has been iteratively developed using a cyclical action design research approach.
arXiv Detail & Related papers (2020-04-17T17:57:07Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.