ECCOLA -- a Method for Implementing Ethically Aligned AI Systems
- URL: http://arxiv.org/abs/2004.08377v2
- Date: Mon, 9 Nov 2020 16:09:11 GMT
- Title: ECCOLA -- a Method for Implementing Ethically Aligned AI Systems
- Authors: Ville Vakkuri, Kai-Kristian Kemell, Pekka Abrahamsson
- Abstract summary: We present a method for implementing AI ethics into practice.
The method, ECCOLA, has been iteratively developed using a cyclical action design research approach.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Various recent Artificial Intelligence (AI) system failures, some of which
have made the global headlines, have highlighted issues in these systems. These
failures have resulted in calls for more ethical AI systems that better take
into account their effects on various stakeholders. However, implementing AI
ethics into practice is still an ongoing challenge. High-level guidelines for
doing so exist, devised by governments and private organizations alike, but
lack practicality for developers. To address this issue, in this paper, we
present a method for implementing AI ethics. The method, ECCOLA, has been
iteratively developed using a cyclical action design research approach. The
method aims at making the high-level AI ethics principles more practical,
making it possible for developers to more easily implement them in practice.
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act)
Uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- POLARIS: A framework to guide the development of Trustworthy AI systems [3.02243271391691]
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI - designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Managing extreme AI risks amid rapid progress [171.05448842016125]
We describe risks that include large-scale social harms, malicious uses, and irreversible loss of human control over autonomous AI systems.
There is a lack of consensus about how exactly such risks arise, and how to manage them.
Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.
arXiv Detail & Related papers (2023-10-26T17:59:06Z)
- Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z)
- Governance of Ethical and Trustworthy AI Systems: Research Gaps in the ECCOLA Method [5.28595286827031]
This research analyzes the ECCOLA method for developing ethical and trustworthy AI systems.
The results demonstrate that while ECCOLA fully facilitates AI governance in corporate governance practices in all its processes, some of its practices do not fully foster data governance and information governance practices.
arXiv Detail & Related papers (2021-11-11T13:54:31Z)
- A Deployment Model to Extend Ethically Aligned AI Implementation Method ECCOLA [5.28595286827031]
This study aims to extend ECCOLA with a deployment model to drive the adoption of ECCOLA.
The model includes simple metrics to facilitate the communication of ethical gaps or outcomes of ethical AI development.
arXiv Detail & Related papers (2021-10-12T12:22:34Z)
- Building Bridges: Generative Artworks to Explore AI Ethics [56.058588908294446]
In recent years, there has been an increased emphasis on understanding and mitigating adverse impacts of artificial intelligence (AI) technologies on society.
A significant challenge in the design of ethical AI systems is that there are multiple stakeholders in the AI pipeline, each with their own set of constraints and interests.
This position paper outlines some potential ways in which generative artworks can play this role by serving as accessible and powerful educational tools.
arXiv Detail & Related papers (2021-06-25T22:31:55Z)
- Ethics as a service: a pragmatic operationalisation of AI Ethics [1.1083289076967895]
A gap exists between the theory of AI ethics principles and the practical design of AI systems.
This is the question we seek to address here by exploring why principles and technical translational tools are still needed even if they are limited.
arXiv Detail & Related papers (2021-02-11T21:29:25Z)
- Implementing AI Ethics in Practice: An Empirical Evaluation of the RESOLVEDD Strategy [6.7298812735467095]
We empirically evaluate an existing method from the field of business ethics, the RESOLVEDD strategy, in the context of ethical system development.
One of our key findings is that, even though the use of the ethical method was forced upon the participants, its utilization nonetheless facilitated ethical consideration in the projects.
arXiv Detail & Related papers (2020-04-21T17:58:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.