Towards An Ethics-Audit Bot
- URL: http://arxiv.org/abs/2103.15746v1
- Date: Mon, 29 Mar 2021 16:33:22 GMT
- Title: Towards An Ethics-Audit Bot
- Authors: Siani Pearson and Martin Lloyd and Vivek Nallur
- Abstract summary: We propose a system that is able to conduct an ethical audit of a target system, given certain socio-technical conditions.
To be more specific, we propose the creation of a bot that is able to support organisations in ensuring that their software development lifecycles contain processes that meet certain ethical standards.
- Score: 0.6445605125467572
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper we focus on artificial intelligence (AI) for governance, not
governance for AI, and on just one aspect of governance, namely ethics audit.
Different kinds of ethical audit bots are possible, but who makes the choices
and what are the implications? In this paper, we do not provide
ethical/philosophical solutions, but rather focus on the technical aspects of
what an AI-based solution for validating the ethical soundness of a target
system would be like. We propose a system that is able to conduct an ethical
audit of a target system, given certain socio-technical conditions. To be more
specific, we propose the creation of a bot that is able to support
organisations in ensuring that their software development lifecycles contain
processes that meet certain ethical standards.
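The paper does not describe an implementation, but as a rough illustration of what "checking that a software development lifecycle contains processes meeting certain ethical standards" could mean in practice, the following minimal Python sketch flags lifecycle stages that lack required ethics processes. All stage names, process names, and the ETHICAL_REQUIREMENTS table are hypothetical assumptions for illustration only, not taken from the paper.

```python
# Hypothetical sketch only: the paper does not specify an implementation.
# Models an audit bot that checks whether each declared SDLC stage contains
# the processes required by an (illustrative) ethical standard.

from dataclasses import dataclass, field

# Illustrative requirements per SDLC stage; a real standard would differ.
ETHICAL_REQUIREMENTS = {
    "requirements": {"stakeholder_impact_assessment"},
    "design": {"privacy_by_design_review"},
    "implementation": {"bias_testing"},
    "deployment": {"incident_response_plan", "audit_logging"},
}

@dataclass
class SDLCStage:
    name: str
    processes: set = field(default_factory=set)

def audit(stages: list[SDLCStage]) -> dict[str, set]:
    """Return the missing required processes for each declared stage."""
    gaps = {}
    for stage in stages:
        required = ETHICAL_REQUIREMENTS.get(stage.name, set())
        missing = required - stage.processes
        if missing:
            gaps[stage.name] = missing
    return gaps

if __name__ == "__main__":
    lifecycle = [
        SDLCStage("design", {"privacy_by_design_review"}),
        SDLCStage("deployment", {"audit_logging"}),
    ]
    print(audit(lifecycle))  # {'deployment': {'incident_response_plan'}}
```

A real ethics-audit bot would of course need to verify evidence that these processes were actually carried out, not merely declared; the sketch only shows the checklist-comparison idea.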
Related papers
- Using AI Alignment Theory to understand the potential pitfalls of regulatory frameworks [55.2480439325792]
This paper critically examines the European Union's Artificial Intelligence Act (EU AI Act).
It uses insights from Alignment Theory (AT) research, which focuses on the potential pitfalls of technical alignment in Artificial Intelligence.
As we apply these concepts to the EU AI Act, we uncover potential vulnerabilities and areas for improvement in the regulation.
arXiv Detail & Related papers (2024-10-10T17:38:38Z) - On the Efficiency of Ethics as a Governing Tool for Artificial Intelligence [0.0]
Artificial Intelligence Ethics and Safety is an emerging research field that has been gaining popularity in recent years.
Several private, public and non-governmental organizations have published guidelines proposing ethical principles for regulating the use and development of autonomous intelligent systems.
We conduct a critical analysis of the current state of AI Ethics and suggest that this form of governance is not sufficient to govern the AI industry and its developers.
arXiv Detail & Related papers (2022-10-27T09:46:33Z) - Fairness in Agreement With European Values: An Interdisciplinary Perspective on AI Regulation [61.77881142275982]
This interdisciplinary position paper considers various concerns surrounding fairness and discrimination in AI, and discusses how AI regulations address them.
We first look at AI and fairness through the lenses of law, (AI) industry, sociotechnology, and (moral) philosophy, and present various perspectives.
We identify and propose the roles AI Regulation should take to make the endeavor of the AI Act a success in terms of AI fairness concerns.
arXiv Detail & Related papers (2022-06-08T12:32:08Z) - Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z) - A Deployment Model to Extend Ethically Aligned AI Implementation Method ECCOLA [5.28595286827031]
This study aims to extend ECCOLA with a deployment model to drive its adoption.
The model includes simple metrics to facilitate the communication of ethical gaps or outcomes of ethical AI development.
arXiv Detail & Related papers (2021-10-12T12:22:34Z) - An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z) - Ethics as a service: a pragmatic operationalisation of AI Ethics [1.1083289076967895]
A gap exists between the theory of AI ethics principles and the practical design of AI systems.
We seek to address this gap by exploring why principles and technical translational tools are still needed, even if they are limited.
arXiv Detail & Related papers (2021-02-11T21:29:25Z) - AI virtues -- The missing link in putting AI ethics into practice [0.0]
The paper defines four basic AI virtues, namely justice, honesty, responsibility and care.
It defines two second-order AI virtues, prudence and fortitude, that support the achievement of the basic virtues.
arXiv Detail & Related papers (2020-11-25T14:14:47Z) - ECCOLA -- a Method for Implementing Ethically Aligned AI Systems [11.31664099885664]
We present a method for putting AI ethics into practice.
The method, ECCOLA, has been iteratively developed using a cyclical action design research approach.
arXiv Detail & Related papers (2020-04-17T17:57:07Z) - Hacia los Comités de Ética en Inteligencia Artificial (Towards Ethics Committees in Artificial Intelligence) [68.8204255655161]
It is a priority to create such rules, along with specialized organizations that can oversee compliance with them.
This work proposes the creation, within universities, of Ethics Committees or Commissions specialized in Artificial Intelligence.
arXiv Detail & Related papers (2020-02-11T23:48:31Z) - Learning from Learning Machines: Optimisation, Rules, and Social Norms [91.3755431537592]
It appears that the area of AI that is most analogous to the behaviour of economic entities is that of morally good decision-making.
Recent successes of deep learning for AI suggest that more implicit specifications work better than explicit ones for solving such problems.
arXiv Detail & Related papers (2019-12-29T17:42:06Z)