Catalog of General Ethical Requirements for AI Certification
- URL: http://arxiv.org/abs/2408.12289v1
- Date: Thu, 22 Aug 2024 10:58:41 GMT
- Title: Catalog of General Ethical Requirements for AI Certification
- Authors: Nicholas Kluge Corrêa, Julia Maria Mönig,
- Abstract summary: We present overall ethical requirements and six ethical principles with value-specific recommendations for tools to implement these principles into technology.
Our work is aimed at stakeholders who can take it as a potential blueprint to fulfill minimum ethical requirements for trustworthy AI and AI Certification.
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: This whitepaper offers normative and practical guidance for developers of artificial intelligence (AI) systems to achieve "Trustworthy AI". In it, we present overall ethical requirements and six ethical principles with value-specific recommendations for tools to implement these principles into technology. Our value-specific recommendations address the principles of fairness, privacy and data protection, safety and robustness, sustainability, transparency and explainability, and truthfulness. For each principle, we also present examples of criteria for risk assessment and categorization of AI systems and applications in line with the categories of the European Union (EU) AI Act. Our work is aimed at stakeholders who can take it as a potential blueprint to fulfill minimum ethical requirements for trustworthy AI and AI Certification.
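The abstract's risk-categorization criteria follow the risk tiers defined in the EU AI Act. As a purely illustrative sketch (not taken from the whitepaper; the keyword rules and function names below are hypothetical), the short Python snippet shows what a toy mapping from an AI use case to those tiers might look like:

```python
# Illustrative sketch only (not the whitepaper's criteria). It assumes the four
# risk tiers named in the EU AI Act and uses hypothetical keyword rules.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice (e.g. social scoring by public authorities)"
    HIGH = "high risk (e.g. employment, credit scoring, biometric identification)"
    LIMITED = "limited risk with transparency obligations (e.g. chatbots, deepfakes)"
    MINIMAL = "minimal risk (e.g. spam filters, video games)"


def categorize(use_case: str) -> RiskTier:
    """Toy keyword lookup from a use-case description to an EU AI Act risk tier."""
    text = use_case.lower()
    if any(k in text for k in ("social scoring", "subliminal manipulation")):
        return RiskTier.UNACCEPTABLE
    if any(k in text for k in ("hiring", "credit", "biometric", "education", "medical")):
        return RiskTier.HIGH
    if any(k in text for k in ("chatbot", "deepfake")):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL


if __name__ == "__main__":
    print(categorize("credit scoring model"))       # RiskTier.HIGH
    print(categorize("customer-service chatbot"))   # RiskTier.LIMITED
```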
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems [88.80306881112313]
We will introduce and define a family of approaches to AI safety, which we will refer to as guaranteed safe (GS) AI.
The core feature of these approaches is that they aim to produce AI systems which are equipped with high-assurance quantitative safety guarantees.
We outline a number of approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions to them.
arXiv Detail & Related papers (2024-05-10T17:38:32Z)
- POLARIS: A framework to guide the development of Trustworthy AI systems [3.02243271391691]
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI - designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z)
- Towards Responsible AI in Banking: Addressing Bias for Fair Decision-Making [69.44075077934914]
"Responsible AI" emphasizes the critical nature of addressing biases within the development of a corporate culture.
This thesis is structured around three fundamental pillars: understanding bias, mitigating bias, and accounting for bias.
In line with open-source principles, we have released Bias On Demand and FairView as accessible Python packages.
arXiv Detail & Related papers (2024-01-13T14:07:09Z)
- Implementing AI Ethics: Making Sense of the Ethical Requirements [6.244518754129957]
We use the Ethics Guidelines for Trustworthy AI as our reference for ethical requirements and an Agile portfolio management framework to analyze their implementation.
Our findings reveal that the privacy and data governance ethical requirements are generally treated as legal requirements, with no other treatment of ethical requirements identified.
The findings also show that, in practice, the technical robustness and safety requirements are implemented as risk requirements, and the societal and environmental well-being requirements as sustainability requirements.
arXiv Detail & Related papers (2023-06-11T19:13:36Z)
- Towards Implementing Responsible AI [22.514717870367623]
The salient findings cover four aspects of AI system design and development, adapting processes used in software engineering.
arXiv Detail & Related papers (2022-05-09T14:59:23Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- AI Ethics Principles in Practice: Perspectives of Designers and Developers [19.16435145144916]
We examine the practices and experiences of researchers and engineers from Australia's national scientific research agency (CSIRO).
Interviews were used to examine how the practices of the participants relate to and align with a set of high-level AI ethics principles proposed by the Australian Government.
arXiv Detail & Related papers (2021-12-14T15:28:45Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper presents a comprehensive analysis of existing concepts from different disciplines that tackle the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- A Framework for Ethical AI at the United Nations [0.0]
This paper aims to provide an overview of the ethical concerns in artificial intelligence (AI) and the framework that is needed to mitigate those risks.
It suggests a practical path to ensure the development and use of AI at the United Nations (UN) aligns with our ethical values.
arXiv Detail & Related papers (2021-04-09T23:44:37Z)
- On the Morality of Artificial Intelligence [154.69452301122175]
We propose conceptual and practical principles and guidelines for Machine Learning research and deployment.
We insist on concrete actions that can be taken by practitioners to pursue a more ethical and moral practice of ML aimed at using AI for social good.
arXiv Detail & Related papers (2019-12-26T23:06:54Z)