How to Assess Trustworthy AI in Practice
- URL: http://arxiv.org/abs/2206.09887v2
- Date: Tue, 28 Jun 2022 14:23:47 GMT
- Title: How to Assess Trustworthy AI in Practice
- Authors: Roberto V. Zicari, Julia Amann, Frédérick Bruneault, Megan Coffee,
Boris Düdder, Eleanore Hickman, Alessio Gallucci, Thomas Krendl Gilbert,
Thilo Hagendorff, Irmhild van Halem, Elisabeth Hildt, Sune Holm, Georgios
Kararigas, Pedro Kringen, Vince I. Madai, Emilie Wiinblad Mathez, Jesmin
Jahan Tithi, Dennis Vetter, Magnus Westerlund, Renee Wurth
- Abstract summary: Z-Inspection® is a holistic process used to evaluate the trustworthiness of AI-based technologies.
It uses the European Union's High-Level Expert Group's (EU HLEG) guidelines for trustworthy AI.
- Score: 0.22740899647050103
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This report is a methodological reflection on Z-Inspection®.
Z-Inspection® is a
holistic process used to evaluate the trustworthiness of AI-based technologies
at different stages of the AI lifecycle. It focuses, in particular, on the
identification and discussion of ethical issues and tensions through the
elaboration of socio-technical scenarios. It uses the European Union's
High-Level Expert Group's (EU HLEG) guidelines for trustworthy AI. This report
illustrates for both AI researchers and AI practitioners how the EU HLEG
guidelines for trustworthy AI can be applied in practice. We share the lessons
learned from conducting a series of independent assessments to evaluate the
trustworthiness of AI systems in healthcare. We also share key recommendations
and practical suggestions on how to ensure a rigorous trustworthy AI assessment
throughout the life-cycle of an AI system.
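Z-Inspection® is a qualitative, participatory process rather than a piece of software, but the bookkeeping it implies can be illustrated: ethical issues and tensions surfaced while discussing socio-technical scenarios are related back to the EU HLEG guidelines. The sketch below is a minimal, hypothetical assessment log in Python; only the seven requirement names come from the EU HLEG guidelines, while every class, field, and function name is an assumption made for this illustration.

```python
from dataclasses import dataclass, field

# The seven key requirements of the EU HLEG "Ethics Guidelines for Trustworthy AI".
# Only these names come from the guidelines; the rest of this sketch is illustrative.
EU_HLEG_REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class EthicalTension:
    """An ethical issue or tension identified while discussing a socio-technical
    scenario, mapped to the EU HLEG requirements it touches."""
    description: str
    scenario: str                                     # socio-technical scenario it arose from
    requirements: list = field(default_factory=list)  # affected EU HLEG requirements
    recommendation: str = ""                          # mitigation agreed on by the assessment team

def unmapped_requirements(tensions):
    """Requirements not yet touched by any logged tension -- a crude completeness
    check on the log, not a substitute for the interdisciplinary discussion."""
    covered = {r for t in tensions for r in t.requirements}
    return [r for r in EU_HLEG_REQUIREMENTS if r not in covered]

# Hypothetical example from a healthcare-style scenario.
log = [
    EthicalTension(
        description="Clinicians cannot see why the model flags a patient as high risk",
        scenario="Triage support in an emergency department",
        requirements=["Transparency", "Human agency and oversight"],
    )
]
print(unmapped_requirements(log))
```

In the actual process, such a mapping is produced and debated by an interdisciplinary team; the data structure above only records its outcome.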
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization (the generic ERM objective is recalled in a short sketch after this list).
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Ethical AI Governance: Methods for Evaluating Trustworthy AI [0.552480439325792]
Trustworthy Artificial Intelligence (TAI) integrates ethics that align with human values.
TAI evaluation aims to ensure ethical standards and safety in AI development and usage.
arXiv Detail & Related papers (2024-08-28T09:25:50Z)
- Responsible Artificial Intelligence: A Structured Literature Review [0.0]
The EU has recently issued several publications emphasizing the necessity of trust in AI.
This highlights the urgent need for international regulation.
This paper introduces a comprehensive and, to our knowledge, the first unified definition of responsible AI.
arXiv Detail & Related papers (2024-03-11T17:01:13Z)
- Testing autonomous vehicles and AI: perspectives and challenges from cybersecurity, transparency, robustness and fairness [53.91018508439669]
The study explores the complexities of integrating Artificial Intelligence into Autonomous Vehicles (AVs).
It examines the challenges introduced by AI components and the impact on testing procedures.
The paper identifies significant challenges and suggests future directions for research and development of AI in AV technology.
arXiv Detail & Related papers (2024-02-21T08:29:42Z)
- POLARIS: A framework to guide the development of Trustworthy AI systems [3.02243271391691]
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI, designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z)
- FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare [73.78776682247187]
Concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI.
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
arXiv Detail & Related papers (2023-08-11T10:49:05Z)
- RE-centric Recommendations for the Development of Trustworthy(er) Autonomous Systems [4.268504966623082]
Complying with the EU AI Act (AIA) guidelines while developing and implementing AI systems will soon be mandatory within the EU.
Practitioners lack actionable instructions to operationalise ethics during AI systems development.
A literature review of different ethical guidelines revealed inconsistencies in the principles addressed and the terminology used to describe them.
arXiv Detail & Related papers (2023-05-29T11:57:07Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has no set of benchmarks nor commonly accepted way for measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- A Framework for Ethical AI at the United Nations [0.0]
This paper aims to provide an overview of the ethical concerns in artificial intelligence (AI) and the framework that is needed to mitigate those risks.
It suggests a practical path to ensure the development and use of AI at the United Nations (UN) aligns with our ethical values.
arXiv Detail & Related papers (2021-04-09T23:44:37Z)
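As a reminder for the empirical-risk-minimization entry above, the components that trustworthiness requirements would be translated into are those of the generic ERM objective. This is the standard textbook formulation, not one taken from that paper: the hypothesis class $\mathcal{F}$, the loss $\ell$, and an optional regularizer $\Omega$ with weight $\lambda$ are the design choices in question.

```latex
\hat{f} \;\in\; \arg\min_{f \in \mathcal{F}}
  \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(f(x_i), y_i\bigr) \;+\; \lambda\, \Omega(f)
```

Requirements such as robustness or fairness can then be discussed component by component, e.g. via the choice of loss, the restriction of the hypothesis class, or the regularizer.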