Guideline for Trustworthy Artificial Intelligence -- AI Assessment
Catalog
- URL: http://arxiv.org/abs/2307.03681v1
- Date: Tue, 20 Jun 2023 08:07:18 GMT
- Title: Guideline for Trustworthy Artificial Intelligence -- AI Assessment
Catalog
- Authors: Maximilian Poretschkin (1 and 2), Anna Schmitz (1), Maram Akila (1),
Linara Adilova (1), Daniel Becker (1), Armin B. Cremers (2), Dirk Hecker (1),
Sebastian Houben (1), Michael Mock (1 and 2), Julia Rosenzweig (1), Joachim
Sicking (1), Elena Schulz (1), Angelika Voss (1), Stefan Wrobel (1 and 2)
((1) Fraunhofer Institute for Intelligent Analysis and Information Systems
IAIS, Sankt Augustin, Germany, (2) Department of Computer Science, University
of Bonn, Bonn, Germany)
- Abstract summary: It is clear that AI and business models based on it can only reach their full potential if AI applications are developed according to high quality standards.
The issue of the trustworthiness of AI applications is crucial and is the subject of numerous major publications.
This AI assessment catalog addresses exactly this point and is intended for two target groups.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial Intelligence (AI) has made impressive progress in recent years and
represents a key technology that has a crucial impact on the economy and
society. However, it is clear that AI and business models based on it can only
reach their full potential if AI applications are developed according to high
quality standards and are effectively protected against new AI risks. For
instance, AI bears the risk of unfair treatment of individuals when processing
personal data, e.g., to support credit lending or staff recruitment decisions.
The emergence of these new risks is closely linked to the fact that the
behavior of AI applications, particularly those based on Machine Learning (ML),
is essentially learned from large volumes of data and is not predetermined by
fixed programmed rules.
Thus, the issue of the trustworthiness of AI applications is crucial and is
the subject of numerous major publications by stakeholders in politics,
business and society. In addition, there is broad agreement that the
requirements for trustworthy AI, which are often described only in abstract
terms, must now be made concrete and tangible. One challenge to overcome here relates to
the fact that the specific quality criteria for an AI application depend
heavily on the application context and possible measures to fulfill them in
turn depend heavily on the AI technology used. Lastly, practical assessment
procedures are needed to evaluate whether specific AI applications have been
developed according to adequate quality standards. This AI assessment catalog
addresses exactly this point and is intended for two target groups: Firstly, it
provides developers with a guideline for systematically making their AI
applications trustworthy. Secondly, it guides assessors and auditors on how to
examine AI applications for trustworthiness in a structured way.
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems [2.444630714797783]
We review and discuss the intricacies of AI biases, definitions, methods of detection and mitigation, and metrics for evaluating bias.
We also discuss open challenges with regard to the trustworthiness and widespread application of AI across diverse domains of human-centric decision making.
arXiv Detail & Related papers (2024-08-28T06:04:25Z)
- Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits [54.648819983899614]
General purpose AI seems to have lowered the barriers for the public to use AI and harness its power.
We introduce PARTICIP-AI, a framework for laypeople to speculate and assess AI use cases and their impacts.
arXiv Detail & Related papers (2024-03-21T19:12:37Z)
- POLARIS: A framework to guide the development of Trustworthy AI systems [3.02243271391691]
There is a significant gap between high-level AI ethics principles and low-level concrete practices for AI professionals.
We develop a novel holistic framework for Trustworthy AI - designed to bridge the gap between theory and practice.
Our goal is to empower AI professionals to confidently navigate the ethical dimensions of Trustworthy AI.
arXiv Detail & Related papers (2024-02-08T01:05:16Z)
- Human-Centric Multimodal Machine Learning: Recent Advances and Testbed on AI-based Recruitment [66.91538273487379]
There is a certain consensus about the need to develop AI applications with a Human-Centric approach.
Human-Centric Machine Learning needs to be developed based on four main requirements: (i) utility and social good; (ii) privacy and data ownership; (iii) transparency and accountability; and (iv) fairness in AI-driven decision-making processes.
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
arXiv Detail & Related papers (2023-02-13T16:44:44Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Opening the Software Engineering Toolbox for the Assessment of Trustworthy AI [17.910325223647362]
We argue for the application of software engineering and testing practices for the assessment of trustworthy AI.
We connect these practices to the seven key requirements for trustworthy AI defined by the European Commission's High-Level Expert Group on AI.
arXiv Detail & Related papers (2020-07-14T08:16:15Z)
- Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims [59.64274607533249]
AI developers need to make verifiable claims to which they can be held accountable.
This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems.
We analyze ten mechanisms for this purpose--spanning institutions, software, and hardware--and make recommendations aimed at implementing, exploring, or improving those mechanisms.
arXiv Detail & Related papers (2020-04-15T17:15:35Z)
- AAAI FSS-19: Human-Centered AI: Trustworthiness of AI Models and Data Proceedings [8.445274192818825]
It is crucial for predictive models to be uncertainty-aware and yield trustworthy predictions.
The focus of this symposium was on AI systems to improve data quality and technical robustness and safety.
Submissions from broadly defined areas also discussed approaches addressing requirements such as explainable models, human trust, and ethical aspects of AI.
arXiv Detail & Related papers (2020-01-15T15:30:29Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.