Certification Labels for Trustworthy AI: Insights From an Empirical Mixed-Method Study
- URL: http://arxiv.org/abs/2305.18307v1
- Date: Mon, 15 May 2023 09:51:10 GMT
- Authors: Nicolas Scharowski, Michaela Benk, Swen J. Kühne, Léane Wettstein, Florian Brühlmann
- Abstract summary: This study empirically investigated certification labels as a promising means of communicating to end-users that an AI has been audited and deemed trustworthy.
We demonstrate that labels can significantly increase end-users' trust and willingness to use AI.
However, end-users' preferences for certification labels and their effect on trust and willingness to use AI were more pronounced in high-stakes scenarios.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Auditing plays a pivotal role in the development of trustworthy AI. However,
current research primarily focuses on creating auditable AI documentation,
which is intended for regulators and experts rather than end-users affected by
AI decisions. How to communicate to members of the public that an AI has been
audited and considered trustworthy remains an open challenge. This study
empirically investigated certification labels as a promising solution. Through
interviews (N = 12) and a census-representative survey (N = 302), we
investigated end-users' attitudes toward certification labels and their
effectiveness in communicating trustworthiness in low- and high-stakes AI
scenarios. Based on the survey results, we demonstrate that labels can
significantly increase end-users' trust and willingness to use AI in both low-
and high-stakes scenarios. However, end-users' preferences for certification
labels and their effect on trust and willingness to use AI were more pronounced
in high-stakes scenarios. Qualitative content analysis of the interviews
revealed opportunities and limitations of certification labels, as well as
facilitators and inhibitors for the effective use of labels in the context of
AI. For example, while certification labels can mitigate data-related concerns
expressed by end-users (e.g., privacy and data protection), other concerns
(e.g., model performance) are more challenging to address. Our study provides
valuable insights and recommendations for designing and implementing
certification labels as a promising constituent within the trustworthy AI
ecosystem.
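
As an illustration only: the survey analysis described above is a within-subjects comparison of trust with and without a certification label, crossed with scenario stakes. A minimal sketch of such an analysis as a 2x2 repeated-measures ANOVA follows, using simulated data; the variable names, rating scale, and effect sizes are assumptions for demonstration, not the authors' materials or results.

import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)
n = 302  # sample size reported in the abstract

# Simulate one trust rating (7-point scale, an assumption) per participant
# for each of the four within-subjects cells: label absent/present x low/high stakes.
rows = []
for pid in range(n):
    baseline = rng.normal(4.0, 1.0)  # assumed participant-level baseline trust
    for stakes, label_effect in (("low", 0.4), ("high", 0.8)):  # assumed effect sizes
        for label in ("absent", "present"):
            trust = baseline + (label_effect if label == "present" else 0.0)
            trust += rng.normal(0.0, 0.5)  # measurement noise
            rows.append({"pid": pid, "stakes": stakes, "label": label,
                         "trust": float(np.clip(trust, 1.0, 7.0))})
df = pd.DataFrame(rows)

# 2x2 repeated-measures ANOVA: a main effect of label corresponds to labels
# increasing trust; a label-by-stakes interaction corresponds to the stronger
# label effect in high-stakes scenarios reported in the abstract.
print(AnovaRM(df, depvar="trust", subject="pid", within=["label", "stakes"]).fit())
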
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Ethical AI in Retail: Consumer Privacy and Fairness [0.0]
The adoption of artificial intelligence (AI) in retail has significantly transformed the industry, enabling more personalized services and efficient operations.
However, the rapid implementation of AI technologies raises ethical concerns, particularly regarding consumer privacy and fairness.
This study aims to analyze the ethical challenges of AI applications in retail, explore ways retailers can implement AI technologies ethically while remaining competitive, and provide recommendations on ethical AI practices.
arXiv Detail & Related papers (2024-10-20T12:00:14Z)
- Guideline for Trustworthy Artificial Intelligence -- AI Assessment Catalog [0.0]
It is clear that AI and the business models built on it can only reach their full potential if AI applications are developed according to high-quality standards.
The issue of the trustworthiness of AI applications is crucial and is the subject of numerous major publications.
This AI assessment catalog addresses exactly this point and is intended for two target groups.
arXiv Detail & Related papers (2023-06-20T08:07:18Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Designing for Responsible Trust in AI Systems: A Communication Perspective [56.80107647520364]
We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH.
We highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users.
We propose a checklist of requirements to help technology creators identify appropriate cues to use.
arXiv Detail & Related papers (2022-04-29T00:14:33Z)
- Metaethical Perspectives on 'Benchmarking' AI Ethics [81.65697003067841]
Benchmarks are seen as the cornerstone for measuring technical progress in Artificial Intelligence (AI) research.
An increasingly prominent research area in AI is ethics, which currently has neither a set of benchmarks nor a commonly accepted way of measuring the 'ethicality' of an AI system.
We argue that it makes more sense to talk about 'values' rather than 'ethics' when considering the possible actions of present and future AI systems.
arXiv Detail & Related papers (2022-04-11T14:36:39Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Who Decides if AI is Fair? The Labels Problem in Algorithmic Auditing [0.0]
We show that the fidelity of the ground-truth data can lead to spurious differences in the performance of ASRs between urban and rural populations.
Our findings highlight how trade-offs between label quality and data annotation costs can complicate algorithmic audits in practice.
arXiv Detail & Related papers (2021-11-16T19:00:03Z)
- Multisource AI Scorecard Table for System Evaluation [3.74397577716445]
The paper describes a Multisource AI Scorecard Table (MAST) that provides the developer and user of an artificial intelligence (AI)/machine learning (ML) system with a standard checklist.
The paper explores how the analytic tradecraft standards outlined in Intelligence Community Directive (ICD) 203 can provide a framework for assessing the performance of an AI system.
arXiv Detail & Related papers (2021-02-08T03:37:40Z)
- Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, the inability to explain decisions, and bias in the training data are among the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z)
- Estimating the Brittleness of AI: Safety Integrity Levels and the Need for Testing Out-Of-Distribution Performance [0.0]
Test, Evaluation, Verification, and Validation for Artificial Intelligence (AI) is a challenge that threatens to limit the economic and societal rewards that AI researchers have devoted themselves to producing.
This paper argues that neither of these criteria can be assured for Deep Neural Networks.
arXiv Detail & Related papers (2020-09-02T03:33:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.