PADTHAI-MM: A Principled Approach for Designing Trustable,
Human-centered AI systems using the MAST Methodology
- URL: http://arxiv.org/abs/2401.13850v1
- Date: Wed, 24 Jan 2024 23:15:44 GMT
- Title: PADTHAI-MM: A Principled Approach for Designing Trustable,
Human-centered AI systems using the MAST Methodology
- Authors: Nayoung Kim, Myke C. Cohen, Yang Ba, Anna Pan, Shawaiz Bhatti, Pouria
Salehi, James Sung, Erik Blasch, Michelle V. Mancenido, Erin K. Chiou
- Abstract summary: The Multisource AI Scorecard Table (MAST), a checklist rating system, addresses the lack of practical guidance for designing and evaluating AI-enabled decision support systems.
We propose the Principled Approach for Designing Trustable Human-centered AI systems using MAST methodology.
We show that MAST-guided design can improve trust perceptions, and that MAST criteria can be linked to performance, process, and purpose information.
- Score: 5.38932801848643
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Designing for AI trustworthiness is challenging, with a lack of practical
guidance despite extensive literature on trust. The Multisource AI Scorecard
Table (MAST), a checklist rating system, addresses this gap in designing and
evaluating AI-enabled decision support systems. We propose the Principled
Approach for Designing Trustable Human-centered AI systems using MAST
Methodology (PADTHAI-MM), a nine-step framework that we demonstrate through the
iterative design of a text analysis platform called the REporting Assistant for
Defense and Intelligence Tasks (READIT). We designed two versions of READIT,
a high-MAST version including AI context and explanations, and a low-MAST
version resembling a "black box" system. Participant feedback and
state-of-the-art AI knowledge were integrated into the design process, leading
to a redesigned prototype that participants tested in an intelligence
reporting task. Results show that
MAST-guided design can improve trust perceptions, and that MAST criteria can be
linked to performance, process, and purpose information, providing a practical
and theory-informed basis for AI system design.
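
For a concrete sense of what a checklist rating system like MAST implies computationally, below is a minimal sketch of aggregating per-criterion ratings into a single scorecard value. The criterion names and the 1-to-3 ordinal scale are illustrative assumptions for this sketch only, not the published MAST instrument.

```python
# Minimal sketch of a MAST-style checklist aggregation.
# ASSUMPTIONS: the criterion names and the 1-3 ordinal scale below are
# illustrative only; the paper defines the actual MAST instrument.
from dataclasses import dataclass

@dataclass
class CriterionRating:
    name: str   # e.g., "sourcing" or "uncertainty" (hypothetical labels)
    score: int  # assumed ordinal rating: 1 (low) to 3 (high)

def mast_total(ratings: list) -> float:
    """Average per-criterion ratings into a single scorecard value."""
    if not ratings:
        raise ValueError("at least one criterion rating is required")
    for r in ratings:
        if not 1 <= r.score <= 3:
            raise ValueError(f"{r.name}: score must be in [1, 3]")
    return sum(r.score for r in ratings) / len(ratings)

# Comparing hypothetical "high-MAST" and "low-MAST" design variants:
high = [CriterionRating("sourcing", 3), CriterionRating("uncertainty", 3)]
low = [CriterionRating("sourcing", 1), CriterionRating("uncertainty", 2)]
print(mast_total(high), mast_total(low))  # 3.0 1.5
```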
Related papers
- Engineering Trustworthy AI: A Developer Guide for Empirical Risk Minimization [53.80919781981027]
Key requirements for trustworthy AI can be translated into design choices for the components of empirical risk minimization.
We hope to provide actionable guidance for building AI systems that meet emerging standards for trustworthiness of AI.
arXiv Detail & Related papers (2024-10-25T07:53:32Z)
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- Combining AI Control Systems and Human Decision Support via Robustness and Criticality [53.10194953873209]
We extend a methodology for adversarial explanations (AE) to state-of-the-art reinforcement learning frameworks.
We show that the learned AI control system demonstrates robustness against adversarial tampering.
In a training / learning framework, this technology can improve both the AI's decisions and explanations through human interaction.
arXiv Detail & Related papers (2024-07-03T15:38:57Z)
- The AI-DEC: A Card-based Design Method for User-centered AI Explanations [20.658833770179903]
We develop a design method, called AI-DEC, that defines four dimensions of AI explanations.
We evaluate this method through co-design sessions with workers in healthcare, finance, and management industries.
We discuss the implications of using the AI-DEC for the user-centered design of AI explanations in real-world systems.
arXiv Detail & Related papers (2024-05-26T22:18:38Z)
- Explainable AI for Safe and Trustworthy Autonomous Driving: A Systematic Review [12.38351931894004]
We present the first systematic literature review of explainable methods for safe and trustworthy autonomous driving.
We identify five key contributions of XAI for safe and trustworthy AI in AD, which are interpretable design, interpretable surrogate models, interpretable monitoring, auxiliary explanations, and interpretable validation.
We propose a modular framework called SafeX to integrate these contributions, enabling explanation delivery to users while simultaneously ensuring the safety of AI models.
arXiv Detail & Related papers (2024-02-08T09:08:44Z)
- Evaluating Trustworthiness of AI-Enabled Decision Support Systems: Validation of the Multisource AI Scorecard Table (MAST) [10.983659980278926]
The Multisource AI Scorecard Table (MAST) is a checklist tool to inform the design and evaluation of trustworthy AI systems.
We evaluate whether MAST is associated with people's trust perceptions in AI-enabled decision support systems.
arXiv Detail & Related papers (2023-11-29T19:34:15Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Certifiable Artificial Intelligence Through Data Fusion [7.103626867766158]
This paper reviews and raises concerns about adopting, fielding, and maintaining artificial intelligence (AI) systems.
A notional use case is presented with image data fusion to support AI object recognition certifiability considering precision versus distance.
arXiv Detail & Related papers (2021-11-03T03:34:19Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- Multisource AI Scorecard Table for System Evaluation [3.74397577716445]
The paper describes a Multisource AI Scorecard Table (MAST) that provides the developer and user of an artificial intelligence (AI)/machine learning (ML) system with a standard checklist.
The paper explores how the analytic tradecraft standards outlined in Intelligence Community Directive (ICD) 203 can provide a framework for assessing the performance of an AI system.
arXiv Detail & Related papers (2021-02-08T03:37:40Z)
- Towards an Interface Description Template for AI-enabled Systems [77.34726150561087]
Reuse is a common system architecture approach that seeks to instantiate a system architecture with existing components.
There is currently no framework that guides the selection of the information needed to assess a component's portability to a system other than the one for which it was originally designed.
We present ongoing work on establishing an interface description template that captures the main information of an AI-enabled component.
arXiv Detail & Related papers (2020-07-13T20:30:26Z)