Explainability Case Studies
- URL: http://arxiv.org/abs/2009.00246v2
- Date: Sat, 3 Oct 2020 03:46:48 GMT
- Title: Explainability Case Studies
- Authors: Ben Zevenbergen and Allison Woodruff and Patrick Gage Kelley
- Abstract summary: Explainability is one of the key ethical concepts in the design of AI systems.
We present a set of case studies of a hypothetical AI-enabled product, which serves as a pedagogical tool to empower product designers, developers, students, and educators to develop a holistic explainability strategy for their own products.
- Score: 2.2872132127037963
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainability is one of the key ethical concepts in the design of AI systems. However, attempts to operationalize this concept thus far have tended to focus on approaches such as new software for model interpretability or guidelines with checklists. Rarely do existing tools and guidance incentivize the designers of AI systems to think critically and strategically about the role of explanations in their systems. We present a set of case studies of a hypothetical AI-enabled product, which serves as a pedagogical tool to empower product designers, developers, students, and educators to develop a holistic explainability strategy for their own products.
Related papers
- Data Analysis in the Era of Generative AI [56.44807642944589]
This paper explores the potential of AI-powered tools to reshape data analysis, focusing on design considerations and challenges.
We explore how the emergence of large language and multimodal models offers new opportunities to enhance various stages of data analysis workflow.
We then examine human-centered design principles that facilitate intuitive interactions, build user trust, and streamline the AI-assisted analysis workflow across multiple apps.
arXiv Detail & Related papers (2024-09-27T06:31:03Z)
- Enhancing LLM-Based Feedback: Insights from Intelligent Tutoring Systems and the Learning Sciences [0.0]
This work advocates careful and caring AIED research by reviewing previous work on feedback generation in intelligent tutoring systems (ITS).
The main contribution of this paper is an argument for applying more cautious, theoretically grounded methods to feedback generation in the era of generative AI.
arXiv Detail & Related papers (2024-05-07T20:09:18Z)
- Opening the Black-Box: A Systematic Review on Explainable AI in Remote Sensing [51.524108608250074]
Black-box machine learning approaches have become a dominant modeling paradigm for knowledge extraction in remote sensing.
We perform a systematic review to identify the key trends in the field and shed light on novel explainable AI approaches.
We also give a detailed outlook on the challenges and promising research directions.
arXiv Detail & Related papers (2024-02-21T13:19:58Z)
- Designing explainable artificial intelligence with active inference: A framework for transparent introspection and decision-making [0.0]
We discuss how active inference can be leveraged to design explainable AI systems.
We propose an architecture for explainable AI systems using active inference.
arXiv Detail & Related papers (2023-06-06T21:38:09Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic approaches to explanation that support better user understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
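For readers unfamiliar with counterfactual explanations, the sketch below shows one common formulation (a Wachter-style optimization, not the experimental framework this paper proposes); the logistic model, its weights, the target, and all hyperparameters are illustrative assumptions.
```python
import numpy as np

# Hypothetical fixed logistic "black box" f(x) = sigmoid(w.x + b); in practice
# this would be the trained classifier whose decision we want to explain.
w = np.array([1.5, -2.0, 0.5])
b = -0.3

def f(x):
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=0.9, lam=5.0, lr=0.05, steps=500):
    """Minimize lam*(f(x') - target)**2 + ||x' - x||**2 by gradient descent."""
    xc = x.copy()
    for _ in range(steps):
        p = f(xc)
        # d/dxc of the loss: chain rule through the sigmoid, plus distance term
        grad = 2.0 * lam * (p - target) * p * (1.0 - p) * w + 2.0 * (xc - x)
        xc -= lr * grad
    return xc

x = np.array([0.2, 0.8, 0.1])   # original instance; f(x) is well below 0.5
xc = counterfactual(x)
print(f"f(x)={f(x):.3f} -> f(x')={f(xc):.3f}, change={np.round(xc - x, 3)}")
```
The resulting feature change is the explanation: "had these inputs differed by this much, the prediction would have crossed the target", which is the kind of artifact whose comprehension such user studies evaluate.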
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Problem examination for AI methods in product design [4.020523898765404]
This paper first clarifies important terms and concepts for the interdisciplinary domain of AI methods in product design.
A key contribution is a new classification of design problems along four characteristics: decomposability, interdependencies, innovation, and creativity.
Early mappings of these concepts to AI solutions are sketched and verified using design examples.
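To make this taxonomy concrete, a minimal sketch of a data structure encoding the four characteristics follows; the `DesignProblem` class, its 0-2 rating scale, the heuristic, and the example problem are hypothetical illustrations, not taken from the paper.
```python
from dataclasses import dataclass

@dataclass
class DesignProblem:
    """Rates a design problem on the four characteristics (0 = low, 2 = high).

    The class, scale, and heuristic below are illustrative assumptions only.
    """
    name: str
    decomposability: int
    interdependencies: int
    innovation: int
    creativity: int

    def ai_suitability_hint(self) -> str:
        # Crude heuristic: highly decomposable, low-creativity problems tend
        # to map more directly onto existing AI methods.
        if self.decomposability >= 1 and self.creativity <= 1:
            return "candidate for existing AI methods"
        return "likely needs human-led or hybrid approaches"

problem = DesignProblem("gear housing layout", decomposability=2,
                        interdependencies=1, innovation=0, creativity=0)
print(problem.name, "->", problem.ai_suitability_hint())
```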
arXiv Detail & Related papers (2022-01-19T15:19:29Z)
- A Critical Review of Inductive Logic Programming Techniques for Explainable AI [9.028858411921906]
Inductive Logic Programming (ILP) is a subfield of symbolic artificial intelligence.
ILP generates explainable first-order clausal theories from examples and background knowledge.
Existing ILP systems often have a vast solution space, and the induced solutions are very sensitive to noise and disturbances.
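As a loose illustration of the generate-and-test flavor of rule induction (a propositional toy, not the first-order clausal learning real ILP systems perform), consider the sketch below; the predicates and examples are made up.
```python
from itertools import combinations

# Toy examples: (attribute values, label). Flipping a single label changes
# the induced rule, illustrating the sensitivity to noise noted above.
examples = [
    ({"has_wings": True,  "lays_eggs": True},  True),
    ({"has_wings": True,  "lays_eggs": False}, True),
    ({"has_wings": False, "lays_eggs": True},  False),
    ({"has_wings": False, "lays_eggs": False}, False),
]
attributes = ["has_wings", "lays_eggs"]

def covers(rule, facts):
    return all(facts[a] for a in rule)

# Generate-and-test: enumerate conjunctive rules, keep the most accurate.
best_rule, best_acc = None, -1.0
for size in (1, 2):
    for rule in combinations(attributes, size):
        correct = sum(covers(rule, f) == y for f, y in examples)
        acc = correct / len(examples)
        if acc > best_acc:
            best_rule, best_acc = rule, acc

print(f"flies(X) :- {', '.join(best_rule)}.  (accuracy {best_acc:.0%})")
```
The induced rule is itself the explanation, which is what makes the symbolic approach attractive for explainable AI.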
arXiv Detail & Related papers (2021-12-31T06:34:32Z)
- AI Explainability 360: Impact and Design [120.95633114160688]
In 2019, we created AI Explainability 360 (Arya et al. 2020), an open source software toolkit featuring ten diverse and state-of-the-art explainability methods.
This paper examines the impact of the toolkit with several case studies, statistics, and community feedback.
The paper also describes the flexible design of the toolkit, examples of its use, and the significant educational material and documentation available to its users.
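To give a feel for the kind of post-hoc method such toolkits bundle, here is a minimal permutation-importance sketch in plain NumPy; it is a generic illustration under our own assumptions and does not use the AI Explainability 360 API.
```python
import numpy as np

def permutation_importance(predict, X, y, score, n_repeats=10, seed=0):
    """Global post-hoc explanation: mean score drop when a feature is shuffled."""
    rng = np.random.default_rng(seed)
    baseline = score(y, predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # break the feature-target link
            drops.append(baseline - score(y, predict(Xp)))
        importances[j] = np.mean(drops)
    return importances

# Usage on a hypothetical model where feature 0 fully determines the label.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)
predict = lambda X: (X[:, 0] > 0).astype(int)
accuracy = lambda y, p: np.mean(y == p)
print(permutation_importance(predict, X, y, accuracy))  # feature 0 dominates
```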
arXiv Detail & Related papers (2021-09-24T19:17:09Z)
- An interdisciplinary conceptual study of Artificial Intelligence (AI) for helping benefit-risk assessment practices: Towards a comprehensive qualification matrix of AI programs and devices (pre-print 2020) [55.41644538483948]
This paper proposes a comprehensive analysis of existing concepts coming from different disciplines tackling the notion of intelligence.
The aim is to identify shared notions or discrepancies to consider for qualifying AI systems.
arXiv Detail & Related papers (2021-05-07T12:01:31Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.