Explainability via Interactivity? Supporting Nonexperts' Sensemaking of
Pretrained CNN by Interacting with Their Daily Surroundings
- URL: http://arxiv.org/abs/2107.01996v1
- Date: Mon, 31 May 2021 19:22:53 GMT
- Title: Explainability via Interactivity? Supporting Nonexperts' Sensemaking of
Pretrained CNN by Interacting with Their Daily Surroundings
- Authors: Chao Wang, Pengcheng An
- Abstract summary: We present a mobile application that supports nonexperts in interactively making sense of Convolutional Neural Networks (CNNs).
It allows users to play with a pretrained CNN by taking pictures of their surrounding objects.
We use an up-to-date XAI technique (Class Activation Map) to intuitively visualize the model's decision.
- Score: 7.455054065013047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current research on Explainable AI (XAI) focuses heavily on expert users
(data scientists or AI developers). However, it is increasingly argued that AI
should be made more understandable to nonexperts, who are expected to leverage
AI techniques but have limited knowledge about AI. We present a mobile
application that supports nonexperts in interactively making sense of
Convolutional Neural Networks (CNNs); it allows users to play with a pretrained
CNN by taking pictures of objects in their surroundings. We use an up-to-date
XAI technique (Class Activation Map) to intuitively visualize the model's
decision (the most important image regions that lead to a certain result).
Deployed in a university course, this playful learning tool was found to
support design students in gaining a vivid understanding of the capabilities
and limitations of pretrained CNNs in real-world environments. Concrete
examples of the students' playful explorations are reported to characterize
sensemaking processes reflecting different depths of thought.
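The Class Activation Map (CAM) technique the abstract references highlights which image regions drove a classification by weighting the last convolutional layer's feature maps with the classifier weights of the predicted class. The following is a minimal illustrative numpy sketch of that computation, not the authors' implementation; the function name, toy shapes, and data are hypothetical.

```python
import numpy as np

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Compute a Class Activation Map for one class.

    feature_maps: (C, H, W) activations from the last conv layer
    fc_weights:   (num_classes, C) weights of the linear layer that
                  follows global average pooling
    class_idx:    index of the class to explain
    """
    w = fc_weights[class_idx]                    # (C,)
    cam = np.tensordot(w, feature_maps, axes=1)  # weighted sum -> (H, W)
    cam = np.maximum(cam, 0)                     # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                    # normalize to [0, 1]
    return cam

# Toy example: 3 channels on a 4x4 grid, 2 classes.
fm = np.zeros((3, 4, 4))
fm[0, 1, 1] = 1.0  # channel 0 fires at spatial position (1, 1)
w = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
cam = class_activation_map(fm, w, class_idx=0)
print(cam[1, 1])  # 1.0 -- the region most responsible for the class-0 score
```

In the app described above, a map like this would be upsampled to the photo's resolution and overlaid as a heatmap, so users can see which parts of the photographed object the CNN attended to.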
Related papers
- AI Readiness in Healthcare through Storytelling XAI [0.5120567378386615]
We develop an approach that combines multi-task distillation with interpretability techniques to enable audience-centric explainability.
Our methods increase the trust of both the domain experts and the machine learning experts to enable responsible AI.
arXiv Detail & Related papers (2024-10-24T13:30:18Z)
- Towards Reconciling Usability and Usefulness of Explainable AI Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Toward Transparent AI: A Survey on Interpreting the Inner Structures of Deep Neural Networks [8.445831718854153]
We review over 300 works with a focus on inner interpretability tools.
We introduce a taxonomy that classifies methods by what part of the network they help to explain.
We argue that the status quo in interpretability research is largely unproductive.
arXiv Detail & Related papers (2022-07-27T01:59:13Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions designed for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Empowering Things with Intelligence: A Survey of the Progress, Challenges, and Opportunities in Artificial Intelligence of Things [98.10037444792444]
We show how AI can empower the IoT to make it faster, smarter, greener, and safer.
First, we present progress in AI research for IoT from four perspectives: perceiving, learning, reasoning, and behaving.
Finally, we summarize some promising applications of AIoT that are likely to profoundly reshape our world.
arXiv Detail & Related papers (2020-11-17T13:14:28Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization [23.369550871258543]
We present CNN Explainer, an interactive visualization tool designed for non-experts to learn about and examine convolutional neural networks (CNNs).
Our tool addresses key challenges that novices face while learning about CNNs, which we identify from interviews with instructors and a survey with past students.
CNN Explainer helps users more easily understand the inner workings of CNNs, and is engaging and enjoyable to use.
arXiv Detail & Related papers (2020-04-30T17:49:44Z)
- Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense [142.53911271465344]
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)
- Explainable Active Learning (XAL): An Empirical Study of How Local Explanations Impact Annotator Experience [76.9910678786031]
We propose a novel paradigm of explainable active learning (XAL), by introducing techniques from the recently surging field of explainable AI (XAI) into an Active Learning setting.
Our study shows the benefits of AI explanations as interfaces for machine teaching (supporting trust calibration and enabling rich forms of teaching feedback) as well as potential drawbacks (an anchoring effect with the model's judgment, and cognitive workload).
arXiv Detail & Related papers (2020-01-24T22:52:18Z)
- Questioning the AI: Informing Design Practices for Explainable AI User Experiences [33.81809180549226]
A surge of interest in explainable AI (XAI) has led to a vast collection of algorithmic work on the topic.
We seek to identify gaps between the current XAI algorithmic work and practices to create explainable AI products.
We develop an algorithm-informed XAI question bank in which user needs for explainability are represented.
arXiv Detail & Related papers (2020-01-08T12:34:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.