I-CEE: Tailoring Explanations of Image Classification Models to User
Expertise
- URL: http://arxiv.org/abs/2312.12102v2
- Date: Wed, 10 Jan 2024 15:22:23 GMT
- Title: I-CEE: Tailoring Explanations of Image Classification Models to User
Expertise
- Authors: Yao Rong, Peizhu Qian, Vaibhav Unhelkar, Enkelejda Kasneci
- Abstract summary: We present I-CEE, a framework that provides Image Classification Explanations tailored to User Expertise.
I-CEE models the informativeness of the example images to depend on user expertise, resulting in different examples for different users.
Experiments with simulated users show that I-CEE improves users' ability to accurately predict the model's decisions.
- Score: 13.293968260458962
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effectively explaining decisions of black-box machine learning models is
critical to responsible deployment of AI systems that rely on them. Recognizing
their importance, the field of explainable AI (XAI) provides several techniques
to generate these explanations. Yet, there is relatively little emphasis on the
user (the explainee) in this growing body of work and most XAI techniques
generate "one-size-fits-all" explanations. To bridge this gap and achieve a
step closer towards human-centered XAI, we present I-CEE, a framework that
provides Image Classification Explanations tailored to User Expertise. Informed
by existing work, I-CEE explains the decisions of image classification models
by providing the user with an informative subset of training data (i.e.,
example images), corresponding local explanations, and model decisions.
However, unlike prior work, I-CEE models the informativeness of the example
images to depend on user expertise, resulting in different examples for
different users. We posit that by tailoring the example set to user expertise,
I-CEE can better facilitate users' understanding and simulatability of the
model. To evaluate our approach, we conduct detailed experiments in both
simulation and with human participants (N = 100) on multiple datasets.
Experiments with simulated users show that I-CEE improves users' ability to
accurately predict the model's decisions (simulatability) compared to
baselines, providing promising preliminary results. Experiments with human
participants demonstrate that our method significantly improves user
simulatability accuracy, highlighting the importance of human-centered XAI.
Related papers
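The core idea of modeling example informativeness as a function of user expertise can be sketched as follows. This is an illustrative assumption only, not the paper's actual algorithm: the expertise representation, scoring function, and all names here are hypothetical.

```python
# Hypothetical sketch of the idea behind I-CEE: choose the training
# examples most informative *for a given user*, rather than a single
# one-size-fits-all set. The per-class expertise model (0..1) and the
# scoring function are illustrative assumptions, not the paper's method.

def informativeness(example, user_expertise):
    """Score an example higher when the user knows less about its class,
    weighted by how confidently the model classifies it."""
    return example["model_confidence"] * (1.0 - user_expertise[example["label"]])

def select_examples(training_data, user_expertise, k=3):
    """Greedily pick the k most informative examples for this user."""
    ranked = sorted(
        training_data,
        key=lambda ex: informativeness(ex, user_expertise),
        reverse=True,
    )
    return ranked[:k]

# Two users with different expertise receive different example sets.
data = [
    {"label": "cat", "model_confidence": 0.9},
    {"label": "dog", "model_confidence": 0.8},
    {"label": "fox", "model_confidence": 0.7},
]
novice = select_examples(data, {"cat": 0.1, "dog": 0.1, "fox": 0.1}, k=2)
expert = select_examples(data, {"cat": 0.9, "dog": 0.9, "fox": 0.1}, k=2)
```

Under this toy scoring, the novice is shown high-confidence examples from every class, while the expert (who already knows cats and dogs) is steered toward the unfamiliar class first, which is the tailoring effect the abstract describes.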
- Interaction as Explanation: A User Interaction-based Method for Explaining Image Classification Models [1.3597551064547502]
In computer vision, explainable AI (xAI) methods seek to mitigate the 'black-box' problem.
Traditional xAI methods concentrate on visualizing input features that influence model predictions.
We present an interaction-based xAI method that enhances user comprehension of image classification models through their interaction.
arXiv Detail & Related papers (2024-04-15T14:26:00Z)
- Explain To Decide: A Human-Centric Review on the Role of Explainable Artificial Intelligence in AI-assisted Decision Making [1.0878040851638]
Machine learning models are error-prone and cannot be used autonomously.
Explainable Artificial Intelligence (XAI) aids end-user understanding of the model.
This paper surveyed the recent empirical studies on XAI's impact on human-AI decision-making.
arXiv Detail & Related papers (2023-12-11T22:35:21Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Learning Transferable Pedestrian Representation from Multimodal Information Supervision [174.5150760804929]
VAL-PAT is a novel framework that learns transferable representations to enhance various pedestrian analysis tasks with multimodal information.
We first perform pre-training on LUPerson-TA dataset, where each image contains text and attribute annotations.
We then transfer the learned representations to various downstream tasks, including person reID, person attribute recognition and text-based person search.
arXiv Detail & Related papers (2023-04-12T01:20:58Z)
- Let's Go to the Alien Zoo: Introducing an Experimental Framework to Study Usability of Counterfactual Explanations for Machine Learning [6.883906273999368]
Counterfactual explanations (CFEs) have gained traction as a psychologically grounded approach to generate post-hoc explanations.
We introduce the Alien Zoo, an engaging, web-based and game-inspired experimental framework.
As a proof of concept, we demonstrate the practical efficacy and feasibility of this approach in a user study.
arXiv Detail & Related papers (2022-05-06T17:57:05Z)
- Beyond Explaining: Opportunities and Challenges of XAI-Based Model Improvement [75.00655434905417]
Explainable Artificial Intelligence (XAI) is an emerging research field bringing transparency to highly complex machine learning (ML) models.
This paper offers a comprehensive overview over techniques that apply XAI practically for improving various properties of ML models.
We show empirically through experiments on toy and realistic settings how explanations can help improve properties such as model generalization ability or reasoning.
arXiv Detail & Related papers (2022-03-15T15:44:28Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations can help improve user experience and reveal system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models [36.50754934147469]
We exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models.
We design DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets.
arXiv Detail & Related papers (2020-08-19T09:44:47Z)
- A Competence-aware Curriculum for Visual Concepts Learning via Question Answering [95.35905804211698]
We propose a competence-aware curriculum for visual concept learning in a question-answering manner.
We design a neural-symbolic concept learner for learning the visual concepts and a multi-dimensional Item Response Theory (mIRT) model for guiding the learning process.
Experimental results on CLEVR show that with a competence-aware curriculum, the proposed method achieves state-of-the-art performances.
arXiv Detail & Related papers (2020-07-03T05:08:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here (including all summaries) and is not responsible for any consequences of its use.