Advancing Post Hoc Case Based Explanation with Feature Highlighting
- URL: http://arxiv.org/abs/2311.03246v1
- Date: Mon, 6 Nov 2023 16:34:48 GMT
- Title: Advancing Post Hoc Case Based Explanation with Feature Highlighting
- Authors: Eoin Kenny and Eoin Delaney and Mark Keane
- Abstract summary: We propose two general algorithms which can isolate multiple clear feature parts in a test image, and then connect them to the explanatory cases found in the training data.
Results demonstrate that the proposed approach appropriately calibrates a user's feelings of 'correctness' for ambiguous classifications in real-world data.
- Score: 0.8287206589886881
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable AI (XAI) has been proposed as a valuable tool to assist in
downstream tasks involving human and AI collaboration. Perhaps the most
psychologically valid XAI techniques are case based approaches which display
'whole' exemplars to explain the predictions of black box AI systems. However,
for such post hoc XAI methods dealing with images, there has been no attempt to
improve their scope by using multiple clear feature 'parts' of the images to
explain the predictions while linking back to relevant cases in the training
data, thus allowing for more comprehensive explanations that are faithful to
the underlying model. Here, we address this gap by proposing two general
algorithms (latent and super pixel based) which can isolate multiple clear
feature parts in a test image, and then connect them to the explanatory cases
found in the training data, before testing their effectiveness in a carefully
designed user study. Results demonstrate that the proposed approach
appropriately calibrates a user's feelings of 'correctness' for ambiguous
classifications in real-world data from the ImageNet dataset, an effect which
does not occur when the explanation is shown without feature highlighting.
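The abstract describes two algorithms but gives no pseudocode, so the following is a minimal illustrative sketch of the superpixel variant only, under stated assumptions: `feature_fn` is a hypothetical stand-in for the black-box model's latent feature extractor, and `train_feats` holds precomputed latent vectors for the training images (neither name is from the paper). The latent variant would instead localize parts from the model's own feature maps.

```python
# Minimal sketch of a superpixel-based case explanation, assuming the
# hypothetical helpers `feature_fn` (image -> latent vector) and
# `train_feats` (latent vectors for all training images).
import numpy as np
from skimage.segmentation import slic
from sklearn.neighbors import NearestNeighbors

def explain_with_parts(test_image, train_feats, feature_fn,
                       n_segments=50, top_k=3):
    """Isolate superpixel 'parts' of `test_image` and link each part to
    its nearest explanatory case in the training data."""
    segments = slic(test_image, n_segments=n_segments, start_label=0)
    nn = NearestNeighbors(n_neighbors=1).fit(train_feats)

    parts = []
    for seg_id in np.unique(segments):
        mask = segments == seg_id
        # Keep only this part visible; grey out the rest of the image.
        masked = np.where(mask[..., None], test_image, test_image.mean())
        feat = np.asarray(feature_fn(masked)).reshape(1, -1)
        dist, idx = nn.kneighbors(feat)
        parts.append((mask, int(idx[0, 0]), float(dist[0, 0])))

    # Surface the parts that most closely match a training case.
    parts.sort(key=lambda p: p[-1])
    return parts[:top_k]  # each: (part mask, training-case index, distance)
```

Because each part is scored by its distance to a training case in the model's own latent space, the returned exemplars stay faithful to the underlying model rather than to a separate saliency heuristic, which is the property the paper's user study evaluates.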
Related papers
- Positive-Unlabelled Learning for Improving Image-based Recommender System Explainability [2.9748898344267785]
This work proposes a new explainer training pipeline by leveraging Positive-Unlabelled (PU) Learning techniques.
Experiments show this PU-based approach outperforms the state-of-the-art non-PU method on six popular real-world datasets.
arXiv Detail & Related papers (2024-07-09T10:40:31Z)
- Raising the Bar of AI-generated Image Detection with CLIP [50.345365081177555]
The aim of this work is to explore the potential of pre-trained vision-language models (VLMs) for universal detection of AI-generated images.
We develop a lightweight detection strategy based on CLIP features and study its performance in a wide variety of challenging scenarios.
arXiv Detail & Related papers (2023-11-30T21:11:20Z)
- Extending CAM-based XAI methods for Remote Sensing Imagery Segmentation [7.735470452949379]
We introduce a new XAI evaluation methodology and a metric based on "Entropy" to measure model uncertainty.
We show that using Entropy to monitor the model's uncertainty when segmenting the pixels within the target class is more suitable; a toy version of this idea is sketched after the list below.
arXiv Detail & Related papers (2023-10-03T07:01:23Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- An Efficient Ensemble Explainable AI (XAI) Approach for Morphed Face Detection [1.2599533416395763]
We present a novel visual explanation approach named Ensemble XAI to provide a more comprehensive visual explanation for a deep learning prognostic model (EfficientNet-Grad1).
The experiments were performed on three publicly available datasets, namely the Face Research Lab London Set, Wide Multi-Channel Presentation Attack (WMCA), and Makeup Induced Face Spoofing (MIFS).
arXiv Detail & Related papers (2023-04-23T13:43:06Z)
- Foiling Explanations in Deep Neural Networks [0.0]
This paper uncovers a troubling property of explanation methods for image-based DNNs.
We demonstrate how explanations may be arbitrarily manipulated through the use of evolution strategies.
Our novel algorithm successfully manipulates an image in a manner imperceptible to the human eye.
arXiv Detail & Related papers (2022-11-27T15:29:39Z)
- Exploring CLIP for Assessing the Look and Feel of Images [87.97623543523858]
We introduce Contrastive Language-Image Pre-training (CLIP) models for assessing both the quality perception (look) and abstract perception (feel) of images in a zero-shot manner.
Our results show that CLIP captures meaningful priors that generalize well to different perceptual assessments.
arXiv Detail & Related papers (2022-07-25T17:58:16Z)
- Unpaired Image Captioning by Image-level Weakly-Supervised Visual Concept Recognition [83.93422034664184]
Unpaired image captioning (UIC) aims to describe images without using image-caption pairs during training.
Most existing studies use off-the-shelf algorithms to obtain the visual concepts.
We propose a novel approach to achieve cost-effective UIC using image-level labels.
arXiv Detail & Related papers (2022-03-07T08:02:23Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach performs significantly better than two state-of-the-art systems regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
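As a toy illustration of the Entropy-based uncertainty idea from the CAM-based XAI entry above (a sketch under assumptions; the cited paper's exact metric may differ), the following computes the mean Shannon entropy of a segmentation model's per-pixel softmax output, restricted to pixels predicted as the target class:

```python
# Hedged sketch of an entropy-style uncertainty score for segmentation;
# this is an illustrative assumption, not the cited paper's exact metric.
import numpy as np

def mean_target_entropy(probs, target_class):
    """probs: (H, W, C) per-pixel softmax output of a segmentation model.
    Returns the mean Shannon entropy over pixels predicted as `target_class`;
    higher values indicate the model is less certain about that class."""
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=-1)  # (H, W)
    mask = probs.argmax(axis=-1) == target_class
    return float(entropy[mask].mean()) if mask.any() else float("nan")
```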