Explaining AI-based Decision Support Systems using Concept Localization Maps
- URL: http://arxiv.org/abs/2005.01399v1
- Date: Mon, 4 May 2020 11:33:00 GMT
- Title: Explaining AI-based Decision Support Systems using Concept Localization Maps
- Authors: Adriano Lucieri, Muhammad Naseer Bajwa, Andreas Dengel and Sheraz Ahmed
- Abstract summary: Concept Localization Maps (CLMs) are a novel approach towards explainable image classifiers employed as Decision Support Systems (DSS).
CLMs extend Concept Activation Vectors (CAVs) by locating significant regions corresponding to a learned concept in the latent space of a trained image classifier.
We generated a new synthetic dataset called Simple Concept DataBase (SCDB) that includes annotations for 10 distinguishable concepts, and made it publicly available.
We achieved localization recall of above 80% for most relevant concepts and average recall above 60% for all concepts using SE-ResNeXt-50 on SCDB.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Human-centric explainability of AI-based Decision Support Systems (DSS) using
visual input modalities is directly related to reliability and practicality of
such algorithms. An otherwise accurate and robust DSS might not enjoy the trust
of experts in critical application areas if it cannot provide a reasonable
justification of its predictions. This paper introduces Concept Localization
Maps (CLMs), a novel approach to explainable image classifiers
employed as DSS. CLMs extend Concept Activation Vectors (CAVs) by locating
significant regions corresponding to a learned concept in the latent space of a
trained image classifier. They provide qualitative and quantitative assurance
of a classifier's ability to learn and focus on similar concepts important for
humans during image recognition. To better understand the effectiveness of the
proposed method, we generated a new synthetic dataset called Simple Concept
DataBase (SCDB) that includes annotations for 10 distinguishable concepts, and
made it publicly available. We evaluated our proposed method on SCDB as well as
a real-world dataset called CelebA. We achieved localization recall of above
80% for most relevant concepts and average recall above 60% for all concepts
using SE-ResNeXt-50 on SCDB. Our results on both datasets show great promise of
CLMs for easing acceptance of DSS in practice.
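The core mechanism described above — projecting a layer's spatial activations onto a concept direction learned in latent space — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, and the difference-of-means concept direction is a simplification of the linear classifier that the CAV method actually fits on layer activations.

```python
import numpy as np

def fit_cav(concept_acts, random_acts):
    """Fit a simplified Concept Activation Vector (CAV).

    The CAV method trains a linear classifier separating concept
    examples from random examples in a layer's activation space;
    here we approximate its normal with the normalized difference
    of class means. Inputs have shape (n_samples, channels).
    """
    cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return cav / np.linalg.norm(cav)

def concept_localization_map(spatial_acts, cav):
    """Project each spatial position of a feature map onto the CAV.

    spatial_acts: (channels, H, W) activations of one image at the
    chosen layer. Returns an (H, W) map in which high values mark
    regions the classifier associates with the concept.
    """
    c, h, w = spatial_acts.shape
    flat = spatial_acts.reshape(c, h * w)  # one column per location
    clm = cav @ flat                       # dot product per location
    return clm.reshape(h, w)
```

In practice the resulting map would be upsampled to the input resolution before comparing it against concept annotations, as done when computing localization recall.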
Related papers
- Decoding Dark Matter: Specialized Sparse Autoencoders for Interpreting Rare Concepts in Foundation Models [26.748765050034876]
Specialized Sparse Autoencoders (SSAEs) illuminate elusive dark matter features by focusing on specific subdomains.
We show that SSAEs effectively capture subdomain tail concepts, exceeding the capabilities of general-purpose SAEs.
We showcase the practical utility of SSAEs in a case study on the Bias in Bios dataset, where SSAEs achieve a 12.5% increase in worst-group classification accuracy when applied to remove spurious gender information.
arXiv Detail & Related papers (2024-11-01T17:09:34Z)
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task, and uses concepts already known to the model.
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
- Interpretable Prognostics with Concept Bottleneck Models [5.939858158928473]
Concept Bottleneck Models (CBMs) are inherently interpretable neural network architectures based on concept explanations.
CBMs enable domain experts to intervene on the concept activations at test-time.
Our case studies demonstrate that the performance of CBMs can be on par or superior to black-box models.
arXiv Detail & Related papers (2024-05-27T18:15:40Z)
- Knowledge graphs for empirical concept retrieval [1.06378109904813]
Concept-based explainable AI is promising as a tool to improve the understanding of complex models at the premises of a given user.
Here, we present a workflow for user-driven data collection in both text and image domains.
We test the retrieved concept datasets on two concept-based explainability methods, namely concept activation vectors (CAVs) and concept activation regions (CARs).
arXiv Detail & Related papers (2024-04-10T13:47:22Z)
- Evaluating the Stability of Semantic Concept Representations in CNNs for Robust Explainability [0.0]
This paper focuses on two stability goals when working with concept representations in computer vision CNNs.
The guiding use-case is a post-hoc explainability framework for object detection CNNs.
We propose a novel metric that considers both concept separation and consistency, and is robust to layer choice and concept representation dimensionality.
arXiv Detail & Related papers (2023-04-28T14:14:00Z)
- Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations [14.77637281844823]
We propose the post-hoc eXplainable Artificial Intelligence method L-CRP to generate explanations that automatically identify and visualize relevant concepts learned, recognized and used by the model during inference as well as precisely locate them in input space.
We verify the faithfulness of our proposed technique by quantitatively comparing different concept attribution methods, and discuss the effect on explanation complexity using popular datasets such as CityScapes, Pascal VOC and MS COCO 2017.
arXiv Detail & Related papers (2022-11-21T13:12:23Z)
- Visual Recognition with Deep Nearest Centroids [57.35144702563746]
We devise deep nearest centroids (DNC), a conceptually elegant yet surprisingly effective network for large-scale visual recognition.
Compared with parametric counterparts, DNC performs better on image classification (CIFAR-10, ImageNet) and greatly boosts pixel recognition (ADE20K, Cityscapes).
arXiv Detail & Related papers (2022-09-15T15:47:31Z)
- Impact of a DCT-driven Loss in Attention-based Knowledge-Distillation for Scene Recognition [64.29650787243443]
We propose and analyse the use of a 2D frequency transform of the activation maps before transferring them.
This strategy enhances knowledge transferability in tasks such as scene recognition.
We publicly release the training and evaluation framework used along this paper at http://www.vpu.eps.uam.es/publications/DCTBasedKDForSceneRecognition.
arXiv Detail & Related papers (2022-05-04T11:05:18Z)
- Evaluation of Self-taught Learning-based Representations for Facial Emotion Recognition [62.30451764345482]
This work describes different strategies to generate unsupervised representations obtained through the concept of self-taught learning for facial emotion recognition.
The idea is to create complementary representations promoting diversity by varying the autoencoders' initialization, architecture, and training data.
Experimental results on Jaffe and Cohn-Kanade datasets using a leave-one-subject-out protocol show that FER methods based on the proposed diverse representations compare favorably against state-of-the-art approaches.
arXiv Detail & Related papers (2022-04-26T22:48:15Z)
- Inter-class Discrepancy Alignment for Face Recognition [55.578063356210144]
We propose a unified framework called Inter-class Discrepancy Alignment (IDA).
IDA-DAO is used to align the similarity scores considering the discrepancy between an image and its neighbors.
IDA-SSE can provide convincing inter-class neighbors by introducing virtual candidate images generated with GAN.
arXiv Detail & Related papers (2021-03-02T08:20:08Z)
- Identity-Aware Attribute Recognition via Real-Time Distributed Inference in Mobile Edge Clouds [53.07042574352251]
We design novel models for pedestrian attribute recognition with re-ID in an MEC-enabled camera monitoring system.
We propose a novel inference framework with a set of distributed modules, by jointly considering the attribute recognition and person re-ID.
We then devise a learning-based algorithm for distributing the modules of the proposed inference framework.
arXiv Detail & Related papers (2020-08-12T12:03:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.