Explaining Classifiers by Constructing Familiar Concepts
- URL: http://arxiv.org/abs/2203.04109v1
- Date: Mon, 7 Mar 2022 12:21:06 GMT
- Title: Explaining Classifiers by Constructing Familiar Concepts
- Authors: Johannes Schneider and Michail Vlachos
- Abstract summary: We propose a decoder that transforms the incomprehensible representation of neurons into a representation that is more similar to the domain a human is familiar with.
An extension of ClaDec allows trading comprehensibility and fidelity.
We show that ClaDec tends to highlight input areas more relevant to classification, though outcomes depend on the architecture.
- Score: 2.7514191327409714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Interpreting a large number of neurons in deep learning is difficult. Our
proposed 'CLAssifier-DECoder' architecture (ClaDec) facilitates the
understanding of the output of an arbitrary layer of neurons or subsets
thereof. It uses a decoder that transforms the incomprehensible representation
of the given neurons to a representation that is more similar to the domain a
human is familiar with. In an image recognition problem, one can recognize what
information (or concepts) a layer maintains by contrasting reconstructed images
of ClaDec with those of a conventional auto-encoder (AE) serving as reference.
An extension of ClaDec allows trading comprehensibility and fidelity. We
evaluate our approach for image classification using convolutional neural
networks. We show that reconstructed visualizations using encodings from a
classifier capture more relevant classification information than conventional
AEs. This holds although AEs contain more information on the original input.
Our user study highlights that even non-experts can identify a diverse set of
concepts contained in images that are relevant (or irrelevant) for the
classifier. We also compare against saliency-based methods that focus on pixel
relevance rather than concepts. We show that ClaDec tends to highlight input
areas more relevant to classification, though outcomes depend on the classifier
architecture. Code is at https://github.com/JohnTailor/ClaDec
Related papers
- DXAI: Explaining Classification by Image Decomposition [4.013156524547072]
We propose a new way to visualize neural network classification through a decomposition-based explainable AI (DXAI)
Instead of providing an explanation heatmap, our method yields a decomposition of the image into class-agnostic and class-distinct parts.
arXiv Detail & Related papers (2023-12-30T20:52:20Z)
- Improving Deep Representation Learning via Auxiliary Learnable Target Coding [69.79343510578877]
This paper introduces a novel learnable target coding as an auxiliary regularization of deep representation learning.
Specifically, a margin-based triplet loss and a correlation consistency loss on the proposed target codes are designed to encourage more discriminative representations.
arXiv Detail & Related papers (2023-05-30T01:38:54Z)
- AttriCLIP: A Non-Incremental Learner for Incremental Knowledge Learning [53.32576252950481]
Continual learning aims to enable a model to incrementally learn knowledge from sequentially arrived data.
In this paper, we propose a non-incremental learner, named AttriCLIP, to incrementally extract knowledge of new classes or tasks.
arXiv Detail & Related papers (2023-05-19T07:39:17Z)
- Visual Recognition with Deep Nearest Centroids [57.35144702563746]
We devise deep nearest centroids (DNC), a conceptually elegant yet surprisingly effective network for large-scale visual recognition.
Compared with parametric counterparts, DNC performs better on image classification (CIFAR-10, ImageNet) and greatly boosts pixel-level recognition (ADE20K, Cityscapes).
arXiv Detail & Related papers (2022-09-15T15:47:31Z)
- Deep ensembles in bioimage segmentation [74.01883650587321]
In this work, we propose an ensemble of convolutional neural networks (CNNs)
In ensemble methods, many different models are trained and then used for classification; the ensemble aggregates the outputs of the individual classifiers.
The proposed ensemble is implemented by combining different backbone networks using the DeepLabV3+ and HarDNet environment.
arXiv Detail & Related papers (2021-12-24T05:54:21Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Explaining Neural Networks by Decoding Layer Activations [3.6245632117657816]
We present a 'CLAssifier-DECoder' architecture (ClaDec) which facilitates the comprehension of the output of an arbitrary layer in a neural network (NN).
It uses a decoder to transform the non-interpretable representation of the given layer to a representation more similar to the domain a human is familiar with.
In an image recognition problem, one can recognize what information is represented by a layer by contrasting reconstructed images of ClaDec with those of a conventional auto-encoder (AE) serving as reference.
arXiv Detail & Related papers (2020-05-27T20:22:10Z)
- Hierarchical Image Classification using Entailment Cone Embeddings [68.82490011036263]
We first inject label-hierarchy knowledge into an arbitrary CNN-based classifier.
We empirically show that availability of such external semantic information in conjunction with the visual semantics from images boosts overall performance.
arXiv Detail & Related papers (2020-04-02T10:22:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.