This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition
- URL: http://arxiv.org/abs/2011.02863v2
- Date: Wed, 31 Mar 2021 07:13:23 GMT
- Title: This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition
- Authors: Meike Nauta, Annemarie Jutte, Jesper Provoost, Christin Seifert
- Abstract summary: We argue that prototypes should be explained.
Our method clarifies the meaning of a prototype by quantifying the influence of colour hue, shape, texture, contrast and saturation.
By explaining such 'misleading' prototypes, we improve the interpretability and simulatability of a prototype-based classification model.
- Score: 4.396860522241307
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image recognition with prototypes is considered an interpretable alternative
for black box deep learning models. Classification depends on the extent to
which a test image "looks like" a prototype. However, perceptual similarity for
humans can be different from the similarity learned by the classification
model. Hence, only visualising prototypes can be insufficient for a user to
understand what a prototype exactly represents, and why the model considers a
prototype and an image to be similar. We address this ambiguity and argue that
prototypes should be explained. We improve interpretability by automatically
enhancing visual prototypes with textual quantitative information about visual
characteristics deemed important by the classification model. Specifically, our
method clarifies the meaning of a prototype by quantifying the influence of
colour hue, shape, texture, contrast and saturation and can generate both
global and local explanations. Because of the generality of our approach, it
can improve the interpretability of any similarity-based method for
prototypical image recognition. In our experiments, we apply our method to the
existing Prototypical Part Network (ProtoPNet). Our analysis confirms that the
global explanations are generalisable, and often correspond to the visually
perceptible properties of a prototype. Our explanations are especially relevant
for prototypes which might have been interpreted incorrectly otherwise. By
explaining such 'misleading' prototypes, we improve the interpretability and
simulatability of a prototype-based classification model. We also use our
method to check whether visually similar prototypes have similar explanations,
and are able to discover redundancy. Code is available at
https://github.com/M-Nauta/Explaining_Prototypes .
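The abstract's core idea, quantifying a visual characteristic's influence by suppressing it and measuring the change in prototype similarity, can be illustrated with a minimal sketch. This is not the authors' implementation: `characteristic_importance`, `remove_hue`, and the toy similarity function are illustrative assumptions standing in for a trained prototype network's similarity score.

```python
import numpy as np

def characteristic_importance(image, similarity_fn, perturb_fn):
    """Quantify how much a visual characteristic drives a prototype's
    similarity score: suppress the characteristic via perturb_fn and
    measure the relative drop in similarity."""
    base = similarity_fn(image)
    perturbed = similarity_fn(perturb_fn(image))
    return (base - perturbed) / (abs(base) + 1e-8)

def remove_hue(image):
    """Suppress colour information by replacing each pixel with its
    greyscale luminance, repeated over the three channels."""
    grey = image @ np.array([0.299, 0.587, 0.114])
    return np.repeat(grey[..., None], 3, axis=2)

# Toy stand-in for a learned prototype-similarity function: it simply
# responds to the mean of the red channel.
def toy_similarity(image):
    return float(image[..., 0].mean())

# A strongly red image patch: removing hue should cause a large drop
# in the red-sensitive similarity, i.e. a high importance score.
img = np.zeros((8, 8, 3))
img[..., 0] = 0.8
hue_importance = characteristic_importance(img, toy_similarity, remove_hue)
```

Repeating this with perturbations targeting texture, contrast, or saturation yields one importance score per characteristic, which can then be rendered as the textual explanation attached to each prototype.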
Related papers
- This Looks Like Those: Illuminating Prototypical Concepts Using Multiple Visualizations [19.724372592639774]
ProtoConcepts is a method for interpretable image classification combining deep learning and case-based reasoning.
Our proposed method modifies the architecture of prototype-based networks to instead learn concepts which are visualized using multiple image patches.
Our experiments show that our 'this looks like those' reasoning process can be applied as a modification to a wide range of existing prototypical image classification networks.
arXiv Detail & Related papers (2023-10-28T04:54:48Z)
- Sanity checks for patch visualisation in prototype-based image classification [0.0]
We show that the visualisation methods implemented in ProtoPNet and ProtoTree do not correctly identify the regions of interest inside of the images.
We also demonstrate quantitatively that this issue can be mitigated by using other saliency methods that provide more faithful image patches.
arXiv Detail & Related papers (2023-10-25T08:13:02Z)
- Rethinking Person Re-identification from a Projection-on-Prototypes Perspective [84.24742313520811]
Person Re-IDentification (Re-ID) as a retrieval task, has achieved tremendous development over the past decade.
We propose a new baseline ProNet, which innovatively reserves the function of the classifier at the inference stage.
Experiments on four benchmarks demonstrate that our proposed ProNet is simple yet effective, and significantly beats previous baselines.
arXiv Detail & Related papers (2023-08-21T13:38:10Z)
- Towards Human-Interpretable Prototypes for Visual Assessment of Image Classification Models [9.577509224534323]
We need models that are interpretable by design, built on a reasoning process similar to that of humans.
ProtoPNet claims to discover visually meaningful prototypes in an unsupervised way.
We find that these prototypes still fall well short of providing definitive explanations.
arXiv Detail & Related papers (2022-11-22T11:01:22Z)
- Rethinking Semantic Segmentation: A Prototype View [126.59244185849838]
We present a nonparametric semantic segmentation model based on non-learnable prototypes.
Our framework yields compelling results over several datasets.
We expect this work will provoke a rethink of the current de facto semantic segmentation model design.
arXiv Detail & Related papers (2022-03-28T21:15:32Z)
- Interpretable Image Classification with Differentiable Prototypes Assignment [7.660883761395447]
We introduce ProtoPool, an interpretable image classification model with a pool of prototypes shared by the classes.
It is obtained by introducing a fully differentiable assignment of prototypes to particular classes.
We show that ProtoPool obtains state-of-the-art accuracy on the CUB-200-2011 and the Stanford Cars datasets, substantially reducing the number of prototypes.
arXiv Detail & Related papers (2021-12-06T10:03:32Z)
- Deformable ProtoPNet: An Interpretable Image Classifier Using Deformable Prototypes [7.8515366468594765]
We present a deformable part network (Deformable ProtoPNet) that integrates the power of deep learning and the interpretability of case-based reasoning.
This model classifies input images by comparing them with prototypes learned during training, yielding explanations in the form of "this looks like that".
arXiv Detail & Related papers (2021-11-29T22:38:13Z)
- Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in the prototype feature space.
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on PASCAL-5i and COCO-20i datasets.
arXiv Detail & Related papers (2021-11-09T08:14:50Z)
- Prototypical Representation Learning for Relation Extraction [56.501332067073065]
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models.
arXiv Detail & Related papers (2021-03-22T08:11:43Z)
- Toward Scalable and Unified Example-based Explanation and Outlier Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks beyond similarity kernels deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
arXiv Detail & Related papers (2020-11-11T05:58:17Z)
- Learning Sparse Prototypes for Text Generation [120.38555855991562]
Prototype-driven text generation is inefficient at test time because it must store and index the entire training corpus.
We propose a novel generative model that automatically learns a sparse prototype support set that achieves strong language modeling performance.
In experiments, our model outperforms previous prototype-driven language models while achieving up to a 1000x memory reduction.
arXiv Detail & Related papers (2020-06-29T19:41:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.