This Looks Like Those: Illuminating Prototypical Concepts Using Multiple Visualizations
- URL: http://arxiv.org/abs/2310.18589v1
- Date: Sat, 28 Oct 2023 04:54:48 GMT
- Title: This Looks Like Those: Illuminating Prototypical Concepts Using Multiple Visualizations
- Authors: Chiyu Ma, Brandon Zhao, Chaofan Chen, Cynthia Rudin
- Abstract summary: ProtoConcepts is a method for interpretable image classification combining deep learning and case-based reasoning.
Our proposed method modifies the architecture of prototype-based networks to instead learn prototypical concepts, which are visualized using multiple image patches.
Our experiments show that our ``this looks like those'' reasoning process can be applied as a modification to a wide range of existing prototypical image classification networks.
- Score: 19.724372592639774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present ProtoConcepts, a method for interpretable image classification
combining deep learning and case-based reasoning using prototypical parts.
Existing work in prototype-based image classification uses a ``this looks like
that'' reasoning process, which dissects a test image by finding prototypical
parts and combining evidence from these prototypes to make a final
classification. However, all of the existing prototypical part-based image
classifiers provide only one-to-one comparisons, where a single training image
patch serves as a prototype to compare with a part of our test image. With
these single-image comparisons, it can often be difficult to identify the
underlying concept being compared (e.g., ``is it comparing the color or the
shape?''). Our proposed method modifies the architecture of prototype-based
networks to instead learn prototypical concepts which are visualized using
multiple image patches. Having multiple visualizations of the same prototype
allows us to more easily identify the concept captured by that prototype (e.g.,
``the test image and the related training patches are all the same shade of
blue''), and allows our model to create richer, more interpretable visual
explanations. Our experiments show that our ``this looks like those'' reasoning
process can be applied as a modification to a wide range of existing
prototypical image classification networks while achieving comparable accuracy
on benchmark datasets.
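To make the proposed modification concrete, the sketch below illustrates the general idea under stated assumptions: a prototype layer scores backbone patch features against learned prototype vectors for classification, and each prototype is visualized by its k most similar training patches rather than a single patch. This is not the authors' released code; the class and function names (ProtoConceptLayer, top_k_patches) and the cosine-similarity and max-pooling choices are illustrative assumptions.

```python
# A minimal sketch (not the authors' code) of the "this looks like those" idea:
# a prototype layer scores image patches against learned prototype vectors,
# and each prototype is visualized by its k nearest training patches rather
# than a single one. All names and design choices here are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ProtoConceptLayer(nn.Module):
    def __init__(self, num_prototypes: int, dim: int, num_classes: int):
        super().__init__()
        # Learned prototype vectors, one per prototypical concept.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, dim))
        # Linear layer combining prototype evidence into class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, patch_features: torch.Tensor) -> torch.Tensor:
        # patch_features: (batch, num_patches, dim) from a CNN/ViT backbone.
        sims = F.cosine_similarity(
            patch_features.unsqueeze(2),  # (B, P, 1, D)
            self.prototypes[None, None],  # (1, 1, K, D)
            dim=-1,
        )                                 # (B, P, K)
        # Max-pool over patches: how strongly each concept appears anywhere.
        concept_scores = sims.max(dim=1).values  # (B, K)
        return self.classifier(concept_scores)


def top_k_patches(prototype: torch.Tensor, train_patches: torch.Tensor, k: int = 5):
    """Visualize a prototype by its k most similar training patches, so the
    shared attribute (e.g. 'the same shade of blue') is easier to spot."""
    sims = F.cosine_similarity(train_patches, prototype[None], dim=-1)
    return sims.topk(k).indices  # indices into the training patch pool
```

In this sketch, inspecting the patches returned by top_k_patches for a given prototype is what supports the ``this looks like those'' explanation: the attribute shared across the retrieved patches reveals the concept the prototype has captured.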
Related papers
- Interpretable Image Classification with Adaptive Prototype-based Vision Transformers [37.62530032165594]
We present ProtoViT, a method for interpretable image classification combining deep learning and case-based reasoning.
Our model integrates Vision Transformer (ViT) backbones into prototype-based models while offering spatially deformed prototypes.
Our experiments show that our model generally achieves higher performance than existing prototype-based models.
arXiv Detail & Related papers (2024-10-28T04:33:28Z)
- Mind the Gap Between Prototypes and Images in Cross-domain Finetuning [64.97317635355124]
We propose a contrastive prototype-image adaptation (CoPA) to adapt different transformations respectively for prototypes and images.
Experiments on Meta-Dataset demonstrate that CoPA achieves the state-of-the-art performance more efficiently.
arXiv Detail & Related papers (2024-10-16T11:42:11Z)
- Sanity checks for patch visualisation in prototype-based image classification [0.0]
We show that the visualisation methods implemented in ProtoPNet and ProtoTree do not correctly identify the regions of interest inside the images.
We also demonstrate quantitatively that this issue can be mitigated by using other saliency methods that provide more faithful image patches.
arXiv Detail & Related papers (2023-10-25T08:13:02Z)
- Rethinking Person Re-identification from a Projection-on-Prototypes Perspective [84.24742313520811]
Person Re-IDentification (Re-ID), as a retrieval task, has achieved tremendous progress over the past decade.
We propose a new baseline, ProNet, which innovatively retains the function of the classifier at the inference stage.
Experiments on four benchmarks demonstrate that our proposed ProNet is simple yet effective, and significantly outperforms previous baselines.
arXiv Detail & Related papers (2023-08-21T13:38:10Z)
- Rethinking Semantic Segmentation: A Prototype View [126.59244185849838]
We present a nonparametric semantic segmentation model based on non-learnable prototypes.
Our framework yields compelling results on several datasets.
We expect this work will provoke a rethink of the current de facto semantic segmentation model design.
arXiv Detail & Related papers (2022-03-28T21:15:32Z)
- Interpretable Image Classification with Differentiable Prototypes Assignment [7.660883761395447]
We introduce ProtoPool, an interpretable image classification model with a pool of prototypes shared by the classes.
It is obtained by introducing a fully differentiable assignment of prototypes to particular classes; a rough sketch of such an assignment appears after this list.
We show that ProtoPool obtains state-of-the-art accuracy on the CUB-200-2011 and the Stanford Cars datasets, substantially reducing the number of prototypes.
arXiv Detail & Related papers (2021-12-06T10:03:32Z)
- Deformable ProtoPNet: An Interpretable Image Classifier Using Deformable Prototypes [7.8515366468594765]
We present a deformable part network (Deformable ProtoPNet) that integrates the power of deep learning and the interpretability of case-based reasoning.
This model classifies input images by comparing them with prototypes learned during training, yielding explanations in the form of ``this looks like that''.
arXiv Detail & Related papers (2021-11-29T22:38:13Z)
- APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic Segmentation [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a given query image with only a few labeled support images.
Most advanced solutions exploit a metric learning framework that performs segmentation through matching each query feature to a learned class-specific prototype.
We present an adaptive prototype representation by introducing class-specific and class-agnostic prototypes.
arXiv Detail & Related papers (2021-11-24T04:38:37Z)
- Aligning Visual Prototypes with BERT Embeddings for Few-Shot Learning [48.583388368897126]
Few-shot learning is the task of learning to recognize previously unseen categories of images.
We propose a method that takes into account the names of the image classes.
arXiv Detail & Related papers (2021-05-21T08:08:28Z)
- SCNet: Enhancing Few-Shot Semantic Segmentation by Self-Contrastive Background Prototypes [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a query image with only a few annotated examples.
Most advanced solutions exploit a metric learning framework that performs segmentation by matching each pixel to a learned foreground prototype.
This framework suffers from biased classification because sample pairs are constructed with the foreground prototype only.
arXiv Detail & Related papers (2021-04-19T11:21:47Z)
- This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition [4.396860522241307]
We argue that prototypes should be explained.
Our method clarifies the meaning of a prototype by quantifying the influence of colour hue, shape, texture, contrast and saturation.
By explaining such ``misleading'' prototypes, we improve the interpretability and simulatability of a prototype-based classification model.
arXiv Detail & Related papers (2020-11-05T14:43:07Z)
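As referenced in the ProtoPool entry above, the following is a rough sketch of one standard way to make prototype-to-class assignment fully differentiable: a Gumbel-softmax relaxation over a shared prototype pool. This is an assumption-laden illustration, not ProtoPool's actual implementation; all names (SharedPrototypeAssignment, slots_per_class) are hypothetical.

```python
# A rough sketch (an assumption, not ProtoPool's code) of a fully
# differentiable assignment of shared pooled prototypes to classes,
# using the Gumbel-softmax relaxation so the assignment is trainable
# end-to-end by gradient descent.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SharedPrototypeAssignment(nn.Module):
    def __init__(self, num_prototypes: int, num_classes: int, slots_per_class: int):
        super().__init__()
        # One learnable distribution over the prototype pool per class slot.
        self.logits = nn.Parameter(
            torch.zeros(num_classes, slots_per_class, num_prototypes)
        )

    def forward(self, concept_scores: torch.Tensor, tau: float = 1.0):
        # concept_scores: (batch, num_prototypes) similarities to the pool.
        # Soft one-hot assignment of each class slot to one pooled prototype.
        assign = F.gumbel_softmax(self.logits, tau=tau, hard=True, dim=-1)
        # Class logits: sum of the scores of the prototypes each class selected.
        return torch.einsum("bk,csk->bc", concept_scores, assign)
```

Because hard=True uses a straight-through estimator, the forward pass picks discrete prototypes per class slot while gradients still flow to the assignment logits, which is what allows classes to share a small pool of prototypes.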
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.