Deformable ProtoPNet: An Interpretable Image Classifier Using Deformable Prototypes
- URL: http://arxiv.org/abs/2111.15000v3
- Date: Thu, 2 May 2024 20:21:45 GMT
- Title: Deformable ProtoPNet: An Interpretable Image Classifier Using Deformable Prototypes
- Authors: Jon Donnelly, Alina Jade Barnett, Chaofan Chen,
- Abstract summary: We present a deformable prototypical part network (Deformable ProtoPNet) that integrates the power of deep learning and the interpretability of case-based reasoning.
This model classifies input images by comparing them with prototypes learned during training, yielding explanations in the form of "this looks like that."
- Score: 7.8515366468594765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a deformable prototypical part network (Deformable ProtoPNet), an interpretable image classifier that integrates the power of deep learning and the interpretability of case-based reasoning. This model classifies input images by comparing them with prototypes learned during training, yielding explanations in the form of "this looks like that." However, while previous methods use spatially rigid prototypes, we address this shortcoming by proposing spatially flexible prototypes. Each prototype is made up of several prototypical parts that adaptively change their relative spatial positions depending on the input image. Consequently, a Deformable ProtoPNet can explicitly capture pose variations and context, improving both model accuracy and the richness of explanations provided. Compared to other case-based interpretable models using prototypes, our approach achieves state-of-the-art accuracy and gives an explanation with greater context. The code is available at https://github.com/jdonnelly36/Deformable-ProtoPNet.
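The abstract describes prototypes composed of several prototypical parts whose relative spatial positions adapt to the input image. The following is a minimal, illustrative sketch of that idea, not the paper's actual implementation: here each part simply searches a small window around its default offset for its best match, whereas the paper learns interpolated offsets end-to-end. All function and parameter names are hypothetical.

```python
import numpy as np

def deformable_prototype_similarity(feature_map, parts, center, search=1):
    """Toy sketch of a deformable prototype match.

    feature_map: (H, W, D) array of L2-normalized conv features.
    parts: list of (dy, dx, vector) prototypical parts, where (dy, dx)
        is the part's default offset from the prototype center and
        `vector` is its D-dimensional representation.
    center: (y, x) location at which the prototype is applied.
    search: each part may deform by up to `search` cells in y and x
        (a crude stand-in for the learned offsets in the paper).
    """
    H, W, _ = feature_map.shape
    total = 0.0
    for dy, dx, vec in parts:
        best = -1.0
        # Let the part slide in a small window around its default position
        # and keep the highest similarity it finds.
        for oy in range(-search, search + 1):
            for ox in range(-search, search + 1):
                y, x = center[0] + dy + oy, center[1] + dx + ox
                if 0 <= y < H and 0 <= x < W:
                    best = max(best, float(feature_map[y, x] @ vec))
        total += best
    return total / len(parts)  # mean per-part similarity
```

Because each part can shift independently, the prototype as a whole can match the same semantic parts across pose variations, which is what the abstract credits for the improved accuracy and richer explanations.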
Related papers
- Interpretable Image Classification with Adaptive Prototype-based Vision Transformers [37.62530032165594]
We present ProtoViT, a method for interpretable image classification combining deep learning and case-based reasoning.
Our model integrates Vision Transformer (ViT) backbones into prototype-based models while offering spatially deformed prototypes.
Our experiments show that our model generally achieves higher performance than existing prototype-based models.
arXiv Detail & Related papers (2024-10-28T04:33:28Z)
- Mind the Gap Between Prototypes and Images in Cross-domain Finetuning [64.97317635355124]
We propose a contrastive prototype-image adaptation (CoPA) to adapt different transformations respectively for prototypes and images.
Experiments on Meta-Dataset demonstrate that CoPA achieves the state-of-the-art performance more efficiently.
arXiv Detail & Related papers (2024-10-16T11:42:11Z)
- Multi-Scale Grouped Prototypes for Interpretable Semantic Segmentation [7.372346036256517]
Prototypical part learning is emerging as a promising approach for making semantic segmentation interpretable.
We propose a method for interpretable semantic segmentation that leverages multi-scale image representation for prototypical part learning.
Experiments conducted on Pascal VOC, Cityscapes, and ADE20K demonstrate that the proposed method increases model sparsity, improves interpretability over existing prototype-based methods, and narrows the performance gap with the non-interpretable counterpart models.
arXiv Detail & Related papers (2024-09-14T17:52:59Z)
- Query-guided Prototype Evolution Network for Few-Shot Segmentation [85.75516116674771]
We present a new method that integrates query features into the generation process of foreground and background prototypes.
Experimental results on the PASCAL-$5^i$ and COCO-$20^i$ datasets attest to the substantial enhancements achieved by QPENet.
arXiv Detail & Related papers (2024-03-11T07:50:40Z)
- ProtoArgNet: Interpretable Image Classification with Super-Prototypes and Argumentation [Technical Report] [17.223442899324482]
ProtoArgNet is a novel interpretable deep neural architecture for image classification in the spirit of prototypical-part-learning.
ProtoArgNet uses super-prototypes that combine prototypical-parts into a unified class representation.
We demonstrate on several datasets that ProtoArgNet outperforms state-of-the-art prototypical-part-learning approaches.
arXiv Detail & Related papers (2023-11-26T21:52:47Z)
- This Looks Like Those: Illuminating Prototypical Concepts Using Multiple Visualizations [19.724372592639774]
ProtoConcepts is a method for interpretable image classification combining deep learning and case-based reasoning.
Our proposed method modifies the architecture of prototype-based networks to instead learn concepts which are visualized using multiple image patches.
Our experiments show that our "this looks like those" reasoning process can be applied as a modification to a wide range of existing prototypical image classification networks.
arXiv Detail & Related papers (2023-10-28T04:54:48Z)
- Rethinking Person Re-identification from a Projection-on-Prototypes Perspective [84.24742313520811]
Person Re-IDentification (Re-ID) as a retrieval task, has achieved tremendous development over the past decade.
We propose a new baseline ProNet, which innovatively reserves the function of the classifier at the inference stage.
Experiments on four benchmarks demonstrate that our proposed ProNet is simple yet effective, and significantly beats previous baselines.
arXiv Detail & Related papers (2023-08-21T13:38:10Z)
- Rethinking Semantic Segmentation: A Prototype View [126.59244185849838]
We present a nonparametric semantic segmentation model based on non-learnable prototypes.
Our framework yields compelling results over several datasets.
We expect this work will provoke a rethink of the current de facto semantic segmentation model design.
arXiv Detail & Related papers (2022-03-28T21:15:32Z)
- Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in the prototype feature space.
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on PASCAL-5i and COCO-20i datasets.
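The intra-class/inter-class objective described in this summary can be sketched as a simple loss over class prototypes. This is an illustrative toy version only: the function name, the use of class-mean prototypes, the squared-distance terms, and the hinge `margin` are all assumptions for exposition, not details from the paper.

```python
import numpy as np

def prototype_contrastive_loss(features, labels, margin=1.0):
    """Toy dual-contrastive objective: pull features toward their
    class prototype (small intra-class distance) and push prototypes
    of different classes apart (large inter-class distance).
    `margin` is an assumed hinge margin, not a value from the paper.
    """
    classes = np.unique(labels)
    # Class prototypes as mean feature vectors.
    protos = {c: features[labels == c].mean(axis=0) for c in classes}

    # Intra-class term: mean squared distance of features to their prototype.
    intra = np.mean([np.sum((f - protos[c]) ** 2)
                     for f, c in zip(features, labels)])

    # Inter-class term: hinge penalty on prototype pairs closer than margin.
    inter, pairs = 0.0, 0
    for i, a in enumerate(classes):
        for b in classes[i + 1:]:
            d = np.linalg.norm(protos[a] - protos[b])
            inter += max(0.0, margin - d) ** 2
            pairs += 1
    inter = inter / pairs if pairs else 0.0
    return intra + inter  # lower is better: tight, well-separated classes
```

The loss reaches zero exactly when every class is collapsed onto its prototype and all prototype pairs are at least `margin` apart, which is the geometry the summary describes.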
arXiv Detail & Related papers (2021-11-09T08:14:50Z)
- This Looks Like That, Because ... Explaining Prototypes for Interpretable Image Recognition [4.396860522241307]
We argue that prototypes should be explained.
Our method clarifies the meaning of a prototype by quantifying the influence of colour hue, shape, texture, contrast and saturation.
By explaining such 'misleading' prototypes, we improve the interpretability and simulatability of a prototype-based classification model.
arXiv Detail & Related papers (2020-11-05T14:43:07Z)
- Learning Sparse Prototypes for Text Generation [120.38555855991562]
Prototype-driven text generation is inefficient at test time because it must store and index the entire training corpus.
We propose a novel generative model that automatically learns a sparse prototype support set that achieves strong language modeling performance.
In experiments, our model outperforms previous prototype-driven language models while achieving up to a 1000x memory reduction.
arXiv Detail & Related papers (2020-06-29T19:41:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.