ProtoPShare: Prototype Sharing for Interpretable Image Classification and Similarity Discovery
- URL: http://arxiv.org/abs/2011.14340v1
- Date: Sun, 29 Nov 2020 11:23:05 GMT
- Title: ProtoPShare: Prototype Sharing for Interpretable Image Classification and Similarity Discovery
- Authors: Dawid Rymarczyk, Łukasz Struski, Jacek Tabor, Bartosz Zieliński
- Score: 9.36640530008137
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce ProtoPShare, a self-explained method that incorporates the paradigm of prototypical parts to explain its predictions. The main novelty of ProtoPShare is its ability to efficiently share prototypical parts between classes thanks to our data-dependent merge-pruning. Moreover, its prototypes are more consistent, and the model is more robust to image perturbations, than the state-of-the-art method ProtoPNet. We verify our findings on two datasets: CUB-200-2011 and Stanford Cars.
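The abstract does not spell out the merge-pruning procedure, so the following is only a rough NumPy sketch of the general idea: sharing prototypes across classes by merging similar ones and redirecting their class connections. The plain cosine-similarity criterion, the threshold, and all names here are illustrative assumptions, not the paper's actual data-dependent similarity.

```python
import numpy as np

def merge_prune(prototypes, class_weights, threshold=0.95):
    """Greedily merge pairs of prototypes whose cosine similarity exceeds
    `threshold`, summing their last-layer class weights so classes that
    used the pruned prototype now share the kept one.

    prototypes    : (P, D) array, one D-dimensional vector per prototype
    class_weights : (C, P) array, last-layer weight from prototype to class
    Returns the reduced (P', D) prototypes and (C, P') weights.
    """
    protos = [p / np.linalg.norm(p) for p in prototypes]
    weights = list(class_weights.T)          # one (C,) column per prototype
    i = 0
    while i < len(protos):
        j = i + 1
        while j < len(protos):
            if float(protos[i] @ protos[j]) > threshold:
                # Keep prototype i; redirect j's class connections to i.
                weights[i] = weights[i] + weights[j]
                del protos[j], weights[j]
            else:
                j += 1
        i += 1
    return np.stack(protos), np.stack(weights, axis=1)
```

With two near-duplicate prototypes assigned to different classes, the merged prototype ends up connected to both classes, which is the sharing effect the abstract describes.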
Related papers
- This Looks Better than That: Better Interpretable Models with ProtoPNeXt [14.28283868577614]
Prototypical-part models are a popular interpretable alternative to black-box deep learning models for computer vision.
We create a new framework for integrating components of prototypical-part models -- ProtoPNeXt.
arXiv Detail & Related papers (2024-06-20T18:54:27Z)
- Mixture of Gaussian-distributed Prototypes with Generative Modelling for Interpretable and Trustworthy Image Recognition [15.685927265270085]
We present a new generative paradigm to learn prototype distributions, termed Mixture of Gaussian-distributed Prototypes (MGProto).
MGProto achieves state-of-the-art image recognition and OoD detection performance while providing encouraging interpretability results.
arXiv Detail & Related papers (2023-11-30T11:01:37Z)
- With a Little Help from your own Past: Prototypical Memory Networks for Image Captioning [47.96387857237473]
We devise a network which can perform attention over activations obtained while processing other training samples.
Our memory models the distribution of past keys and values through the definition of prototype vectors.
We demonstrate that our proposal can increase the performance of an encoder-decoder Transformer by 3.7 CIDEr points, both when training with cross-entropy only and when fine-tuning with self-critical sequence training.
arXiv Detail & Related papers (2023-08-23T18:53:00Z)
- Sanity checks and improvements for patch visualisation in prototype-based image classification [0.0]
We perform an in-depth analysis of the visualisation methods implemented in two popular self-explaining models for visual classification based on prototypes.
We first show that such methods do not correctly identify the regions of interest inside the images and therefore do not reflect the model's behaviour.
We discuss the implications of our findings for other prototype-based models sharing the same visualisation method.
arXiv Detail & Related papers (2023-01-20T15:13:04Z)
- Interpretable Image Classification with Differentiable Prototypes Assignment [7.660883761395447]
We introduce ProtoPool, an interpretable image classification model with a pool of prototypes shared by the classes.
It is obtained by introducing a fully differentiable assignment of prototypes to particular classes.
We show that ProtoPool obtains state-of-the-art accuracy on the CUB-200-2011 and the Stanford Cars datasets, substantially reducing the number of prototypes.
arXiv Detail & Related papers (2021-12-06T10:03:32Z)
- Deformable ProtoPNet: An Interpretable Image Classifier Using Deformable Prototypes [7.8515366468594765]
We present a deformable part network (Deformable ProtoPNet) that integrates the power of deep learning and the interpretability of case-based reasoning.
This model classifies input images by comparing them with prototypes learned during training, yielding explanations in the form of "this looks like that".
arXiv Detail & Related papers (2021-11-29T22:38:13Z)
- Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in prototype feature space.
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on PASCAL-5i and COCO-20i datasets.
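The stated idea, larger inter-class and smaller intra-class distances in prototype space, can be sketched as a generic margin-style loss. The function below is an illustrative assumption, not the paper's actual loss formulation, and all names and the margin value are placeholders.

```python
import numpy as np

def prototype_contrastive_loss(features, labels, prototypes, margin=1.0):
    """Generic sketch: an intra-class term pulls each feature toward its
    class prototype, while an inter-class term pushes distinct prototypes
    at least `margin` apart (hinge on the pairwise distance).

    features   : (N, D) embeddings
    labels     : (N,) integer class ids
    prototypes : (C, D) one prototype per class
    """
    # Intra-class: mean squared distance to the feature's own prototype.
    intra = np.mean(np.sum((features - prototypes[labels]) ** 2, axis=1))

    # Inter-class: hinge loss on distances between distinct prototypes.
    diff = prototypes[:, None, :] - prototypes[None, :, :]
    dist = np.sqrt(np.sum(diff ** 2, axis=2))
    mask = ~np.eye(len(prototypes), dtype=bool)
    inter = np.mean(np.maximum(0.0, margin - dist[mask]) ** 2)

    return intra + inter
```

When features sit exactly on their prototypes and the prototypes are already farther apart than the margin, both terms vanish; prototypes closer than the margin incur a penalty that pushes them apart.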
arXiv Detail & Related papers (2021-11-09T08:14:50Z)
- Attentional Prototype Inference for Few-Shot Segmentation [128.45753577331422]
We propose attentional prototype inference (API), a probabilistic latent variable framework for few-shot segmentation.
We define a global latent variable to represent the prototype of each object category, which we model as a probabilistic distribution.
We conduct extensive experiments on four benchmarks, where our proposal obtains at least competitive and often better performance than state-of-the-art prototype-based methods.
arXiv Detail & Related papers (2021-05-14T06:58:44Z)
- SCNet: Enhancing Few-Shot Semantic Segmentation by Self-Contrastive Background Prototypes [56.387647750094466]
Few-shot semantic segmentation aims to segment novel-class objects in a query image with only a few annotated examples.
Most advanced solutions exploit a metric learning framework that performs segmentation by matching each pixel to a learned foreground prototype.
This framework suffers from biased classification because sample pairs are constructed with the foreground prototype only.
arXiv Detail & Related papers (2021-04-19T11:21:47Z)
- Prototypical Representation Learning for Relation Extraction [56.501332067073065]
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models.
arXiv Detail & Related papers (2021-03-22T08:11:43Z)
- Part-aware Prototype Network for Few-shot Semantic Segmentation [50.581647306020095]
We propose a novel few-shot semantic segmentation framework based on the prototype representation.
Our key idea is to decompose the holistic class representation into a set of part-aware prototypes.
We develop a novel graph neural network model to generate and enhance the proposed part-aware prototypes.
arXiv Detail & Related papers (2020-07-13T11:03:09Z)
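As a generic illustration of decomposing a holistic class representation into part-aware prototypes, the sketch below clusters a class's support features with plain k-means and labels query features by their nearest part prototype. The paper's graph-neural-network refinement is omitted, and all names and the deterministic initialisation are illustrative assumptions.

```python
import numpy as np

def part_prototypes(support_feats, k=3, iters=10):
    """Decompose a class's support features (N, D) into k part-aware
    prototypes using plain k-means (Lloyd's algorithm), initialised
    from the first k features for determinism."""
    centers = support_feats[:k].copy()
    for _ in range(iters):
        # Distance of every feature to every current center: (N, k).
        d = np.linalg.norm(support_feats[:, None, :] - centers[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            members = support_feats[assign == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers

def segment(query_feats, centers):
    """Label each query feature by its nearest part prototype."""
    d = np.linalg.norm(query_feats[:, None, :] - centers[None, :, :], axis=2)
    return d.argmin(axis=1)
```

Matching query features against several part prototypes instead of one holistic class prototype is the decomposition idea the abstract describes; the clustering step here merely stands in for the paper's learned decomposition.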
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.