ProtoPShare: Prototype Sharing for Interpretable Image Classification and Similarity Discovery
- URL: http://arxiv.org/abs/2011.14340v1
- Date: Sun, 29 Nov 2020 11:23:05 GMT
- Title: ProtoPShare: Prototype Sharing for Interpretable Image Classification and Similarity Discovery
- Authors: Dawid Rymarczyk, Łukasz Struski, Jacek Tabor, Bartosz Zieliński
- Score: 9.36640530008137
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce ProtoPShare, a self-explained method that
incorporates the paradigm of prototypical parts to explain its predictions. The
main novelty of ProtoPShare is its ability to efficiently share prototypical
parts between the classes thanks to our data-dependent merge-pruning. Moreover,
the prototypes are more consistent and the model is more robust to image
perturbations than the state-of-the-art method ProtoPNet.
We verify our findings on two datasets, CUB-200-2011 and Stanford Cars.
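The data-dependent merge-pruning idea, ranking prototype pairs by how similarly they respond to training data and merging the closest pairs even across classes, can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the cosine-similarity criterion over activation patterns, the averaging merge rule, and all names (`merge_prune`, `activations`) are assumptions made for this sketch.

```python
import numpy as np

def merge_prune(prototypes, activations, num_to_merge):
    """Greedy data-dependent merge-pruning sketch.

    prototypes  : (P, D) array of prototype vectors.
    activations : (P, N) array - activation of each prototype on N
                  training patches, used as the data-dependent
                  similarity signal.
    num_to_merge: number of merge steps to perform.
    """
    protos = [prototypes[i] for i in range(len(prototypes))]
    acts = [activations[i] for i in range(len(activations))]
    for _ in range(num_to_merge):
        # find the pair whose activation patterns are most similar
        best, best_sim = None, -np.inf
        for i in range(len(protos)):
            for j in range(i + 1, len(protos)):
                a, b = acts[i], acts[j]
                sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
                if sim > best_sim:
                    best, best_sim = (i, j), sim
        i, j = best
        # merge: keep the averaged prototype, drop the redundant one
        protos[i] = (protos[i] + protos[j]) / 2
        acts[i] = (acts[i] + acts[j]) / 2
        del protos[j], acts[j]
    return np.stack(protos)
```

Each merge step reduces the prototype pool by one, which is how sharing a single prototypical part between classes can shrink the model without retraining from scratch.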
Related papers
- Mind the Gap Between Prototypes and Images in Cross-domain Finetuning [64.97317635355124]
We propose contrastive prototype-image adaptation (CoPA), which adapts different transformations for prototypes and images respectively.
Experiments on Meta-Dataset demonstrate that CoPA achieves state-of-the-art performance more efficiently.
arXiv Detail & Related papers (2024-10-16T11:42:11Z)
- Sparse Prototype Network for Explainable Pedestrian Behavior Prediction [60.80524827122901]
We present Sparse Prototype Network (SPN), an explainable method designed to simultaneously predict a pedestrian's future action, trajectory, and pose.
Regularized by mono-semanticity and clustering constraints, the prototypes learn consistent and human-understandable features.
arXiv Detail & Related papers (2024-10-16T03:33:40Z)
- Multi-Scale Grouped Prototypes for Interpretable Semantic Segmentation [7.372346036256517]
Prototypical part learning is emerging as a promising approach for making semantic segmentation interpretable.
We propose a method for interpretable semantic segmentation that leverages multi-scale image representation for prototypical part learning.
Experiments conducted on Pascal VOC, Cityscapes, and ADE20K demonstrate that the proposed method increases model sparsity, improves interpretability over existing prototype-based methods, and narrows the performance gap with the non-interpretable counterpart models.
arXiv Detail & Related papers (2024-09-14T17:52:59Z)
- This Looks Better than That: Better Interpretable Models with ProtoPNeXt [14.28283868577614]
Prototypical-part models are a popular interpretable alternative to black-box deep learning models for computer vision.
We create a new framework for integrating components of prototypical-part models -- ProtoPNeXt.
arXiv Detail & Related papers (2024-06-20T18:54:27Z)
- ProtoArgNet: Interpretable Image Classification with Super-Prototypes and Argumentation [Technical Report] [17.223442899324482]
ProtoArgNet is a novel interpretable deep neural architecture for image classification in the spirit of prototypical-part-learning.
ProtoArgNet uses super-prototypes that combine prototypical-parts into a unified class representation.
We demonstrate on several datasets that ProtoArgNet outperforms state-of-the-art prototypical-part-learning approaches.
arXiv Detail & Related papers (2023-11-26T21:52:47Z)
- Sanity checks and improvements for patch visualisation in prototype-based image classification [0.0]
We perform an in-depth analysis of the visualisation methods implemented in two popular self-explaining models for visual classification based on prototypes.
We first show that such methods do not correctly identify the regions of interest inside the images and therefore do not reflect the model's behaviour.
We discuss the implications of our findings for other prototype-based models sharing the same visualisation method.
arXiv Detail & Related papers (2023-01-20T15:13:04Z)
- Interpretable Image Classification with Differentiable Prototypes Assignment [7.660883761395447]
We introduce ProtoPool, an interpretable image classification model with a pool of prototypes shared by the classes.
It is obtained by introducing a fully differentiable assignment of prototypes to particular classes.
We show that ProtoPool obtains state-of-the-art accuracy on the CUB-200-2011 and the Stanford Cars datasets, substantially reducing the number of prototypes.
arXiv Detail & Related papers (2021-12-06T10:03:32Z)
- Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in prototype feature space.
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on PASCAL-5i and COCO-20i datasets.
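The inter-/intra-class distance objective summarized above can be sketched as a toy pairwise loss over prototype vectors. This margin-based formulation is illustrative only, not the paper's actual loss; the function name, the margin value, and the squared-distance penalties are assumptions.

```python
import numpy as np

def prototype_contrast_loss(prototypes, labels, margin=1.0):
    """Toy prototype-space contrastive objective (illustrative only).

    Penalizes large intra-class distances and inter-class distances
    that fall below a margin, averaged over all prototype pairs.
    """
    loss, pairs = 0.0, 0
    for i in range(len(prototypes)):
        for j in range(i + 1, len(prototypes)):
            d = np.linalg.norm(prototypes[i] - prototypes[j])
            if labels[i] == labels[j]:
                loss += d ** 2                      # pull same-class prototypes together
            else:
                loss += max(0.0, margin - d) ** 2   # push different classes apart
            pairs += 1
    return loss / pairs
```

Under this objective, a configuration with tight same-class clusters separated by more than the margin scores lower than one where classes are spread or interleaved.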
arXiv Detail & Related papers (2021-11-09T08:14:50Z)
- Attentional Prototype Inference for Few-Shot Segmentation [128.45753577331422]
We propose attentional prototype inference (API), a probabilistic latent variable framework for few-shot segmentation.
We define a global latent variable to represent the prototype of each object category, which we model as a probabilistic distribution.
We conduct extensive experiments on four benchmarks, where our proposal obtains at least competitive and often better performance than state-of-the-art prototype-based methods.
arXiv Detail & Related papers (2021-05-14T06:58:44Z)
- Prototypical Representation Learning for Relation Extraction [56.501332067073065]
This paper aims to learn predictive, interpretable, and robust relation representations from distantly-labeled data.
We learn prototypes for each relation from contextual information to best explore the intrinsic semantics of relations.
Results on several relation learning tasks show that our model significantly outperforms the previous state-of-the-art relational models.
arXiv Detail & Related papers (2021-03-22T08:11:43Z)
- Part-aware Prototype Network for Few-shot Semantic Segmentation [50.581647306020095]
We propose a novel few-shot semantic segmentation framework based on the prototype representation.
Our key idea is to decompose the holistic class representation into a set of part-aware prototypes.
We develop a novel graph neural network model to generate and enhance the proposed part-aware prototypes.
arXiv Detail & Related papers (2020-07-13T11:03:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.