Learning Support and Trivial Prototypes for Interpretable Image
Classification
- URL: http://arxiv.org/abs/2301.04011v4
- Date: Sun, 22 Oct 2023 14:20:27 GMT
- Title: Learning Support and Trivial Prototypes for Interpretable Image
Classification
- Authors: Chong Wang, Yuyuan Liu, Yuanhong Chen, Fengbei Liu, Yu Tian, Davis J.
McCarthy, Helen Frazer, Gustavo Carneiro
- Abstract summary: Prototypical part network (ProtoPNet) methods have been designed to achieve interpretable classification.
We aim to improve the classification of ProtoPNet with a new method to learn support prototypes that lie near the classification boundary in the feature space.
- Score: 19.00622056840535
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Prototypical part network (ProtoPNet) methods have been designed to achieve
interpretable classification by associating predictions with a set of training
prototypes, which we refer to as trivial prototypes because they are trained to
lie far from the classification boundary in the feature space. Note that it is
possible to make an analogy between ProtoPNet and support vector machine (SVM)
given that the classification from both methods relies on computing similarity
with a set of training points (i.e., trivial prototypes in ProtoPNet, and
support vectors in SVM). However, while trivial prototypes are located far from
the classification boundary, support vectors are located close to this
boundary, and we argue that this discrepancy with the well-established SVM
theory can result in ProtoPNet models with inferior classification accuracy. In
this paper, we aim to improve the classification of ProtoPNet with a new method
to learn support prototypes that lie near the classification boundary in the
feature space, as suggested by SVM theory. In addition, we propose a new model,
named ST-ProtoPNet, which exploits both the support and trivial prototypes for
more effective classification. Experimental results on CUB-200-2011, Stanford
Cars, and Stanford Dogs datasets demonstrate that ST-ProtoPNet achieves
state-of-the-art classification accuracy and interpretability results. We also
show that the proposed support prototypes tend to be better localised in the
object of interest rather than in the background region.
Related papers
- An Enhanced Federated Prototype Learning Method under Domain Shift [36.73020712815063]
Federated Learning (FL) allows collaborative machine learning training without sharing private data.
The paper introduces variance-aware dual-level prototype clustering and a novel $\alpha$-sparsity prototype loss.
Evaluations on the Digit-5, Office-10, and DomainNet datasets show that our method performs better than existing approaches.
arXiv Detail & Related papers (2024-09-27T09:28:27Z)
- Multi-Scale Grouped Prototypes for Interpretable Semantic Segmentation [7.372346036256517]
Prototypical part learning is emerging as a promising approach for making semantic segmentation interpretable.
We propose a method for interpretable semantic segmentation that leverages multi-scale image representation for prototypical part learning.
Experiments conducted on Pascal VOC, Cityscapes, and ADE20K demonstrate that the proposed method increases model sparsity, improves interpretability over existing prototype-based methods, and narrows the performance gap with the non-interpretable counterpart models.
arXiv Detail & Related papers (2024-09-14T17:52:59Z)
- Rethinking Few-shot 3D Point Cloud Semantic Segmentation [62.80639841429669]
This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS).
We focus on two significant issues in the state-of-the-art: foreground leakage and sparse point distribution.
To address these issues, we introduce a standardized FS-PCS setting, upon which a new benchmark is built.
arXiv Detail & Related papers (2024-03-01T15:14:47Z)
- Rethinking Semantic Segmentation: A Prototype View [126.59244185849838]
We present a nonparametric semantic segmentation model based on non-learnable prototypes.
Our framework yields compelling results over several datasets.
We expect this work will provoke a rethink of the current de facto semantic segmentation model design.
arXiv Detail & Related papers (2022-03-28T21:15:32Z)
- Dual Prototypical Contrastive Learning for Few-shot Semantic Segmentation [55.339405417090084]
We propose a dual prototypical contrastive learning approach tailored to the few-shot semantic segmentation (FSS) task.
The main idea is to make the prototypes more discriminative by increasing inter-class distance while reducing intra-class distance in the prototype feature space (a generic sketch of this kind of objective follows the list below).
We demonstrate that the proposed dual contrastive learning approach outperforms state-of-the-art FSS methods on PASCAL-5i and COCO-20i datasets.
arXiv Detail & Related papers (2021-11-09T08:14:50Z)
- LFD-ProtoNet: Prototypical Network Based on Local Fisher Discriminant Analysis for Few-shot Learning [98.64231310584614]
The prototypical network (ProtoNet) is a few-shot learning framework that performs metric learning and classification using the distance to prototype representations of each class.
We show the usefulness of the proposed method by theoretically providing an expected risk bound and empirically demonstrating its superior classification accuracy on miniImageNet and tieredImageNet.
arXiv Detail & Related papers (2020-06-15T11:56:30Z)
- Fine-Grained Visual Classification with Efficient End-to-end Localization [49.9887676289364]
We present an efficient localization module that can be fused with a classification network in an end-to-end setup.
We evaluate the new model on the three benchmark datasets CUB200-2011, Stanford Cars and FGVC-Aircraft.
arXiv Detail & Related papers (2020-05-11T14:07:06Z)
- Prototypical Contrastive Learning of Unsupervised Representations [171.3046900127166]
Prototypical Contrastive Learning (PCL) is an unsupervised representation learning method.
PCL implicitly encodes semantic structures of the data into the learned embedding space.
PCL outperforms state-of-the-art instance-wise contrastive learning methods on multiple benchmarks.
arXiv Detail & Related papers (2020-05-11T09:53:36Z)
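Several of the entries above rely on the same basic ingredient: prototypes that are pulled toward same-class features and pushed apart across classes. The snippet below is a generic sketch of such an inter-/intra-class prototype objective; the margin, squared Euclidean distances, and equal term weighting are assumptions for illustration, not any listed paper's exact loss.

```python
import torch
import torch.nn.functional as F


def prototype_contrastive_loss(feats: torch.Tensor, labels: torch.Tensor,
                               prototypes: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Generic inter-/intra-class prototype objective (illustrative only).

    feats: (N, D) embeddings, labels: (N,) class ids, prototypes: (C, D).
    The intra-class term pulls each embedding toward its class prototype; the
    inter-class term pushes distinct prototypes at least `margin` apart.
    """
    # intra-class: squared distance from each feature to its own class prototype
    intra = (feats - prototypes[labels]).pow(2).sum(dim=-1).mean()

    # inter-class: hinge on pairwise distances between different prototypes
    dists = torch.cdist(prototypes, prototypes)                # (C, C)
    off_diag = ~torch.eye(len(prototypes), dtype=torch.bool)
    inter = F.relu(margin - dists[off_diag]).mean()

    return intra + inter


if __name__ == "__main__":
    feats = torch.randn(32, 64)
    labels = torch.randint(0, 5, (32,))
    protos = torch.randn(5, 64, requires_grad=True)
    loss = prototype_contrastive_loss(feats, labels, protos)
    loss.backward()
    print(float(loss))
```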