This actually looks like that: Proto-BagNets for local and global interpretability-by-design
- URL: http://arxiv.org/abs/2406.15168v2
- Date: Mon, 24 Jun 2024 08:13:07 GMT
- Title: This actually looks like that: Proto-BagNets for local and global interpretability-by-design
- Authors: Kerol Djoumessi, Bubacarr Bah, Laura Kühlewein, Philipp Berens, Lisa Koch
- Abstract summary: Interpretability is a key requirement for the use of machine learning models in high-stakes applications.
We introduce Proto-BagNets, an interpretable-by-design prototype-based model.
Proto-BagNet provides faithful, accurate, and clinically meaningful local and global explanations.
- Score: 5.037593461859481
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interpretability is a key requirement for the use of machine learning models in high-stakes applications, including medical diagnosis. Explaining black-box models mostly relies on post-hoc methods that do not faithfully reflect the model's behavior. As a remedy, prototype-based networks have been proposed, but their interpretability is limited as they have been shown to provide coarse, unreliable, and imprecise explanations. In this work, we introduce Proto-BagNets, an interpretable-by-design prototype-based model that combines the advantages of bag-of-local feature models and prototype learning to provide meaningful, coherent, and relevant prototypical parts needed for accurate and interpretable image classification tasks. We evaluated the Proto-BagNet for drusen detection on publicly available retinal OCT data. The Proto-BagNet performed comparably to the state-of-the-art interpretable and non-interpretable models while providing faithful, accurate, and clinically meaningful local and global explanations. The code is available at https://github.com/kdjoumessi/Proto-BagNets.
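As a rough illustration of how a bag-of-local-features backbone and prototype learning can be combined, the following PyTorch sketch (illustrative only, not the authors' released implementation; the names ProtoBagNetSketch, proto_dim, and num_prototypes are assumptions) scores each local feature against learnable prototypes and pools the resulting similarity maps into class logits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProtoBagNetSketch(nn.Module):
    """Illustrative sketch: prototype matching on top of a bag-of-local-features
    backbone (small receptive fields), roughly in the spirit of Proto-BagNets.
    The backbone, prototype count, and pooling choices here are assumptions."""

    def __init__(self, backbone: nn.Module, proto_dim: int = 128,
                 num_prototypes: int = 10, num_classes: int = 2):
        super().__init__()
        self.backbone = backbone  # e.g. a BagNet-style CNN with limited receptive field
        # Learnable prototype vectors living in the local-feature space.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, proto_dim))
        # Linear layer mapping prototype evidence to class logits.
        self.classifier = nn.Linear(num_prototypes, num_classes, bias=False)

    def forward(self, x: torch.Tensor):
        # Local feature map: (B, proto_dim, H, W); each position covers a small patch.
        feats = self.backbone(x)
        feats = F.normalize(feats, dim=1)
        protos = F.normalize(self.prototypes, dim=1)
        # Cosine similarity of every spatial location to every prototype: (B, P, H, W).
        sim_maps = torch.einsum("bdhw,pd->bphw", feats, protos)
        # Global max pooling: per-image evidence that each prototype appears somewhere.
        evidence = sim_maps.amax(dim=(2, 3))      # (B, P)
        logits = self.classifier(evidence)        # (B, num_classes)
        # sim_maps double as local explanations ("where does this prototype fire?").
        return logits, sim_maps
```

Because every spatial position of the backbone's feature map corresponds to a small image patch, the similarity maps can be upsampled to the input resolution to localize where each prototype fires, which is the kind of local explanation described in the abstract; the repository linked above should be consulted for the actual architecture.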
Related papers
- Sparse Prototype Network for Explainable Pedestrian Behavior Prediction [60.80524827122901]
We present Sparse Prototype Network (SPN), an explainable method designed to simultaneously predict a pedestrian's future action, trajectory, and pose.
Regularized by mono-semanticity and clustering constraints, the prototypes learn consistent and human-understandable features.
arXiv Detail & Related papers (2024-10-16T03:33:40Z)
- This Looks Better than That: Better Interpretable Models with ProtoPNeXt [14.28283868577614]
Prototypical-part models are a popular interpretable alternative to black-box deep learning models for computer vision.
We create a new framework for integrating components of prototypical-part models -- ProtoPNeXt.
arXiv Detail & Related papers (2024-06-20T18:54:27Z)
- ProtoS-ViT: Visual foundation models for sparse self-explainable classifications [0.6249768559720122]
This work demonstrates how frozen pre-trained ViT backbones can be effectively turned into prototypical models.
ProtoS-ViT surpasses existing prototypical models, showing strong performance in terms of accuracy, compactness, and explainability.
arXiv Detail & Related papers (2024-06-14T13:36:30Z)
- Part-Based Models Improve Adversarial Robustness [57.699029966800644]
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks.
Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to segment objects into parts and classify them simultaneously.
Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations.
arXiv Detail & Related papers (2022-09-15T15:41:47Z)
- Concept-level Debugging of Part-Prototype Networks [27.700043183428807]
Part-prototype Networks (ProtoPNets) are concept-based classifiers designed to achieve the same performance as black-box models without compromising transparency.
We propose a concept-level debugger for ProtoPNets in which a human supervisor supplies feedback in the form of what part-prototypes must be forgotten or kept.
arXiv Detail & Related papers (2022-05-31T13:18:51Z)
- Rethinking Semantic Segmentation: A Prototype View [126.59244185849838]
We present a nonparametric semantic segmentation model based on non-learnable prototypes.
Our framework yields compelling results over several datasets.
We expect this work will provoke a rethink of the current de facto semantic segmentation model design.
arXiv Detail & Related papers (2022-03-28T21:15:32Z)
- Explain, Edit, and Understand: Rethinking User Study Design for Evaluating Model Explanations [97.91630330328815]
We conduct a crowdsourcing study, where participants interact with deception detection models that have been trained to distinguish between genuine and fake hotel reviews.
We observe that for a linear bag-of-words model, participants with access to the feature coefficients during training are able to cause a larger reduction in model confidence in the testing phase when compared to the no-explanation control.
arXiv Detail & Related papers (2021-12-17T18:29:56Z)
- Deformable ProtoPNet: An Interpretable Image Classifier Using Deformable Prototypes [7.8515366468594765]
We present a deformable part network (Deformable ProtoPNet) that integrates the power of deep learning and the interpretability of case-based reasoning.
This model classifies input images by comparing them with prototypes learned during training, yielding explanations in the form of "this looks like that".
arXiv Detail & Related papers (2021-11-29T22:38:13Z)
- Explanation-Guided Training for Cross-Domain Few-Shot Classification [96.12873073444091]
Cross-domain few-shot classification task (CD-FSC) combines few-shot classification with the requirement to generalize across domains represented by datasets.
We introduce a novel training approach for existing FSC models.
We show that explanation-guided training effectively improves the model generalization.
arXiv Detail & Related papers (2020-07-17T07:28:08Z)
- ProtoryNet - Interpretable Text Classification Via Prototype Trajectories [4.768286204382179]
We propose a novel interpretable deep neural network for text classification, called ProtoryNet.
ProtoryNet makes a prediction by finding the most similar prototype for each sentence in a text sequence.
After prototype pruning, the resulting ProtoryNet models need only around 20 or fewer prototypes across all datasets.
arXiv Detail & Related papers (2020-07-03T16:00:26Z)
- LFD-ProtoNet: Prototypical Network Based on Local Fisher Discriminant Analysis for Few-shot Learning [98.64231310584614]
The prototypical network (ProtoNet) is a few-shot learning framework that performs metric learning and classification using the distance to prototype representations of each class; a minimal sketch of this distance-based step appears after this list.
We show the usefulness of the proposed method by theoretically providing an expected risk bound and empirically demonstrating its superior classification accuracy on miniImageNet and tieredImageNet.
arXiv Detail & Related papers (2020-06-15T11:56:30Z)
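Several of the papers above rest on the prototype-distance classification step used by prototypical networks (the base framework of LFD-ProtoNet). The sketch below is a minimal, self-contained illustration; the embedding dimension, shot counts, and the function name prototypical_classify are assumptions for this example, not values taken from any of the listed papers.

```python
import torch
import torch.nn.functional as F

def prototypical_classify(support_embeds, support_labels, query_embeds, num_classes):
    """Classify query embeddings by distance to class prototypes (the class means of
    support embeddings), the core step of prototypical-network few-shot learning.
    Shapes: support_embeds (N_support, D), support_labels (N_support,),
    query_embeds (N_query, D)."""
    # Prototype for each class = mean embedding of its support examples.
    prototypes = torch.stack([
        support_embeds[support_labels == c].mean(dim=0) for c in range(num_classes)
    ])                                                   # (num_classes, D)
    # Squared Euclidean distance from every query to every prototype.
    dists = torch.cdist(query_embeds, prototypes) ** 2   # (N_query, num_classes)
    # Smaller distance -> larger logit; softmax gives class probabilities.
    return F.softmax(-dists, dim=1)

# Toy usage with random embeddings (illustrative only).
if __name__ == "__main__":
    torch.manual_seed(0)
    support = torch.randn(10, 64)                 # 5 classes x 2 shots, 64-dim embeddings
    labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3, 4, 4])
    queries = torch.randn(3, 64)
    probs = prototypical_classify(support, labels, queries, num_classes=5)
    print(probs.argmax(dim=1))                    # predicted class per query
```

In this sketch each class prototype is simply the mean support embedding, so refinements such as LFD-style discriminant analysis or learned prototypes would change only how the embeddings are produced, not the distance-based classification shown here.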
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.