Interpretability Benchmark for Evaluating Spatial Misalignment of
Prototypical Parts Explanations
- URL: http://arxiv.org/abs/2308.08162v1
- Date: Wed, 16 Aug 2023 06:09:51 GMT
- Title: Interpretability Benchmark for Evaluating Spatial Misalignment of
Prototypical Parts Explanations
- Authors: Mikołaj Sacha, Bartosz Jura, Dawid Rymarczyk, Łukasz Struski,
Jacek Tabor, Bartosz Zieliński
- Abstract summary: We name this undesired behavior a spatial explanation misalignment.
We propose a method for misalignment compensation and apply it to existing state-of-the-art models.
- Score: 13.111196926104485
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prototypical parts-based networks are becoming increasingly popular due to
their faithful self-explanations. However, their similarity maps are calculated
in the penultimate network layer. Therefore, the receptive field of the
prototype activation region often depends on parts of the image outside this
region, which can lead to misleading interpretations. We name this undesired
behavior a spatial explanation misalignment and introduce an interpretability
benchmark with a set of dedicated metrics for quantifying this phenomenon. In
addition, we propose a method for misalignment compensation and apply it to
existing state-of-the-art models. We show the expressiveness of our benchmark
and the effectiveness of the proposed compensation methodology through
extensive empirical studies.
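To make the receptive-field issue concrete, here is a minimal sketch (hypothetical code, not from the paper) of how large an input patch a single cell of a penultimate-layer similarity map actually sees, for an assumed small convolutional stack:

```python
# A prototype's similarity map lives on the penultimate feature grid, so the
# activation region shown to the user corresponds to an input patch that can
# be much larger than the highlighted area, which is the source of the
# spatial explanation misalignment described above.

def receptive_field(layers):
    """Input-pixel receptive field of one output cell.

    `layers` is a list of (kernel_size, stride) pairs, applied in order.
    """
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump  # each layer widens the field by (k-1) input steps
        jump *= s             # stride compounds the step size between cells
    return rf

# An assumed small VGG-like stack: 3x3 convs with 2x2 pooling after each pair.
stack = [(3, 1), (3, 1), (2, 2), (3, 1), (3, 1), (2, 2)]
print(receptive_field(stack))  # one feature cell sees a 16x16 input patch
```

Even this shallow stack maps a single feature cell to a 16-pixel-wide input region; deeper backbones used by prototypical-parts models widen it much further.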
Related papers
- Pitfalls of topology-aware image segmentation [81.19923502845441]
We identify critical pitfalls in model evaluation that include inadequate connectivity choices, overlooked topological artifacts, and inappropriate use of evaluation metrics.
We propose a set of actionable recommendations to establish fair and robust evaluation standards for topology-aware medical image segmentation methods.
arXiv Detail & Related papers (2024-12-19T08:11:42Z)
- Decom-CAM: Tell Me What You See, In Details! Feature-Level Interpretation via Decomposition Class Activation Map [23.71680014689873]
Class Activation Map (CAM) is widely used to interpret deep model predictions by highlighting object location.
This paper proposes a new two-stage interpretability method called the Decomposition Class Activation Map (Decom-CAM)
Our experiments demonstrate that the proposed Decom-CAM outperforms current state-of-the-art methods significantly.
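For orientation, a plain CAM (the baseline Decom-CAM builds on, not the Decom-CAM method itself) can be sketched as a classifier-weight-weighted sum of the final convolutional feature maps; all array shapes below are illustrative assumptions:

```python
import numpy as np

def class_activation_map(features, fc_weights, class_idx):
    """features: (C, H, W) feature maps; fc_weights: (num_classes, C)."""
    # Weight each channel's feature map by the classifier weight for the class.
    cam = np.tensordot(fc_weights[class_idx], features, axes=([0], [0]))
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()  # normalize to [0, 1] for visualization
    return cam

rng = np.random.default_rng(0)
feats = rng.random((8, 7, 7))   # assumed final conv features
w = rng.random((10, 8))         # assumed linear classifier weights
cam = class_activation_map(feats, w, class_idx=3)
print(cam.shape)  # (7, 7) map, upsampled in practice to highlight the object
```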
arXiv Detail & Related papers (2023-05-27T14:33:01Z)
- Supervised Contrastive Learning with Heterogeneous Similarity for Distribution Shifts [3.7819322027528113]
We propose a new regularization using the supervised contrastive learning to prevent such overfitting and to train models that do not degrade their performance under the distribution shifts.
Experiments on benchmark datasets that emulate distribution shifts, including subpopulation shift and domain generalization, demonstrate the advantage of the proposed method.
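The heterogeneous-similarity regularizer itself is not reproduced here, but the standard supervised contrastive loss it extends can be sketched as follows (a minimal NumPy version for illustration):

```python
import numpy as np

def supervised_contrastive_loss(z, labels, temperature=0.1):
    """z: (N, D) L2-normalized embeddings; labels: (N,) class ids."""
    sim = z @ z.T / temperature
    np.fill_diagonal(sim, -np.inf)  # exclude self-pairs from the softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = (labels[:, None] == labels[None, :]) & ~np.eye(len(z), dtype=bool)
    # Average the log-probability over each anchor's same-class positives.
    per_anchor = [-log_prob[i, same[i]].mean()
                  for i in range(len(z)) if same[i].any()]
    return float(np.mean(per_anchor))

rng = np.random.default_rng(1)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
labels = np.array([0, 0, 1, 1, 2, 2, 3, 3])
print(supervised_contrastive_loss(z, labels))  # a non-negative scalar
```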
arXiv Detail & Related papers (2023-04-07T01:45:09Z)
- Unsupervised Interpretable Basis Extraction for Concept-Based Visual Explanations [53.973055975918655]
We show that intermediate-layer representations become more interpretable when transformed to the bases extracted with our method.
We compare the bases extracted with our method against bases derived with a supervised approach and find that, in one aspect, the proposed unsupervised approach has a strength that the supervised one lacks; we also give potential directions for future research.
arXiv Detail & Related papers (2023-03-19T00:37:19Z)
- PNI: Industrial Anomaly Detection using Position and Neighborhood Information [6.316693022958221]
We propose a new algorithm, PNI, which estimates the normal distribution using conditional probability given neighborhood features.
We conducted experiments on the MVTec AD benchmark dataset and achieved state-of-the-art performance, with 99.56% and 98.98% AUROC scores in anomaly detection and localization, respectively.
arXiv Detail & Related papers (2022-11-22T23:45:27Z)
- Self-Supervised Training with Autoencoders for Visual Anomaly Detection [61.62861063776813]
We focus on a specific use case in anomaly detection where the distribution of normal samples is supported by a lower-dimensional manifold.
We adapt a self-supervised learning regime that exploits discriminative information during training but focuses on the submanifold of normal examples.
We achieve a new state-of-the-art result on the MVTec AD dataset -- a challenging benchmark for visual anomaly detection in the manufacturing domain.
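As an illustrative stand-in for the manifold idea (not the paper's self-supervised method), a reconstruction-error anomaly score can be sketched by fitting a low-dimensional linear model of the normal data and flagging inputs that reconstruct poorly:

```python
import numpy as np

def fit_subspace(normal, k):
    """Return the mean and top-k principal directions of normal training data."""
    mean = normal.mean(axis=0)
    _, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
    return mean, vt[:k]

def anomaly_score(x, mean, basis):
    """Squared reconstruction error after projecting onto the subspace."""
    centered = x - mean
    recon = centered @ basis.T @ basis
    return np.sum((centered - recon) ** 2, axis=-1)

# Synthetic normal data concentrated along a 1-D manifold in 2-D space.
rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [0.0, 0.1]])
mean, basis = fit_subspace(normal, k=1)
inlier = anomaly_score(np.array([[2.0, 0.0]]), mean, basis)
outlier = anomaly_score(np.array([[0.0, 5.0]]), mean, basis)
print(float(inlier[0]), float(outlier[0]))  # off-manifold point scores higher
```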
arXiv Detail & Related papers (2022-06-23T14:16:30Z)
- Deconfounding Scores: Feature Representations for Causal Effect Estimation with Weak Overlap [140.98628848491146]
We introduce deconfounding scores, which induce better overlap without biasing the target of estimation.
We show that deconfounding scores satisfy a zero-covariance condition that is identifiable in observed data.
In particular, we show that this technique could be an attractive alternative to standard regularizations.
arXiv Detail & Related papers (2021-04-12T18:50:11Z)
- Toward Scalable and Unified Example-based Explanation and Outlier Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks beyond similarity kernels deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
arXiv Detail & Related papers (2020-11-11T05:58:17Z)
- Explainable Deep Classification Models for Domain Generalization [94.43131722655617]
Explanations are defined as regions of visual evidence upon which a deep classification network makes a decision.
Our training strategy enforces a periodic saliency-based feedback to encourage the model to focus on the image regions that directly correspond to the ground-truth object.
arXiv Detail & Related papers (2020-03-13T22:22:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.