Ablation Path Saliency
- URL: http://arxiv.org/abs/2209.12459v2
- Date: Fri, 28 Apr 2023 16:39:55 GMT
- Title: Ablation Path Saliency
- Authors: Justus Sagemüller, Olivier Verdier
- Abstract summary: Several types of saliency methods have been proposed for explaining black-box classification.
We observe however that several of these methods can be seen as edge cases of a single, more general procedure.
We further demonstrate that ablation paths can be used directly as a technique in its own right.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Various types of saliency methods have been proposed for explaining black-box
classification. In image applications, this means highlighting the part of the
image that is most relevant for the current decision. Unfortunately, the
different methods may disagree and it can be hard to quantify how
representative and faithful the explanation really is. We observe however that
several of these methods can be seen as edge cases of a single, more general
procedure based on finding a particular path through the classifier's domain.
This offers additional geometric interpretation to the existing methods. We
demonstrate furthermore that ablation paths can be used directly as a technique
in its own right. This can compete with methods from the literature on existing
benchmarks, while giving more fine-grained information and better opportunities
for validating the explanations' faithfulness.
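The core idea admits a short sketch. Below is a minimal, hypothetical illustration (not the paper's exact construction): it walks a straight-line path from the input to a fully ablated baseline and records the class score along the way; the paper's method instead optimizes over such paths, of which familiar deletion/insertion-style procedures appear as edge cases. The function name, signature, and linear schedule are assumptions made for illustration.

```python
import numpy as np

def ablation_path_scores(f, x, baseline, n_steps=50):
    """Evaluate a classifier along a straight-line ablation path.

    f        : callable mapping an input array to a scalar class score
    x        : original input (e.g. an image as an ndarray)
    baseline : fully ablated reference (e.g. zeros or a blurred copy of x)
    """
    ts = np.linspace(0.0, 1.0, n_steps)
    scores = []
    for t in ts:
        x_t = (1.0 - t) * x + t * baseline  # one point on the path through the domain
        scores.append(f(x_t))
    return ts, np.asarray(scores)
```

Plotting the returned scores against ts gives the fine-grained view described above: rather than a single importance map, one sees at which stage of ablation the decision changes, which is also what makes the explanation's faithfulness easier to check.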
Related papers
- Rethinking Distance Metrics for Counterfactual Explainability [53.436414009687]
We investigate a framing for counterfactual generation methods that considers counterfactuals not as independent draws from a region around the reference, but as jointly sampled with the reference from the underlying data distribution.
We derive a distance metric, tailored for counterfactual similarity that can be applied to a broad range of settings.
arXiv Detail & Related papers (2024-10-18T15:06:50Z)
- Classification Metrics for Image Explanations: Towards Building Reliable XAI-Evaluations [0.24578723416255752]
Saliency methods provide (super-)pixelwise feature attribution scores for input images.
New evaluation metrics for saliency methods are developed and common saliency methods are benchmarked on ImageNet.
A scheme for reliability evaluation of such metrics is proposed that is based on concepts from psychometric testing.
arXiv Detail & Related papers (2024-06-07T16:37:50Z)
- Recent Advances in Scene Image Representation and Classification [1.8369974607582584]
We review the existing scene image representation methods that are being used widely for image classification.
We compare their performance both qualitatively (e.g., quality of outputs, pros and cons) and quantitatively (e.g., accuracy).
Overall, this survey provides in-depth insights and applications of recent scene image representation methods for traditional Computer Vision (CV)-based methods, Deep Learning (DL)-based methods, and Search Engine (SE)-based methods.
arXiv Detail & Related papers (2022-06-15T07:12:23Z)
- What You See is What You Classify: Black Box Attributions [61.998683569022006]
We train a deep network, the Explainer, to predict attributions for a pre-trained black-box classifier, the Explanandum.
Unlike most existing approaches, ours is capable of directly generating very distinct class-specific masks.
We show that our attributions are superior to established methods both visually and quantitatively.
arXiv Detail & Related papers (2022-05-23T12:30:04Z)
- Disentangling A Single MR Modality [15.801648254480487]
We present a novel framework that learns theoretically and practically superior disentanglement from single modality magnetic resonance images.
We propose a new information-based metric to quantitatively evaluate disentanglement.
arXiv Detail & Related papers (2022-05-10T15:40:12Z)
- Instance Similarity Learning for Unsupervised Feature Representation [83.31011038813459]
We propose an instance similarity learning (ISL) method for unsupervised feature representation.
We employ generative adversarial networks (GANs) to mine the underlying feature manifold.
Experiments on image classification demonstrate the superiority of our method compared with the state-of-the-art methods.
arXiv Detail & Related papers (2021-08-05T16:42:06Z)
- Combining Similarity and Adversarial Learning to Generate Visual Explanation: Application to Medical Image Classification [0.0]
We leverage a learning framework to produce our visual explanation method.
Using metrics from the literature, our method outperforms state-of-the-art approaches.
We validate our approach on a large chest X-ray database.
arXiv Detail & Related papers (2020-12-14T08:34:12Z)
- Visualization of Supervised and Self-Supervised Neural Networks via Attribution Guided Factorization [87.96102461221415]
We develop an algorithm that provides per-class explainability.
In an extensive battery of experiments, we demonstrate the ability of our method to produce class-specific visualizations.
arXiv Detail & Related papers (2020-12-03T18:48:39Z)
- Evaluating the Disentanglement of Deep Generative Models through Manifold Topology [66.06153115971732]
We present a method for quantifying disentanglement that only uses the generative model.
We empirically evaluate several state-of-the-art models across multiple datasets.
arXiv Detail & Related papers (2020-06-05T20:54:11Z)
- There and Back Again: Revisiting Backpropagation Saliency Methods [87.40330595283969]
Saliency methods seek to explain the predictions of a model by producing an importance map across each input sample.
A popular class of such methods is based on backpropagating a signal and analyzing the resulting gradient.
We propose a single framework under which several such methods can be unified.
arXiv Detail & Related papers (2020-04-06T17:58:08Z)
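As a concrete instance of the backpropagation-based class just described, the simplest member is vanilla gradient saliency: backpropagate the class score to the input and read per-pixel gradient magnitudes as the importance map. A minimal PyTorch-style sketch follows; the model interface and tensor shapes are illustrative assumptions, not taken from the paper.

```python
import torch

def gradient_saliency(model, x, target_class):
    """Vanilla gradient saliency map for a single image.

    model        : classifier returning logits of shape (1, n_classes)
    x            : input tensor of shape (1, C, H, W)
    target_class : index of the class to explain
    """
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]  # scalar logit for the target class
    score.backward()                   # fills x.grad with d(score)/d(x)
    return x.grad.abs().amax(dim=1)    # collapse channels -> (1, H, W) map
```

Variants in this class differ mainly in which signal is backpropagated and how the resulting gradient is aggregated, which is what a unifying framework can make explicit.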
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.