Now you see me! Attribution Distributions Reveal What is Truly Important for a Prediction
- URL: http://arxiv.org/abs/2503.07346v2
- Date: Mon, 27 Oct 2025 17:45:30 GMT
- Title: Now you see me! Attribution Distributions Reveal What is Truly Important for a Prediction
- Authors: Nils Philipp Walter, Jilles Vreeken, Jonas Fischer
- Abstract summary: Attribution methods have been developed to gain understanding into which input features neural networks use for a specific prediction. Here, we identify one cause for the lack of specificity in attributions as the computation of attribution of isolated logits. By computing probability distributions of attributions over classes for each spatial location in the image, we unleash the true capabilities of existing attribution methods.
- Score: 40.04908502564302
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Neural networks are regularly employed in high-stakes decision-making, where understanding and transparency are key. Attribution methods have been developed to gain understanding into which input features neural networks use for a specific prediction. Although widely used in computer vision, these methods often result in unspecific saliency maps that fail to identify the relevant information that led to a decision, a shortcoming supported by results on different benchmarks. Here, we revisit the common attribution pipeline and identify one cause for the lack of specificity in attributions as the computation of attribution of isolated logits. Instead, we suggest combining attributions of multiple class logits, in analogy to how the softmax combines the information across logits. By computing probability distributions of attributions over classes for each spatial location in the image, we unleash the true capabilities of existing attribution methods, revealing better object- and instance-specificity and uncovering discriminative as well as shared features between classes. On common benchmarks, including the grid-pointing game and randomization-based sanity checks, we show that this reconsideration of how and where we compute attributions across the network improves established attribution methods while staying agnostic to model architectures. We make the code publicly available: https://github.com/nilspwalter/var.
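To make the suggested pipeline change concrete, below is a minimal PyTorch sketch of the idea as described in the abstract: compute an attribution map per class logit and normalise the stack with a softmax over classes at every spatial location. The attribution method used here (plain gradient x input) and all names are illustrative assumptions, not taken from the released code.

```python
import torch
import torch.nn.functional as F

def class_attribution_distributions(model, image, class_indices):
    """Per-pixel probability distributions of attributions over classes.

    One attribution map is computed per class logit (here: gradient x
    input, but any attribution method could be plugged in), and a softmax
    over the class dimension turns the stack into a distribution at each
    spatial location, in analogy to how the softmax combines logits.
    """
    image = image.clone().requires_grad_(True)             # [1, 3, H, W]
    logits = model(image)                                   # [1, num_classes]

    maps = []
    for c in class_indices:
        grad = torch.autograd.grad(logits[0, c], image, retain_graph=True)[0]
        maps.append((grad * image).sum(dim=1).squeeze(0))   # [H, W]

    attributions = torch.stack(maps)                        # [C, H, W]
    return F.softmax(attributions, dim=0).detach()          # distribution per pixel
```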
Related papers
- SMOL-MapSeg: Show Me One Label [0.4499833362998489]
We show that SMOL-MapSeg can accurately segment classes defined by OND knowledge. It can also adapt to unseen classes through few-shot fine-tuning. It outperforms a UNet-based baseline in average segmentation performance.
arXiv Detail & Related papers (2025-08-07T15:36:17Z) - Aggregating Local Saliency Maps for Semi-Global Explainable Image Classification [0.0]
Deep learning dominates image classification tasks, yet understanding how models arrive at predictions remains a challenge. Much research focuses on local explanations of individual predictions, such as saliency maps, which visualise the influence of specific pixels on a model's prediction. We propose Segment Attribution Tables (SATs), a method for summarising local saliency explanations into (semi-)global insights.
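A rough sketch of the aggregation step (a hypothetical helper; the statistic and grouping actually used by SATs may differ): average the saliency within each segment of an image, then collect such rows across images into a table.

```python
import numpy as np

def segment_attribution_row(saliency, segments):
    """One table row for a single image: mean saliency per segment.

    `saliency` is an [H, W] attribution map, `segments` an [H, W] array of
    segment ids. Rows from many images (e.g., grouped by predicted class)
    can then be aggregated into a (semi-)global summary table.
    """
    return {int(s): float(saliency[segments == s].mean())
            for s in np.unique(segments)}
```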
arXiv Detail & Related papers (2025-06-29T14:11:02Z) - Visual-TCAV: Concept-based Attribution and Saliency Maps for Post-hoc Explainability in Image Classification [3.9626211140865464]
Convolutional Neural Networks (CNNs) have seen significant performance improvements in recent years.
However, due to their size and complexity, they function as black-boxes, leading to transparency concerns.
This paper introduces a novel post-hoc explainability framework, Visual-TCAV, which aims to bridge the gap between saliency-based and concept-based explanation methods.
arXiv Detail & Related papers (2024-11-08T16:52:52Z) - Attri-Net: A Globally and Locally Inherently Interpretable Model for Multi-Label Classification Using Class-Specific Counterfactuals [4.384272169863716]
Interpretability is crucial for machine learning algorithms in high-stakes medical applications.
Attri-Net is an inherently interpretable model for multi-label classification that provides local and global explanations.
arXiv Detail & Related papers (2024-06-08T13:52:02Z) - Interpretable Network Visualizations: A Human-in-the-Loop Approach for Post-hoc Explainability of CNN-based Image Classification [5.087579454836169]
State-of-the-art explainability methods generate saliency maps to show where a specific class is identified.
We introduce a post-hoc method that explains the entire feature extraction process of a Convolutional Neural Network.
We also show an approach to generate global explanations by aggregating labels across multiple images.
arXiv Detail & Related papers (2024-05-06T09:21:35Z) - A Fixed-Point Approach to Unified Prompt-Based Counting [51.20608895374113]
This paper aims to establish a comprehensive prompt-based counting framework capable of generating density maps for objects indicated by various prompt types, such as box, point, and text.
Our model excels in prominent class-agnostic datasets and exhibits superior performance in cross-dataset adaptation tasks.
arXiv Detail & Related papers (2024-03-15T12:05:44Z) - Mapping High-level Semantic Regions in Indoor Environments without Object Recognition [50.624970503498226]
The present work proposes a method for semantic region mapping via embodied navigation in indoor environments.
To enable region identification, the method uses a vision-to-language model to provide scene information for mapping.
By projecting egocentric scene understanding into the global frame, the proposed method generates a semantic map as a distribution over possible region labels at each location.
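The "distribution over possible region labels at each location" can be pictured with a small accumulator; `cell_indices` (from projecting the egocentric view into the global frame) and `label_probs` (from the vision-to-language model) are assumed to be computed elsewhere, and all names are illustrative.

```python
import numpy as np

def update_region_map(counts, cell_indices, label_probs):
    """Accumulate one observation into a global map of label distributions.

    counts: [H, W, num_labels] array of accumulated label probabilities.
    cell_indices: iterable of (row, col) grid cells covered by the view.
    label_probs: [num_labels] probability vector for this observation.
    """
    for r, c in cell_indices:
        counts[r, c] += label_probs
    totals = counts.sum(axis=-1, keepdims=True)
    # normalised copy: a probability distribution over labels per cell
    return np.divide(counts, totals, out=np.zeros_like(counts), where=totals > 0)
```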
arXiv Detail & Related papers (2024-03-11T18:09:50Z) - GNN-LoFI: a Novel Graph Neural Network through Localized Feature-based Histogram Intersection [51.608147732998994]
Graph neural networks are increasingly becoming the framework of choice for graph-based machine learning.
We propose a new graph neural network architecture that substitutes classical message passing with an analysis of the local distribution of node features.
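The building block named in the title can be sketched as follows: describe each node by a histogram of the features in its local neighbourhood and compare histograms with the intersection kernel (sum of element-wise minima). How GNN-LoFI stacks this into full layers is not reproduced here; names and the scalar-feature simplification are assumptions.

```python
import numpy as np

def neighbourhood_histogram(features, neighbourhood, bins):
    """Normalised histogram of (scalar) node features in a neighbourhood."""
    hist, _ = np.histogram(features[list(neighbourhood)], bins=bins)
    return hist / max(hist.sum(), 1)

def histogram_intersection(h1, h2):
    """Histogram intersection kernel: total mass shared by two histograms."""
    return float(np.minimum(h1, h2).sum())
```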
arXiv Detail & Related papers (2024-01-17T13:04:23Z) - On the verification of Embeddings using Hybrid Markov Logic [2.113770213797994]
We propose a framework to verify complex properties of a learned representation.
We present an approach to learn parameters for the properties within this framework.
We illustrate verification in Graph Neural Networks, Deep Knowledge Tracing and Intelligent Tutoring Systems.
arXiv Detail & Related papers (2023-12-13T17:04:09Z) - Neural Map Prior for Autonomous Driving [17.198729798817094]
High-definition (HD) semantic maps are crucial in enabling autonomous vehicles to navigate urban environments.
The traditional method of creating offline HD maps involves a labor-intensive manual annotation process.
Recent studies have proposed an alternative approach that generates local maps using online sensor observations.
In this study, we propose Neural Map Prior (NMP), a neural representation of global maps.
arXiv Detail & Related papers (2023-04-17T17:58:40Z) - An Upper Bound for the Distribution Overlap Index and Its Applications [22.92968284023414]
This paper proposes an easy-to-compute upper bound for the overlap index between two probability distributions. The proposed bound shows its value in one-class classification and domain shift analysis. Our work shows significant promise toward broadening the applications of overlap-based metrics.
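For reference, the quantity being bounded, the overlap index (overlapping coefficient) of two distributions, is the total probability mass they share; a discrete version is sketched below. The paper's upper bound itself is not reproduced here.

```python
import numpy as np

def overlap_index(p, q):
    """Overlap index of two discrete distributions: sum of element-wise
    minima, equivalently 1 minus the total variation distance."""
    p = np.asarray(p, dtype=float) / np.sum(p)
    q = np.asarray(q, dtype=float) / np.sum(q)
    return float(np.minimum(p, q).sum())
```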
arXiv Detail & Related papers (2022-12-16T20:02:03Z) - An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z) - LEAD: Self-Supervised Landmark Estimation by Aligning Distributions of Feature Similarity [49.84167231111667]
Existing works in self-supervised landmark detection are based on learning dense (pixel-level) feature representations from an image.
We introduce an approach to enhance the learning of dense equivariant representations in a self-supervised fashion.
We show that having such a prior in the feature extractor helps in landmark detection, even with a drastically limited number of annotations.
arXiv Detail & Related papers (2022-04-06T17:48:18Z) - ADVISE: ADaptive Feature Relevance and VISual Explanations for Convolutional Neural Networks [0.745554610293091]
We introduce ADVISE, a new explainability method that quantifies and leverages the relevance of each unit of the feature map to provide better visual explanations.
We extensively evaluate our idea in the image classification task using AlexNet, VGG16, ResNet50, and Xception pretrained on ImageNet.
Our experiments further show that ADVISE fulfils the sensitivity and implementation independence axioms while passing the sanity checks.
arXiv Detail & Related papers (2022-03-02T18:16:57Z) - Fine-Grained Neural Network Explanation by Identifying Input Features with Predictive Information [53.28701922632817]
We propose a method to identify features with predictive information in the input domain.
The core idea of our method is leveraging a bottleneck on the input that only lets input features associated with predictive latent features pass through.
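A hedged sketch of that bottleneck idea: learn a per-pixel mask that decides how much of the input passes through, replace the rest with noise, and trade prediction quality against how much of the input is kept. The penalty below is a simple sparsity proxy, not the paper's exact information-theoretic objective; all names are illustrative.

```python
import torch
import torch.nn.functional as F

def fit_input_bottleneck(model, x, target, steps=300, beta=10.0, lr=0.1):
    """Optimise a per-pixel mask so that only predictive input passes through."""
    mask_logits = torch.zeros_like(x[:, :1], requires_grad=True)  # [1, 1, H, W]
    opt = torch.optim.Adam([mask_logits], lr=lr)
    for _ in range(steps):
        lam = torch.sigmoid(mask_logits)                   # keep-probability per pixel
        noise = torch.randn_like(x) * x.std() + x.mean()   # uninformative replacement
        x_tilde = lam * x + (1 - lam) * noise
        loss = F.cross_entropy(model(x_tilde), target) + beta * lam.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return torch.sigmoid(mask_logits).detach()             # attribution map
```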
arXiv Detail & Related papers (2021-10-04T14:13:42Z) - Conditional Variational Capsule Network for Open Set Recognition [64.18600886936557]
In open set recognition, a classifier has to detect unknown classes that are not known at training time.
Recently proposed Capsule Networks have shown to outperform alternatives in many fields, particularly in image recognition.
In our proposal, during training, capsule features of the same known class are encouraged to match a pre-defined Gaussian, one for each class.
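The sentence above can be illustrated with a toy loss that pulls capsule features of a known class toward that class's pre-defined Gaussian (an isotropic negative log-likelihood up to constants); the paper's variational objective is more elaborate, and all names are assumptions.

```python
import torch

def gaussian_matching_loss(capsule_features, labels, class_means, sigma=1.0):
    """Encourage features of each known class to match its pre-defined Gaussian.

    capsule_features: [B, D] features, labels: [B] class ids,
    class_means: [num_classes, D] pre-defined Gaussian means.
    """
    mu = class_means[labels]                                  # [B, D]
    return ((capsule_features - mu) ** 2).sum(dim=1).mean() / (2 * sigma ** 2)
```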
arXiv Detail & Related papers (2021-04-19T09:39:30Z) - Graph Sampling Based Deep Metric Learning for Generalizable Person Re-Identification [114.56752624945142]
We argue that the most popular random sampling method, the well-known PK sampler, is neither informative nor efficient for deep metric learning.
We propose an efficient mini batch sampling method called Graph Sampling (GS) for large-scale metric learning.
arXiv Detail & Related papers (2021-04-04T06:44:15Z) - Visualization of Supervised and Self-Supervised Neural Networks via Attribution Guided Factorization [87.96102461221415]
We develop an algorithm that provides per-class explainability.
In an extensive battery of experiments, we demonstrate the ability of our methods to produce class-specific visualizations.
arXiv Detail & Related papers (2020-12-03T18:48:39Z) - Toward Scalable and Unified Example-based Explanation and Outlier Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks, which go beyond similarity kernels, deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
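The flavour of such a prototype-based student can be sketched as follows: score each class by the similarity to its closest prototype (that prototype doubles as the example-based explanation), and flag inputs whose best similarity is low as outliers. The threshold and names are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def prototype_predict(embedding, prototypes, proto_labels, num_classes, tau=0.5):
    """Prototype-based prediction with an example-based explanation and outlier flag."""
    sims = F.cosine_similarity(embedding[None, :], prototypes, dim=1)  # [num_prototypes]
    class_scores = torch.full((num_classes,), float("-inf"))
    for c in range(num_classes):
        mask = proto_labels == c
        if mask.any():
            class_scores[c] = sims[mask].max()
    predicted_class = int(class_scores.argmax())
    explanation = int(sims.argmax())              # index of the most similar prototype
    is_outlier = bool(class_scores.max() < tau)   # no prototype is close enough
    return predicted_class, explanation, is_outlier
```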
arXiv Detail & Related papers (2020-11-11T05:58:17Z) - Region Comparison Network for Interpretable Few-shot Image Classification [97.97902360117368]
Few-shot image classification has been proposed to effectively use only a limited number of labeled examples to train models for new classes.
We propose a metric learning based method named Region Comparison Network (RCN), which is able to reveal how few-shot learning works.
We also present a new way to generalize the interpretability from the level of tasks to categories.
arXiv Detail & Related papers (2020-09-08T07:29:05Z) - One-vs-Rest Network-based Deep Probability Model for Open Set Recognition [6.85316573653194]
An intelligent self-learning system should be able to differentiate between known and unknown examples.
One-vs-rest networks can provide more informative hidden representations for unknown examples than the commonly used SoftMax layer.
The proposed probability model outperformed the state-of-the-art methods in open set classification scenarios.
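The one-vs-rest idea maps to a simple decision rule: one sigmoid output per known class, assign the most probable class only if its probability clears a threshold, otherwise declare the example unknown. The paper's probability model is more refined than this sketch; the threshold and names are assumptions.

```python
import torch

def one_vs_rest_decision(logits, threshold=0.5):
    """Open-set decision from one-vs-rest (sigmoid) class outputs."""
    probs = torch.sigmoid(logits)                 # [num_known_classes]
    best_prob, best_class = probs.max(dim=0)
    return int(best_class) if best_prob.item() >= threshold else -1  # -1 = unknown
```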
arXiv Detail & Related papers (2020-04-17T05:24:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.