Fidelity of Ensemble Aggregation for Saliency Map Explanations using
Bayesian Optimization Techniques
- URL: http://arxiv.org/abs/2207.01565v2
- Date: Tue, 5 Jul 2022 06:42:09 GMT
- Title: Fidelity of Ensemble Aggregation for Saliency Map Explanations using
Bayesian Optimization Techniques
- Authors: Yannik Mahlau, Christian Nolde
- Abstract summary: We present and compare different pixel-based aggregation schemes with the goal of generating a new explanation.
We incorporate the variance between the individual explanations into the aggregation process.
We also analyze the effect of multiple normalization techniques on ensemble aggregation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, an abundance of feature attribution methods for explaining
neural networks have been developed. Especially in the field of computer
vision, many methods for generating saliency maps providing pixel attributions
exist. However, their explanations often contradict each other and it is not
clear which explanation to trust. A natural solution to this problem is the
aggregation of multiple explanations. We present and compare different
pixel-based aggregation schemes with the goal of generating a new explanation,
whose fidelity to the model's decision is higher than each individual
explanation. Using methods from the field of Bayesian Optimization, we
incorporate the variance between the individual explanations into the
aggregation process. Additionally, we analyze the effect of multiple
normalization techniques on ensemble aggregation.
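To make the idea concrete, below is a minimal sketch of one plausible variance-aware pixel-wise aggregation, assuming min-max normalization per map. The mean-minus-scaled-standard-deviation combination mirrors a lower-confidence-bound acquisition function from Bayesian Optimization; the `kappa` weight is an illustrative assumption, not necessarily the paper's exact scheme.

```python
import numpy as np

def minmax_normalize(saliency: np.ndarray) -> np.ndarray:
    """Scale a single saliency map to [0, 1]; one of several
    normalization choices whose effect on aggregation can be studied."""
    lo, hi = saliency.min(), saliency.max()
    return (saliency - lo) / (hi - lo + 1e-12)

def aggregate_saliency(maps: list[np.ndarray], kappa: float = 1.0) -> np.ndarray:
    """Pixel-wise aggregation of an ensemble of saliency maps.

    Combines the ensemble mean with the ensemble standard deviation,
    analogous to a lower-confidence-bound acquisition function:
    pixels on which the explainers disagree are down-weighted.
    `kappa` controls how strongly disagreement is penalized
    (an illustrative parameter).
    """
    stack = np.stack([minmax_normalize(m) for m in maps])  # (n, H, W)
    mean = stack.mean(axis=0)
    std = stack.std(axis=0)
    return mean - kappa * std

# Example: aggregate three (randomly generated) 8x8 "saliency maps".
rng = np.random.default_rng(0)
maps = [rng.random((8, 8)) for _ in range(3)]
print(aggregate_saliency(maps).shape)  # (8, 8)
```

Setting `kappa = 0` recovers a plain pixel-wise mean; larger values increasingly suppress pixels on which the ensemble disagrees.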
Related papers
- Interpretable Network Visualizations: A Human-in-the-Loop Approach for Post-hoc Explainability of CNN-based Image Classification [5.087579454836169]
State-of-the-art explainability methods generate saliency maps to show where a specific class is identified.
We introduce a post-hoc method that explains the entire feature extraction process of a Convolutional Neural Network.
We also show an approach to generate global explanations by aggregating labels across multiple images.
arXiv Detail & Related papers (2024-05-06T09:21:35Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
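As a rough companion to the entry above, the sketch below shows plain consensus ADMM for distributed least squares, the deterministic backbone that ADMM-based distributed samplers extend; the sampling variant perturbs the local updates with noise, which is omitted here, and all parameter choices are illustrative.

```python
import numpy as np

def consensus_admm(A_parts, b_parts, rho=1.0, n_iter=100):
    """Consensus ADMM for distributed least squares.

    Each worker i holds (A_i, b_i) and solves a local regularized
    least-squares problem; a global consensus variable z is updated
    by averaging. This is the deterministic skeleton on which
    ADMM-based distributed samplers build (noise injection omitted).
    """
    d = A_parts[0].shape[1]
    n = len(A_parts)
    z = np.zeros(d)
    u = [np.zeros(d) for _ in range(n)]
    for _ in range(n_iter):
        # Local (worker) updates, embarrassingly parallel.
        x = [
            np.linalg.solve(A.T @ A + rho * np.eye(d),
                            A.T @ b + rho * (z - u[i]))
            for i, (A, b) in enumerate(zip(A_parts, b_parts))
        ]
        # Global averaging step (the only communication round).
        z = np.mean([x[i] + u[i] for i in range(n)], axis=0)
        # Dual updates.
        for i in range(n):
            u[i] += x[i] - z
    return z

# Example: two workers whose data share the same true coefficients.
rng = np.random.default_rng(1)
w_true = rng.normal(size=3)
A1, A2 = rng.normal(size=(50, 3)), rng.normal(size=(50, 3))
print(consensus_admm([A1, A2], [A1 @ w_true, A2 @ w_true]).round(3))
```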
- Local Universal Explainer (LUX) -- a rule-based explainer with factual, counterfactual and visual explanations [7.673339435080445]
Local Universal Explainer (LUX) is a rule-based explainer that can generate factual, counterfactual and visual explanations.
It is based on a modified version of decision tree algorithms that allows for oblique splits and integration with feature importance XAI methods such as SHAP.
We tested our method on real and synthetic datasets and compared it with state-of-the-art rule-based explainers such as LORE, EXPLAN and Anchor.
arXiv Detail & Related papers (2023-10-23T13:04:15Z)
- Disentangled Explanations of Neural Network Predictions by Finding Relevant Subspaces [14.70409833767752]
Explainable AI aims to overcome the black-box nature of complex ML models like neural networks by generating explanations for their predictions.
We propose two new analyses, extending principles found in PCA or ICA to explanations.
These novel analyses, which we call principal relevant component analysis (PRCA) and disentangled relevant subspace analysis (DRSA), maximize relevance instead of e.g. variance or kurtosis.
arXiv Detail & Related papers (2022-12-30T18:04:25Z)
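A loose analogue of the idea in the entry above: run PCA on a relevance-weighted covariance, so that the leading components capture relevance mass rather than variance. This is an illustrative simplification, not the paper's exact PRCA/DRSA objective.

```python
import numpy as np

def relevance_weighted_pca(activations, relevances, k=1):
    """Loose analogue of relevance-maximizing component analysis.

    Ordinary PCA finds directions of maximum variance; here the
    covariance is weighted by per-sample relevance scores, so the
    leading eigenvectors capture the directions carrying the most
    relevance mass. An illustrative simplification of PRCA/DRSA.
    """
    A = np.asarray(activations)           # (n, d) activation vectors
    r = np.asarray(relevances)            # (n,) nonnegative relevance
    C = (A * r[:, None]).T @ A / len(A)   # relevance-weighted covariance
    eigvals, eigvecs = np.linalg.eigh(C)  # ascending eigenvalues
    return eigvecs[:, -k:][:, ::-1]       # top-k components, descending

# Example usage on random data with random relevance scores.
rng = np.random.default_rng(2)
A = rng.normal(size=(100, 16))
r = rng.random(100)
print(relevance_weighted_pca(A, r, k=2).shape)  # (16, 2)
```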
- Comparing the Decision-Making Mechanisms by Transformers and CNNs via Explanation Methods [4.661764541283174]
We study the decision-making of different visual recognition backbones by applying deep explanation algorithms on a dataset-wide basis.
We find that Transformers and ConvNeXt are more compositional, in the sense that they jointly consider multiple parts of the image in building their decisions.
We plot a landscape of different models based on their feature-use similarity.
arXiv Detail & Related papers (2022-12-13T19:38:13Z)
- Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value [86.69600830581912]
We develop a novel visual explanation method called Shap-CAM based on class activation mapping.
We demonstrate that Shap-CAM achieves better visual performance and fairness for interpreting the decision making process.
arXiv Detail & Related papers (2022-08-07T00:59:23Z)
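In the spirit of the entry above, channel weights for a CAM-style map can be estimated as Shapley values via Monte Carlo permutation sampling. The `score_fn` helper below is a hypothetical stand-in for a forward pass with masked activation channels; Shap-CAM's own estimator differs in detail.

```python
import numpy as np

def mc_shapley_channel_weights(score_fn, n_channels, n_samples=200, rng=None):
    """Monte Carlo estimate of per-channel Shapley values.

    `score_fn(mask)` returns the class score when only the channels
    with mask == 1 contribute (a hypothetical helper standing in for
    a masked forward pass). Channel weights estimated this way can
    replace the gradient weights of CAM, in the spirit of Shap-CAM.
    """
    rng = rng or np.random.default_rng()
    phi = np.zeros(n_channels)
    for _ in range(n_samples):
        perm = rng.permutation(n_channels)
        mask = np.zeros(n_channels)
        prev = score_fn(mask)
        for c in perm:
            mask[c] = 1
            cur = score_fn(mask)
            phi[c] += cur - prev   # marginal contribution of channel c
            prev = cur
    return phi / n_samples

# Toy example: the "model" just sums a fixed weight per active channel.
true_w = np.array([0.5, 1.0, -0.2, 0.0])
weights = mc_shapley_channel_weights(lambda m: float(m @ true_w), 4)
print(weights.round(2))  # approximately [0.5, 1.0, -0.2, 0.0]
```

A Shap-CAM-style saliency map would then be the (rectified) weighted sum of the channel activation maps using these estimated weights.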
- Multivariate Data Explanation by Jumping Emerging Patterns Visualization [78.6363825307044]
We present VAX (multiVariate dAta eXplanation), a new VA method to support the identification and visual interpretation of patterns in multivariate data sets.
Unlike existing approaches, VAX uses the concept of Jumping Emerging Patterns to identify and aggregate several diversified patterns, producing explanations through logic combinations of data variables.
arXiv Detail & Related papers (2021-06-21T13:49:44Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Model Fusion with Kullback--Leibler Divergence [58.20269014662046]
We propose a method to fuse posterior distributions learned from heterogeneous datasets.
Our algorithm relies on a mean field assumption for both the fused model and the individual dataset posteriors.
arXiv Detail & Related papers (2020-07-13T03:27:45Z)
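A minimal special case of the fusion problem in the entry above: under a diagonal-Gaussian mean-field assumption, treating the fused posterior as a product of the individual posteriors gives a closed form in which precisions add and means are precision-averaged. The paper's algorithm additionally handles alignment of parameters across heterogeneous models, which this sketch ignores.

```python
import numpy as np

def fuse_diagonal_gaussians(means, variances):
    """Closed-form fusion of diagonal-Gaussian posteriors.

    Treats the fused posterior as proportional to the product of the
    individual posteriors: precisions add, and the fused mean is the
    precision-weighted average of the individual means. A minimal
    special case of mean-field posterior fusion.
    """
    precisions = 1.0 / np.asarray(variances)     # (n_models, d)
    fused_prec = precisions.sum(axis=0)
    fused_mean = (precisions * np.asarray(means)).sum(axis=0) / fused_prec
    return fused_mean, 1.0 / fused_prec

# Two 2-dimensional posteriors: fused mean lands between the inputs,
# and the fused variance shrinks where the inputs are confident.
mu, var = fuse_diagonal_gaussians([[0.0, 1.0], [2.0, 1.0]],
                                  [[1.0, 4.0], [1.0, 4.0]])
print(mu, var)  # [1. 1.] [0.5 2. ]
```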
- Multi-Objective Counterfactual Explanations [0.7349727826230864]
We propose the Multi-Objective Counterfactuals (MOC) method, which translates the counterfactual search into a multi-objective optimization problem.
Our approach not only returns a diverse set of counterfactuals with different trade-offs between the proposed objectives, but also maintains diversity in feature space.
arXiv Detail & Related papers (2020-04-23T13:56:39Z)
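To illustrate the formulation in the MOC entry above, the sketch below evaluates simplified stand-ins for three of the objectives (prediction gap, proximity, sparsity) over random candidate edits and keeps the non-dominated set. MOC itself uses a modified NSGA-II rather than this naive random search, and the toy classifier is an assumption.

```python
import numpy as np

def pareto_front(objectives):
    """Return indices of non-dominated rows (all objectives minimized)."""
    O = np.asarray(objectives)
    keep = []
    for i, o in enumerate(O):
        # Row j dominates o if it is no worse everywhere and strictly
        # better somewhere.
        dominated = np.any(np.all(O <= o, axis=1) & np.any(O < o, axis=1))
        if not dominated:
            keep.append(i)
    return keep

def counterfactual_objectives(x, candidates, predict, target):
    """Simplified MOC-style objectives, all to be minimized:
    prediction gap to the target, distance to the query point,
    and number of changed features (sparsity)."""
    C = np.asarray(candidates)
    gap = np.abs(predict(C) - target)   # o1: reach the target output
    dist = np.abs(C - x).mean(axis=1)   # o2: proximity to x
    n_changed = (C != x).sum(axis=1)    # o3: sparsity of the edit
    return np.column_stack([gap, dist, n_changed])

# Toy example: linear "classifier" score and random candidate edits.
rng = np.random.default_rng(3)
x = np.zeros(4)
candidates = x + rng.choice([0.0, 0.5, -0.5], size=(200, 4))
predict = lambda C: C @ np.array([1.0, -0.5, 0.2, 0.0])
objs = counterfactual_objectives(x, candidates, predict, target=1.0)
print(len(pareto_front(objs)), "non-dominated counterfactual candidates")
```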
- Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data [52.78581260260455]
We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group.
We apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
arXiv Detail & Related papers (2020-02-25T17:40:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.