LIFT-CAM: Towards Better Explanations for Class Activation Mapping
- URL: http://arxiv.org/abs/2102.05228v1
- Date: Wed, 10 Feb 2021 02:43:50 GMT
- Title: LIFT-CAM: Towards Better Explanations for Class Activation Mapping
- Authors: Hyungsik Jung and Youngrock Oh
- Abstract summary: Class activation mapping (CAM) based methods generate visual explanation maps by a linear combination of activation maps from CNNs.
We introduce an efficient approximation method, referred to as LIFT-CAM.
It achieves better performance than previous CAM-based methods in both qualitative and quantitative aspects.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Increasing demands for understanding the internal behaviors of convolutional
neural networks (CNNs) have led to remarkable improvements in explanation
methods. Particularly, several class activation mapping (CAM) based methods,
which generate visual explanation maps by a linear combination of activation
maps from CNNs, have been proposed. However, the majority of these methods lack
a theoretical basis for assigning their linear weighting coefficients. In this
paper, we revisit the intrinsic linearity of CAM w.r.t. the activation maps.
Focusing on the linearity, we construct an explanation model as a linear
function of binary variables which denote the existence of the corresponding
activation maps. With this approach, the explanation model can be determined by
the class of additive feature attribution methods which adopts SHAP values as a
unified measure of feature importance. We then demonstrate the efficacy of the
SHAP values as the weighting coefficients for CAM. However, the exact SHAP values
are computationally intractable. Hence, we introduce an efficient approximation method,
referred to as LIFT-CAM. On the basis of DeepLIFT, our proposed method can
estimate the true SHAP values quickly and accurately. Furthermore, it achieves
better performance than previous CAM-based methods in both qualitative and
quantitative aspects.
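To make the procedure concrete, here is a minimal, hypothetical sketch of the LIFT-CAM idea in PyTorch. The paper estimates DeepLIFT attributions to approximate the SHAP values; as a stand-in, this sketch uses the common first-order approximation (activation x gradient against a zero reference) as each map's coefficient. The model, layer index, and class index are illustrative assumptions, not the authors' exact setup.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def lift_cam(model, x, class_idx, conv_layer):
    """Saliency for one image x of shape (1, 3, H, W)."""
    store = {}

    def fwd_hook(module, inputs, output):
        store["act"] = output.detach()
        # Capture the gradient flowing back into this activation.
        output.register_hook(lambda g: store.__setitem__("grad", g.detach()))

    handle = conv_layer.register_forward_hook(fwd_hook)
    model.zero_grad()
    model(x)[0, class_idx].backward()
    handle.remove()

    act, grad = store["act"], store["grad"]        # both (1, K, h, w)
    # Stand-in for the DeepLIFT attribution of each activation map:
    # (activation - zero reference) * gradient, summed over spatial positions.
    weights = (act * grad).sum(dim=(2, 3))         # (1, K): one coefficient per map
    cam = F.relu((weights[:, :, None, None] * act).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    return cam / (cam.max() + 1e-8)

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
x = torch.randn(1, 3, 224, 224)                    # stand-in for a preprocessed image
saliency = lift_cam(model, x, class_idx=243, conv_layer=model.features[28])
```

The structural point matches the abstract: the explanation is a linear combination of activation maps, and the coefficients play the role of (approximate) SHAP values.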
Related papers
- Faithful Explanations of Black-box NLP Models Using LLM-generated Counterfactuals [67.64770842323966]
Causal explanations of predictions of NLP systems are essential to ensure safety and establish trust.
Existing methods often fall short of explaining model predictions effectively or efficiently.
We propose two approaches for counterfactual (CF) approximation.
arXiv Detail & Related papers (2023-10-01T07:31:04Z)
- COSE: A Consistency-Sensitivity Metric for Saliency on Image Classification [21.3855970055692]
We present a set of metrics that utilize vision priors to assess the performance of saliency methods on image classification tasks.
We show that, although saliency methods are thought to be architecture-independent, most methods explain transformer-based models better than convolutional ones.
arXiv Detail & Related papers (2023-09-20T01:06:44Z)
- Generalized Low-Rank Update: Model Parameter Bounds for Low-Rank Training Data Modifications [16.822770693792823]
We have developed an incremental machine learning (ML) method that efficiently obtains the optimal model when a small number of instances or features are added or removed.
This problem holds practical importance in model selection, such as cross-validation (CV) and feature selection.
We introduce a method called the Generalized Low-Rank Update (GLRU), which extends the low-rank update framework of linear estimators to ML methods formulated as a certain class of regularized empirical risk minimization (see the sketch below).
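The low-rank update idea behind this entry is easiest to see in its classic linear special case, which GLRU generalizes: a rank-one Sherman-Morrison downdate that removes one training instance from a ridge-regression fit without refitting. The data, dimensions, and regularization strength below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
y = X @ rng.normal(size=10) + 0.1 * rng.normal(size=200)
lam = 1.0

# Full ridge solution: w = (X^T X + lam*I)^{-1} X^T y
A_inv = np.linalg.inv(X.T @ X + lam * np.eye(X.shape[1]))
w = A_inv @ X.T @ y

# Remove instance i with a rank-one (Sherman-Morrison) downdate
# instead of refitting from scratch.
i = 17
xi, yi = X[i], y[i]
A_inv_loo = A_inv + np.outer(A_inv @ xi, xi @ A_inv) / (1.0 - xi @ A_inv @ xi)
w_loo = A_inv_loo @ (X.T @ y - yi * xi)

# Check against a full refit on the remaining data.
mask = np.arange(len(y)) != i
X2, y2 = X[mask], y[mask]
w_ref = np.linalg.solve(X2.T @ X2 + lam * np.eye(X.shape[1]), X2.T @ y2)
assert np.allclose(w_loo, w_ref)
```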
arXiv Detail & Related papers (2023-06-22T05:00:11Z)
- Opti-CAM: Optimizing saliency maps for interpretability [10.122899813335694]
We introduce Opti-CAM, combining ideas from CAM-based and masking-based approaches.
Our saliency map is a linear combination of feature maps, where weights are optimized per image.
On several datasets, Opti-CAM largely outperforms other CAM-based approaches according to the most relevant classification metrics.
arXiv Detail & Related papers (2023-01-17T16:44:48Z)
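A minimal sketch of the per-image weight optimization described in the Opti-CAM entry above. The masking objective, normalization, step count, and learning rate are assumptions; the paper's exact formulation differs in detail.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def opti_cam(model, x, class_idx, conv_layer, steps=50, lr=0.1):
    """Optimize per-image weights over the feature maps of conv_layer."""
    store = {}
    h = conv_layer.register_forward_hook(
        lambda m, i, o: store.__setitem__("act", o.detach()))
    with torch.no_grad():
        model(x)
    h.remove()
    act = store["act"]                                  # (1, K, h, w)

    u = torch.zeros(act.shape[1], requires_grad=True)   # one logit per feature map
    opt = torch.optim.Adam([u], lr=lr)
    for _ in range(steps):
        w = torch.softmax(u, dim=0)                     # convex combination weights
        cam = (w[None, :, None, None] * act).sum(dim=1, keepdim=True)
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
        mask = F.interpolate(cam, size=x.shape[2:], mode="bilinear",
                             align_corners=False)
        # Maximize the class score of the masked image.
        loss = -model(x * mask)[0, class_idx]
        opt.zero_grad()
        loss.backward()
        opt.step()
    return mask.detach()

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
x = torch.randn(1, 3, 224, 224)                         # stand-in input
saliency = opti_cam(model, x, class_idx=243, conv_layer=model.features[28])
```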
- Abs-CAM: A Gradient Optimization Interpretable Approach for Explanation of Convolutional Neural Networks [7.71412567705588]
Class activation mapping-based methods have been widely used to interpret the internal decisions of models in computer vision tasks.
We propose an Absolute value Class Activation Mapping-based (Abs-CAM) method, which optimizes the gradients derived from backpropagation.
The framework of Abs-CAM is divided into two phases: generating an initial saliency map and generating a final saliency map.
arXiv Detail & Related papers (2022-07-08T02:06:46Z)
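A hedged sketch of the first (initial saliency map) phase of Abs-CAM as the entry above describes it: channel weights are taken from the absolute value of the backpropagated gradients. The second phase is omitted, and the hook-based implementation details are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def abs_cam_initial(model, x, class_idx, conv_layer):
    """Initial saliency map with channel weights from absolute gradients."""
    store = {}

    def fwd_hook(module, inputs, output):
        store["act"] = output.detach()
        output.register_hook(lambda g: store.__setitem__("grad", g.detach()))

    h = conv_layer.register_forward_hook(fwd_hook)
    model.zero_grad()
    model(x)[0, class_idx].backward()
    h.remove()

    # Absolute value of the gradients, pooled per channel.
    weights = store["grad"].abs().mean(dim=(2, 3))      # (1, K)
    cam = F.relu((weights[:, :, None, None] * store["act"]).sum(dim=1, keepdim=True))
    return F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)

model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
x = torch.randn(1, 3, 224, 224)
saliency = abs_cam_initial(model, x, class_idx=243, conv_layer=model.features[28])
```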
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework for Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, with better validity, sparsity, and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Revisiting The Evaluation of Class Activation Mapping for Explainability: A Novel Metric and Experimental Analysis [54.94682858474711]
Class Activation Mapping (CAM) approaches provide an effective visualization by taking weighted averages of the activation maps.
We propose a novel set of metrics to quantify explanation maps, which show better effectiveness and simplify comparisons between approaches.
arXiv Detail & Related papers (2021-04-20T21:34:24Z)
- Reintroducing Straight-Through Estimators as Principled Methods for Stochastic Binary Networks [85.94999581306827]
Training neural networks with binary weights and activations is a challenging problem due to the lack of gradients and difficulty of optimization over discrete weights.
Many successful experimental results have been achieved with empirical straight-through (ST) approaches.
At the same time, ST methods can in fact be derived as estimators in the stochastic binary network (SBN) model with Bernoulli weights.
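The straight-through estimator this entry refers to is commonly implemented as a custom autograd function: binarize in the forward pass, substitute a surrogate gradient in the backward pass. Below is the standard deterministic variant with a clipped-identity surrogate, not the paper's Bernoulli-weight SBN derivation.

```python
import torch

class SignSTE(torch.autograd.Function):
    """Binarize in the forward pass; pass gradients straight through in backward."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # Clipped identity ("hard tanh") surrogate: gradient flows only where |x| <= 1.
        return grad_out * (x.abs() <= 1).float()

x = torch.randn(5, requires_grad=True)
SignSTE.apply(x).sum().backward()
print(x.grad)   # 1 where |x| <= 1, else 0
```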
arXiv Detail & Related papers (2020-06-11T23:58:18Z)
- Interpolation Technique to Speed Up Gradients Propagation in Neural ODEs [71.26657499537366]
We propose a simple interpolation-based method for the efficient approximation of gradients in neural ODE models.
We compare it with the reverse dynamic method to train neural ODEs on classification, density estimation, and inference approximation tasks.
arXiv Detail & Related papers (2020-03-11T13:15:57Z)
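To illustrate the interpolation idea in the last entry, the sketch below solves a toy ODE once at Chebyshev nodes and then reconstructs the trajectory anywhere by barycentric interpolation, the kind of cheap evaluation that can replace storing or re-solving the forward trajectory during the backward pass. The right-hand side and node count are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import BarycentricInterpolator

def f(t, z):                      # toy stand-in for a neural ODE right-hand side
    return -z + np.sin(t)

t0, t1 = 0.0, 5.0
# Chebyshev nodes cluster near the interval ends, which keeps
# polynomial interpolation well conditioned.
n = 16
k = np.arange(n + 1)
t_cheb = np.sort(0.5 * (t0 + t1) + 0.5 * (t1 - t0) * np.cos(np.pi * k / n))

sol = solve_ivp(f, (t0, t1), y0=[1.0], t_eval=t_cheb, rtol=1e-8, atol=1e-8)
interp = BarycentricInterpolator(t_cheb, sol.y[0])

# A backward (adjoint) pass can now evaluate z(t) cheaply at arbitrary
# times instead of storing or re-solving the trajectory.
t_query = np.linspace(t0, t1, 200)
z_ref = solve_ivp(f, (t0, t1), y0=[1.0], t_eval=t_query,
                  rtol=1e-10, atol=1e-10).y[0]
print(np.max(np.abs(interp(t_query) - z_ref)))   # small interpolation error
```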
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.