Enhancing Explainable AI: A Hybrid Approach Combining GradCAM and LRP for CNN Interpretability
- URL: http://arxiv.org/abs/2405.12175v1
- Date: Mon, 20 May 2024 16:58:24 GMT
- Title: Enhancing Explainable AI: A Hybrid Approach Combining GradCAM and LRP for CNN Interpretability
- Authors: Vaibhav Dhore, Achintya Bhat, Viraj Nerlekar, Kashyap Chavhan, Aniket Umare,
- Abstract summary: We present a new technique that explains the output of a CNN-based model using a combination of GradCAM and LRP methods.
Both of these methods produce visual explanations by highlighting input regions that are important for predictions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a new technique that explains the output of a CNN-based model using a combination of GradCAM and LRP methods. Both of these methods produce visual explanations by highlighting input regions that are important for predictions. In the new method, the explanation produced by GradCAM is first processed to remove noise. The processed output is then multiplied elementwise with the output of LRP. Finally, a Gaussian blur is applied to the product. We compared the proposed method with GradCAM and LRP on the metrics of Faithfulness, Robustness, Complexity, Localisation and Randomisation. It was observed that this method performs better on Complexity than both GradCAM and LRP, and is better than at least one of them on the other metrics.
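A minimal sketch of the pipeline described in the abstract, assuming the GradCAM and LRP heatmaps have already been computed and resized to the input resolution. The abstract does not state which denoising rule or blur parameters are used, so the percentile threshold and Gaussian sigma below are illustrative assumptions rather than the authors' settings.

```python
# Hedged sketch of the hybrid GradCAM x LRP explanation described above.
# Assumes both maps are 2D float arrays of the same shape (e.g. 224x224).
import numpy as np
from scipy.ndimage import gaussian_filter

def hybrid_explanation(gradcam_map: np.ndarray,
                       lrp_map: np.ndarray,
                       noise_percentile: float = 50.0,  # assumed denoising rule
                       blur_sigma: float = 2.0          # assumed blur width
                       ) -> np.ndarray:
    """Combine GradCAM and LRP maps following the three steps in the abstract."""
    # 1. Denoise the GradCAM map (here: zero out low-activation pixels;
    #    the paper's actual denoising step may differ).
    threshold = np.percentile(gradcam_map, noise_percentile)
    denoised = np.where(gradcam_map >= threshold, gradcam_map, 0.0)

    # 2. Elementwise product with the LRP relevance map.
    product = denoised * lrp_map

    # 3. Gaussian blur of the product to obtain the final smoothed heatmap.
    return gaussian_filter(product, sigma=blur_sigma)
```

If both inputs are, say, 224x224 arrays normalised to [0, 1], the result is a smoothed relevance map of the same size that can be overlaid on the input image in the usual way.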
Related papers
- GaussianSR: High Fidelity 2D Gaussian Splatting for Arbitrary-Scale Image Super-Resolution [29.49617080140511]
Implicit neural representations (INRs) have significantly advanced the field of arbitrary-scale super-resolution (ASSR) of images.
Most existing INR-based ASSR networks first extract features from the given low-resolution image using an encoder, and then render the super-resolved result via a multi-layer perceptron decoder.
We propose a novel ASSR method named GaussianSR that overcomes this limitation through 2D Gaussian Splatting (2DGS).
arXiv Detail & Related papers (2024-07-25T13:53:48Z) - Exploring Equation as a Better Intermediate Meaning Representation for
Numerical Reasoning [53.2491163874712]
We use equations as IMRs to solve the numerical reasoning task.
We present a method called Boosting Numerical Reasoning by Decomposing the Generation of Equations (Bridge).
Our method improves the performance by 2.2%, 0.9%, and 1.7% on the GSM8K, SVAMP, and Algebra datasets.
arXiv Detail & Related papers (2023-08-21T09:35:33Z) - Improving Pixel-based MIM by Reducing Wasted Modeling Capability [77.99468514275185]
We propose a new method that explicitly utilizes low-level features from shallow layers to aid pixel reconstruction.
To the best of our knowledge, we are the first to systematically investigate multi-level feature fusion for isotropic architectures.
Our method yields significant performance gains, such as 1.2% on fine-tuning, 2.8% on linear probing, and 2.6% on semantic segmentation.
arXiv Detail & Related papers (2023-08-01T03:44:56Z) - Interpretable Machine Learning based on Functional ANOVA Framework:
Algorithms and Comparisons [9.10422407200807]
In the early days of machine learning (ML), the emphasis was on developing complex algorithms to achieve the best predictive performance.
Recently, researchers have been willing to give up small gains in predictive performance in order to develop algorithms that are inherently interpretable.
The paper proposes a new algorithm, called GAMI-Lin-T, that also uses trees like EBM, but it does linear fits instead of piecewise constants within the partitions.
arXiv Detail & Related papers (2023-05-25T02:40:52Z) - Empowering CAM-Based Methods with Capability to Generate Fine-Grained
and High-Faithfulness Explanations [1.757194730633422]
We propose FG-CAM, which extends CAM-based methods to enable generating fine-grained and high-faithfulness explanations.
Our method not only addresses the shortcomings of CAM-based methods without changing their characteristics, but also generates fine-grained explanations that have higher faithfulness than LRP and its variants.
arXiv Detail & Related papers (2023-03-16T09:29:05Z) - Denoising Generalized Expectation-Consistent Approximation for MRI Image
Recovery [19.497777961872448]
In inverse problems, plug-and-play (PnP) methods have been developed that replace the proximal step in a convex optimization algorithm with a call to an application-specific denoiser, often implemented using a deep neural network (DNN).
Although such methods have been successful, they can be improved. For example, denoisers are usually designed/trained to remove white noise, but the neural denoiser input error is far from white or Gaussian.
In this paper, we propose an algorithm that offers predictable error statistics each iteration, as well as a new image denoiser that leverages those statistics.
arXiv Detail & Related papers (2022-06-09T00:58:44Z) - Graph Signal Restoration Using Nested Deep Algorithm Unrolling [85.53158261016331]
Graph signal processing is a ubiquitous task in many applications such as sensor, social, transportation, and brain networks, point cloud processing, and graph networks.
We propose two restoration methods based on convex-independent deep ADMM (alternating direction method of multipliers).
The parameters in the proposed restoration methods are trainable in an end-to-end manner.
arXiv Detail & Related papers (2021-06-30T08:57:01Z) - Investigating Methods to Improve Language Model Integration for
Attention-based Encoder-Decoder ASR Models [107.86965028729517]
Attention-based encoder-decoder (AED) models learn an implicit internal language model (ILM) from the training transcriptions.
We propose several novel methods to estimate the ILM directly from the AED model.
arXiv Detail & Related papers (2021-04-12T15:16:03Z) - Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
arXiv Detail & Related papers (2020-11-18T16:40:45Z) - Use HiResCAM instead of Grad-CAM for faithful explanations of
convolutional neural networks [89.56292219019163]
Explanation methods facilitate the development of models that learn meaningful concepts and avoid exploiting spurious correlations.
We illustrate a previously unrecognized limitation of the popular neural network explanation method Grad-CAM.
We propose HiResCAM, a class-specific explanation method that is guaranteed to highlight only the locations the model used to make each prediction; a brief sketch contrasting the Grad-CAM and HiResCAM weighting schemes is shown after this list.
arXiv Detail & Related papers (2020-11-17T19:26:14Z)
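Since the proposed hybrid method builds on GradCAM, the HiResCAM entry above is worth a closer look. The sketch below contrasts the two weighting schemes in plain NumPy, assuming the class-score gradients and the convolutional activations have already been extracted with shape (K, H, W). The ReLU at the end follows the common visualisation convention; consult the cited papers for the authoritative definitions.

```python
# Hedged comparison of Grad-CAM and HiResCAM weighting, on precomputed
# activations and gradients of shape (K, H, W) for one target class.
import numpy as np

def grad_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    # Grad-CAM: average each channel's gradient to a single scalar weight,
    # then take a weighted sum of the feature maps over channels.
    weights = gradients.mean(axis=(1, 2))                  # (K,)
    cam = np.tensordot(weights, activations, axes=(0, 0))  # (H, W)
    return np.maximum(cam, 0.0)  # ReLU kept for visualisation parity

def hires_cam(activations: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    # HiResCAM: multiply gradients and activations elementwise *before*
    # summing over channels, keeping per-location gradient information.
    cam = (gradients * activations).sum(axis=0)            # (H, W)
    return np.maximum(cam, 0.0)  # ReLU may be optional in the original paper
```

The only difference is whether the gradients are collapsed to one scalar per channel before weighting (Grad-CAM) or kept per location and applied elementwise (HiResCAM).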