Explaining Deep Learning Models for Structured Data using Layer-Wise
Relevance Propagation
- URL: http://arxiv.org/abs/2011.13429v1
- Date: Thu, 26 Nov 2020 18:34:21 GMT
- Title: Explaining Deep Learning Models for Structured Data using Layer-Wise
Relevance Propagation
- Authors: Ihsan Ullah, Andre Rios, Vaibhav Gala and Susan McKeever
- Abstract summary: Layer-wise Relevance Propagation (LRP), an established explainability technique developed for deep models in computer vision, provides intuitive human-readable heat maps of input images.
We show how LRP is more effective for explainability than the traditional techniques of Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP).
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Trust and credibility in machine learning models is bolstered by the ability
of a model to explain its decisions. While explainability of deep learning
models is a well-known challenge, a further challenge is the clarity of the
explanation itself, which must be interpreted by downstream users.
Layer-wise Relevance Propagation (LRP), an established explainability technique
developed for deep models in computer vision, provides intuitive human-readable
heat maps of input images. We present the novel application of LRP for the first
time with structured datasets, using a deep neural network (1D-CNN), for Credit
Card Fraud detection and Telecom Customer Churn prediction datasets. We show
how LRP is more effective than the traditional explainability concepts of Local
Interpretable Model-agnostic Explanations (LIME) and Shapley Additive
Explanations (SHAP) for explainability. This effectiveness is both local to a
sample level and holistic over the whole testing set. We also discuss the
significant computational time advantage of LRP (1-2s) over LIME (22s) and SHAP
(108s), and thus its potential for real-time application scenarios. In
addition, our validation of LRP has highlighted features for enhancing model
performance, thus opening up a new area of research of using XAI as an
approach for feature subset selection.
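
As a rough illustration of how per-feature relevance scores behind such explanations can be computed for tabular inputs, the sketch below applies the LRP epsilon rule to a tiny fully connected ReLU network over synthetic transaction-style features. The feature names, network shape, and weights are illustrative assumptions, not the paper's 1D-CNN or its datasets.

```python
# Minimal LRP sketch (epsilon rule) for a tiny ReLU network on tabular data.
# Features, weights, and architecture are synthetic stand-ins, not the paper's
# 1D-CNN or the fraud/churn datasets.
import numpy as np

rng = np.random.default_rng(0)

features = ["amount", "hour", "n_tx_24h", "merchant_risk", "card_age"]
x = rng.normal(size=len(features))        # one hypothetical input row

W1 = rng.normal(size=(len(features), 8))  # input -> hidden weights
W2 = rng.normal(size=(8, 1))              # hidden -> output weights

# Forward pass (biases omitted for brevity), keeping activations for LRP.
a1 = np.maximum(x @ W1, 0.0)              # hidden ReLU activations
score = a1 @ W2                           # single "fraud score" output

def lrp_epsilon(a, W, R, eps=1e-6):
    """Redistribute relevance R from a layer's outputs to its inputs (epsilon rule)."""
    z = a @ W                                        # contributions reaching each output
    s = R / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilized relevance ratio
    return a * (W @ s)                               # relevance assigned to each input

R_hidden = lrp_epsilon(a1, W2, score)     # relevance of hidden units
R_input = lrp_epsilon(x, W1, R_hidden)    # per-feature relevance

for name, r in sorted(zip(features, R_input), key=lambda t: -abs(t[1])):
    print(f"{name:14s} relevance = {r:+.3f}")
```

Because the epsilon rule approximately conserves relevance, the per-feature scores sum (up to the stabilizer) back to the output score, which is what makes them directly comparable across features and samples.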
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z) - Visual Prompting Upgrades Neural Network Sparsification: A Data-Model Perspective [64.04617968947697]
We introduce a novel data-model co-design perspective to promote superior weight sparsity.
Specifically, customized Visual Prompts are mounted to upgrade neural network sparsification in our proposed VPNs framework.
arXiv Detail & Related papers (2023-12-03T13:50:24Z) - Evaluating and Explaining Large Language Models for Code Using Syntactic
Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
arXiv Detail & Related papers (2023-08-07T18:50:57Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep
Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Deep networks for system identification: a Survey [56.34005280792013]
System identification learns mathematical descriptions of dynamic systems from input-output data.
The main aim of the identified model is to predict new data from previous observations.
We discuss architectures commonly adopted in the literature, like feedforward, convolutional, and recurrent networks.
arXiv Detail & Related papers (2023-01-30T12:38:31Z) - A Detailed Study of Interpretability of Deep Neural Network based Top
Taggers [3.8541104292281805]
Recent developments in explainable AI (XAI) allow researchers to explore the inner workings of deep neural networks (DNNs).
We explore the interpretability of models designed to identify jets coming from top quark decay in high-energy proton-proton collisions at the Large Hadron Collider (LHC).
Our studies uncover some major pitfalls of existing XAI methods and illustrate how they can be overcome to obtain consistent and meaningful interpretation of these models.
arXiv Detail & Related papers (2022-10-09T23:02:42Z) - Building Reliable Explanations of Unreliable Neural Networks: Locally
Smoothing Perspective of Model Interpretation [0.0]
We present a novel method for reliably explaining the predictions of neural networks.
Our method is built on the assumption of a smooth landscape in the loss function of the model prediction.
arXiv Detail & Related papers (2021-03-26T08:52:11Z) - Deep Knowledge Tracing with Learning Curves [0.9088303226909278]
We propose a Convolution-Augmented Knowledge Tracing (CAKT) model in this paper.
The model employs three-dimensional convolutional neural networks to explicitly learn a student's recent experience of applying the same knowledge concept as the one in the next question.
CAKT achieves new state-of-the-art performance in predicting students' responses compared with existing models.
arXiv Detail & Related papers (2020-07-26T15:24:51Z) - Towards Interpretable Deep Learning Models for Knowledge Tracing [62.75876617721375]
We propose to adopt a post-hoc method to tackle the interpretability issue for deep learning-based knowledge tracing (DLKT) models.
Specifically, we focus on applying the layer-wise relevance propagation (LRP) method to interpret an RNN-based DLKT model.
Experiment results show the feasibility of using the LRP method to interpret the DLKT model's predictions.
arXiv Detail & Related papers (2020-05-13T04:03:21Z) - Causality-aware counterfactual confounding adjustment for feature
representations learned by deep models [14.554818659491644]
Causal modeling has been recognized as a potential solution to many challenging problems in machine learning (ML).
We describe how a recently proposed counterfactual approach can still be used to deconfound the feature representations learned by deep neural network (DNN) models.
arXiv Detail & Related papers (2020-04-20T17:37:36Z)