Local Interpretable Model Agnostic Shap Explanations for machine
learning models
- URL: http://arxiv.org/abs/2210.04533v1
- Date: Mon, 10 Oct 2022 10:07:27 GMT
- Title: Local Interpretable Model Agnostic Shap Explanations for machine
learning models
- Authors: P. Sai Ram Aditya, Mayukha Pal
- Abstract summary: We propose a methodology that we define as Local Interpretable Model Agnostic Shap Explanations (LIMASE).
This proposed technique uses Shapley values under the LIME paradigm to achieve the following: (a) explain the prediction of any model by using a locally faithful and interpretable decision tree model, on which the Tree Explainer is used to calculate the Shapley values and give visually interpretable explanations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the advancement of technology for artificial intelligence (AI) based
solutions and analytics compute engines, machine learning (ML) models are
getting more complex day by day. Most of these models are generally used as a
black box without user interpretability. Such complex ML models make it more
difficult for people to understand or trust their predictions. There are a
variety of frameworks that use explainable AI (XAI) methods to demonstrate the
explainability and interpretability of ML models and make their predictions more
trustworthy. In this manuscript, we propose a methodology that we define as
Local Interpretable Model Agnostic Shap Explanations (LIMASE). This proposed ML
explanation technique uses Shapley values under the LIME paradigm to achieve
the following: (a) explain the prediction of any model by using a locally
faithful and interpretable decision tree model, on which the Tree Explainer is
used to calculate the Shapley values and give visually interpretable
explanations; (b) provide visually interpretable global explanations by
plotting local explanations of several data points; (c) demonstrate a solution
for the submodular optimization problem; (d) bring insight into regional
interpretation; and (e) achieve faster computation compared to the use of the
Kernel Explainer.
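
The LIMASE recipe described in the abstract (LIME-style perturbation around an instance, a locally weighted decision-tree surrogate, and SHAP's Tree Explainer applied to that surrogate) can be sketched roughly as follows. This is a minimal illustration based only on the abstract, with assumed details (Gaussian perturbations, an exponential proximity kernel, a depth-4 tree, toy data); it is not the authors' implementation.

```python
# Minimal, illustrative sketch of a LIMASE-style pipeline (not the authors' code):
# 1) perturb around the instance (LIME-style), 2) fit a locally weighted decision
#    tree surrogate, 3) run SHAP's TreeExplainer on that surrogate.
import numpy as np
from sklearn.ensemble import RandomForestRegressor   # stand-in black-box model
from sklearn.tree import DecisionTreeRegressor
import shap

rng = np.random.default_rng(0)

# Toy data and a black-box model (any model with a .predict would do).
X = rng.normal(size=(500, 4))
y = X[:, 0] * 2.0 + np.sin(X[:, 1]) + rng.normal(scale=0.1, size=500)
black_box = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

def limase_explain(instance, model, n_samples=1000, kernel_width=0.75):
    """Shapley values for `instance` from a locally faithful tree surrogate."""
    # LIME-style neighbourhood: Gaussian perturbations around the instance.
    neighbourhood = instance + rng.normal(scale=X.std(axis=0),
                                          size=(n_samples, X.shape[1]))
    preds = model.predict(neighbourhood)

    # Proximity weights (exponential kernel on Euclidean distance).
    dist = np.linalg.norm(neighbourhood - instance, axis=1)
    weights = np.exp(-(dist ** 2) / (kernel_width ** 2))

    # Locally faithful, interpretable surrogate: a shallow decision tree.
    surrogate = DecisionTreeRegressor(max_depth=4, random_state=0)
    surrogate.fit(neighbourhood, preds, sample_weight=weights)

    # TreeExplainer computes exact Shapley values for the tree surrogate.
    explainer = shap.TreeExplainer(surrogate)
    return explainer.shap_values(instance.reshape(1, -1))[0]

phi = limase_explain(X[0], black_box)
print(dict(zip([f"x{i}" for i in range(4)], np.round(phi, 3))))
```

In this sketch, the speed-up claimed in (e) would come from the Tree Explainer computing exact Shapley values on the small surrogate tree rather than sampling coalitions with the Kernel Explainer.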
Related papers
- MOUNTAINEER: Topology-Driven Visual Analytics for Comparing Local Explanations [6.835413642522898]
Topological Data Analysis (TDA) can be an effective method in this domain since it can be used to transform attributions into uniform graph representations.
We present a novel topology-driven visual analytics tool, Mountaineer, that allows ML practitioners to interactively analyze and compare these representations.
We show how Mountaineer enabled us to compare black-box ML explanations and discern regions of and causes of disagreements between different explanations.
arXiv Detail & Related papers (2024-06-21T19:28:50Z)
- LLMs for XAI: Future Directions for Explaining Explanations [50.87311607612179]
We focus on refining explanations computed using existing XAI algorithms.
Initial experiments and user study suggest that LLMs offer a promising way to enhance the interpretability and usability of XAI.
arXiv Detail & Related papers (2024-05-09T19:17:47Z)
- Pyreal: A Framework for Interpretable ML Explanations [51.14710806705126]
Pyreal is a system for generating a variety of interpretable machine learning explanations.
Pyreal converts data and explanations between the feature spaces expected by the model, relevant explanation algorithms, and human users.
Our studies demonstrate that Pyreal generates more useful explanations than existing systems.
arXiv Detail & Related papers (2023-12-20T15:04:52Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Evaluating and Explaining Large Language Models for Code Using Syntactic Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
arXiv Detail & Related papers (2023-08-07T18:50:57Z)
- Understanding Post-hoc Explainers: The Case of Anchors [6.681943980068051]
We present a theoretical analysis of a rule-based interpretability method that highlights a small set of words to explain a text classifier's decision.
After formalizing its algorithm and providing useful insights, we demonstrate mathematically that Anchors produces meaningful results.
arXiv Detail & Related papers (2023-03-15T17:56:34Z)
- GAM(e) changer or not? An evaluation of interpretable machine learning models based on additive model constraints [5.783415024516947]
This paper investigates a series of intrinsically interpretable machine learning models.
We evaluate the prediction qualities of five GAMs as compared to six traditional ML models.
arXiv Detail & Related papers (2022-04-19T20:37:31Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- MeLIME: Meaningful Local Explanation for Machine Learning Models [2.819725769698229]
We show that our approach, MeLIME, produces more meaningful explanations compared to other techniques over different ML models.
MeLIME generalizes the LIME method, allowing more flexible perturbation sampling and the use of different local interpretable models.
arXiv Detail & Related papers (2020-09-12T16:06:58Z)
- Deducing neighborhoods of classes from a fitted model [68.8204255655161]
This article presents a new kind of interpretable machine learning method.
It can help to understand the partitioning of the feature space into predicted classes in a classification model using quantile shifts.
Basically, real data points (or specific points of interest) are used, and the changes in the prediction after slightly raising or lowering specific features are observed (a minimal sketch of this probing idea appears after this list).
arXiv Detail & Related papers (2020-09-11T16:35:53Z)
- Accurate and Intuitive Contextual Explanations using Linear Model Trees [0.0]
Local post hoc model explanations have gained massive adoption.
Current state-of-the-art methods use rudimentary techniques to generate synthetic data around the point to be explained.
We use a Generative Adversarial Network for synthetic data generation and train a piecewise linear model in the form of Linear Model Trees.
arXiv Detail & Related papers (2020-09-11T10:13:12Z)
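
As a rough illustration of the feature-probing idea summarized above for "Deducing neighborhoods of classes from a fitted model", the sketch below nudges individual features of a point of interest and checks whether the predicted class flips. It is a generic probe under assumed choices (iris data, a random forest, a 10%-of-range shift), not the paper's quantile-shift procedure.

```python
# Minimal sketch: probe how a fitted classifier's prediction changes when a
# single feature of a point of interest is nudged up or down by a small,
# feature-scaled shift. Model, dataset, and shift size are illustrative choices.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(random_state=0).fit(X, y)

point = X[25].copy()                                  # point of interest
base_class = clf.predict(point.reshape(1, -1))[0]

for j in range(X.shape[1]):
    shift = 0.1 * (X[:, j].max() - X[:, j].min())     # 10% of the feature's range
    for direction in (+1.0, -1.0):
        probe = point.copy()
        probe[j] += direction * shift
        new_class = clf.predict(probe.reshape(1, -1))[0]
        if new_class != base_class:
            print(f"feature {j}: shift of {direction * shift:+.2f} "
                  f"flips class {base_class} -> {new_class}")
```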