GLIME: A new graphical methodology for interpretable model-agnostic
explanations
- URL: http://arxiv.org/abs/2107.09927v1
- Date: Wed, 21 Jul 2021 08:06:40 GMT
- Title: GLIME: A new graphical methodology for interpretable model-agnostic
explanations
- Authors: Zoumpolia Dikopoulou, Serafeim Moustakidis, Patrik Karlsson
- Abstract summary: This paper contributes to the development of a novel graphical explainability tool for black box models.
The proposed XAI methodology, termed gLIME, provides graphical model-agnostic explanations at either the global scale (for the entire dataset) or the local scale (for specific data points).
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Explainable artificial intelligence (XAI) is an emerging new domain in which
a set of processes and tools allow humans to better comprehend the decisions
generated by black box models. However, most of the available XAI tools are
often limited to simple explanations that mainly quantify the impact of
individual features on the model's output. As a result, human users cannot
understand how the features relate to each other when predictions are made, and
the inner workings of the trained models remain hidden. This paper
contributes to the development of a novel graphical explainability tool that
not only indicates the significant features of the model but also reveals the
conditional relationships between features and the inference, capturing both the
direct and indirect impact of features on the model's decision. The proposed
XAI methodology, termed gLIME, provides graphical model-agnostic
explanations at either the global scale (for the entire dataset) or the local scale
(for specific data points). It relies on a combination of local interpretable
model-agnostic explanations (LIME) with graphical least absolute shrinkage and
selection operator (GLASSO) producing undirected Gaussian graphical models.
Regularization is adopted to shrink small partial correlation coefficients to
zero providing sparser and more interpretable graphical explanations. Two
well-known classification datasets (BIOPSY and OAI) were selected to confirm
the superiority of gLIME over LIME in terms of both robustness and consistency
over multiple permutations. Specifically, gLIME achieved higher feature-importance
stability across the two datasets (76%-96%
compared to 52%-77% using LIME). gLIME demonstrates a unique potential to
extend the functionality of the current state-of-the-art in XAI by providing
informative, graphically presented explanations that could help unlock black boxes.
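
As a concrete sketch of the LIME + GLASSO pipeline described above, the snippet below draws LIME-style perturbations around an instance, appends the black-box prediction as an extra variable, and fits scikit-learn's GraphicalLasso to obtain a sparse precision matrix whose off-diagonal entries yield the partial correlations that form the edges of an undirected Gaussian graphical model. This is a minimal, hypothetical reconstruction of the general idea, not the authors' implementation; the helper name glime_graph, the random-forest stand-in for the black box, and the perturbation scheme are assumptions.

```python
# Minimal sketch (assumed, not the authors' code): LIME-style perturbations + GLASSO.
import numpy as np
from sklearn.covariance import GraphicalLasso
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Stand-in black box trained on a public biopsy-style dataset.
X, y = load_breast_cancer(return_X_y=True)
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def glime_graph(instance, model, X_train, n_samples=2000, alpha=0.2, seed=0):
    """Sparse partial-correlation graph over the features plus the black-box
    prediction, estimated from local perturbations around `instance`."""
    rng = np.random.default_rng(seed)
    # LIME-style neighbourhood: Gaussian noise scaled by the feature std.
    Z = instance + rng.normal(0.0, X_train.std(axis=0),
                              size=(n_samples, X_train.shape[1]))
    preds = model.predict_proba(Z)[:, 1]            # black-box responses
    data = np.column_stack([Z, preds])              # features + prediction node
    data = (data - data.mean(axis=0)) / (data.std(axis=0) + 1e-12)
    # GLASSO: l1-penalised precision matrix; small entries shrink to exactly zero,
    # which is what makes the resulting graphical explanation sparse.
    precision = GraphicalLasso(alpha=alpha, max_iter=200).fit(data).precision_
    d = np.sqrt(np.diag(precision))
    partial_corr = -precision / np.outer(d, d)      # rho_ij = -P_ij / sqrt(P_ii * P_jj)
    np.fill_diagonal(partial_corr, 1.0)
    return partial_corr

# Local explanation graph for one data point: non-zero entries are edges.
pc = glime_graph(X[0], black_box, X)
edges = np.argwhere(np.triu(np.abs(pc) > 1e-6, k=1))
print(f"{len(edges)} edges in the undirected graphical explanation")
```

Fitting the same estimator on the full (standardized) training set instead of a local neighbourhood would give the global-scale explanation mentioned in the abstract.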
Related papers
- Derivative-Free Diffusion Manifold-Constrained Gradient for Unified XAI [59.96044730204345]
We introduce Derivative-Free Diffusion Manifold-Constrained Gradients (FreeMCG).
FreeMCG serves as an improved basis for the explainability of a given neural network.
We show that our method yields state-of-the-art results while preserving the essential properties expected of XAI tools.
arXiv Detail & Related papers (2024-11-22T11:15:14Z) - Explainable Artificial Intelligence for Dependent Features: Additive Effects of Collinearity [0.0]
We propose Additive Effects of Collinearity (AEC), a novel XAI method that accounts for the collinearity issue.
The proposed method is implemented using simulated and real data to validate its efficiency in comparison with a state-of-the-art XAI method.
arXiv Detail & Related papers (2024-10-30T07:00:30Z) - How Do Large Language Models Understand Graph Patterns? A Benchmark for Graph Pattern Comprehension [53.6373473053431]
This work introduces a benchmark to assess large language models' capabilities in graph pattern tasks.
We have developed a benchmark that evaluates whether LLMs can understand graph patterns based on either terminological or topological descriptions.
Our benchmark encompasses both synthetic and real datasets, with a total of 11 tasks and 7 models.
arXiv Detail & Related papers (2024-10-04T04:48:33Z) - MOUNTAINEER: Topology-Driven Visual Analytics for Comparing Local Explanations [6.835413642522898]
Topological Data Analysis (TDA) can be an effective method in this domain since it can be used to transform attributions into uniform graph representations.
We present a novel topology-driven visual analytics tool, Mountaineer, that allows ML practitioners to interactively analyze and compare these representations.
We show how Mountaineer enabled us to compare black-box ML explanations and discern regions of and causes of disagreements between different explanations.
arXiv Detail & Related papers (2024-06-21T19:28:50Z) - Graph Relation Aware Continual Learning [3.908470250825618]
Continual graph learning (CGL) studies the problem of learning from an infinite stream of graph data.
We design a relation-aware adaptive model, dubbed RAM-CG, that consists of a relation-discovery module to explore latent relations behind edges.
RAM-CG provides significant accuracy improvements of 2.2%, 6.9% and 6.6% over the state-of-the-art results on the CitationNet, OGBN-arxiv and TWITCH datasets.
arXiv Detail & Related papers (2023-08-16T09:53:20Z) - A Perspective on Explainable Artificial Intelligence Methods: SHAP and LIME [4.328967621024592]
We propose a framework for interpretation of two widely used XAI methods.
We discuss their outcomes in terms of model-dependency and in the presence of collinearity.
The results indicate that SHAP and LIME are highly affected by the adopted ML model and feature collinearity, raising a note of caution on their usage and interpretation.
arXiv Detail & Related papers (2023-05-03T10:04:46Z) - Studying How to Efficiently and Effectively Guide Models with Explanations [52.498055901649025]
'Model guidance' is the idea of regularizing a model's explanations to ensure that they are "right for the right reasons".
We conduct an in-depth evaluation across various loss functions, attribution methods, models, and 'guidance depths' on the PASCAL VOC 2007 and MS COCO 2014 datasets.
Specifically, we guide the models via bounding box annotations, which are much cheaper to obtain than the commonly used segmentation masks.
arXiv Detail & Related papers (2023-03-21T15:34:50Z) - GAM(e) changer or not? An evaluation of interpretable machine learning
models based on additive model constraints [5.783415024516947]
This paper investigates a series of intrinsically interpretable machine learning models.
We evaluate the prediction quality of five GAMs compared to six traditional ML models.
arXiv Detail & Related papers (2022-04-19T20:37:31Z) - Model-Agnostic Graph Regularization for Few-Shot Learning [60.64531995451357]
We present a comprehensive study on graph embedded few-shot learning.
We introduce a graph regularization approach that allows a deeper understanding of the impact of incorporating graph information between labels.
Our approach improves the performance of strong base learners by up to 2% on Mini-ImageNet and 6.7% on ImageNet-FS.
arXiv Detail & Related papers (2021-02-14T05:28:13Z) - Structural Landmarking and Interaction Modelling: on Resolution Dilemmas
in Graph Classification [50.83222170524406]
We study the intrinsic difficulty in graph classification under the unified concept of "resolution dilemmas".
We propose SLIM, an inductive neural network model for Structural Landmarking and Interaction Modelling.
arXiv Detail & Related papers (2020-06-29T01:01:42Z) - Explainable Matrix -- Visualization for Global and Local
Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
ExMatrix applicability is confirmed via different examples, showing how it can be used in practice to promote the interpretability of RF models.
arXiv Detail & Related papers (2020-05-08T21:03:48Z)