Survey of explainable machine learning with visual and granular methods
beyond quasi-explanations
- URL: http://arxiv.org/abs/2009.10221v1
- Date: Mon, 21 Sep 2020 23:39:06 GMT
- Title: Survey of explainable machine learning with visual and granular methods
beyond quasi-explanations
- Authors: Boris Kovalerchuk (1), Muhammad Aurangzeb Ahmad (2 and 3), Ankur
Teredesai (2 and 3) ((1) Department of Computer Science, Central Washington
University, USA (2) Department of Computer Science and Systems, University of
Washington Tacoma, USA (3) Kensci Inc., USA)
- Abstract summary: We focus on moving from the quasi-explanations that dominate in ML to domain-specific explanations supported by granular visuals.
The paper includes results on the theoretical limits of preserving n-D distances in lower dimensions, based on the Johnson-Lindenstrauss lemma.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper surveys visual methods of explainability of Machine Learning (ML)
with a focus on moving from the quasi-explanations that dominate in ML to
domain-specific explanations supported by granular visuals. ML interpretation is
fundamentally a human activity, and visual methods are more readily
interpretable. While efficient visual representations of high-dimensional data
exist, the loss of interpretable information, occlusion, and clutter remain a
challenge, which leads to quasi-explanations. We start with the motivation and
the different definitions of explainability. The paper draws a clear
distinction between quasi-explanations and domain-specific explanations, and
between an explainable ML model and an actually explained one; both
distinctions are critically important for the explainability domain. We discuss
the foundations of interpretability, give an overview of visual
interpretability, and present several types of methods for visualizing ML
models. Next, we present methods of visual discovery of ML models, with a focus
on interpretable models, based on the recently introduced concept of General
Line Coordinates (GLC). These methods take the critical step of creating visual
explanations that are not merely quasi-explanations but genuinely
domain-specific, even though the methods themselves are domain-agnostic. The
paper includes results on the theoretical limits of preserving n-D distances in
lower dimensions, based on the Johnson-Lindenstrauss lemma, along with
point-to-point and point-to-graph GLC approaches and real-world case studies.
The paper also covers traditional visual methods for understanding ML models,
including deep learning and time-series models. We show that many of these
methods are quasi-explanations and need further enhancement to become
domain-specific explanations. We conclude by outlining open problems and
current research frontiers.
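For context, a minimal statement of the bound the abstract refers to; this is the standard form of the Johnson-Lindenstrauss lemma (with constants that vary by proof), not a formula quoted from the survey itself:

  k \;\ge\; \frac{4 \ln n}{\varepsilon^2/2 - \varepsilon^3/3} \;\approx\; \frac{8 \ln n}{\varepsilon^2} \quad \text{for small } \varepsilon

That is, any n points in a high-dimensional Euclidean space can be mapped into R^k with all pairwise distances preserved within a factor of (1 ± ε) once k meets this bound. For example, n = 10,000 points with ε = 0.1 already require k on the order of several thousand dimensions, which illustrates why exact distance preservation in 2-D or 3-D visualizations is impossible in general and motivates the GLC-based representations surveyed in the paper.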
Related papers
- MEGL: Multimodal Explanation-Guided Learning [23.54169888224728]
We propose a novel Multimodal Explanation-Guided Learning (MEGL) framework to enhance model interpretability and improve classification performance.
Our Saliency-Driven Textual Grounding (SDTG) approach integrates spatial information from visual explanations into textual rationales, providing spatially grounded and contextually rich explanations.
We validate MEGL on two new datasets, Object-ME and Action-ME, for image classification with multimodal explanations.
arXiv Detail & Related papers (2024-11-20T05:57:00Z)
- Decoding Diffusion: A Scalable Framework for Unsupervised Analysis of Latent Space Biases and Representations Using Natural Language Prompts [68.48103545146127]
This paper proposes a novel framework for unsupervised exploration of diffusion latent spaces.
We directly leverage natural language prompts and image captions to map latent directions.
Our method provides a more scalable and interpretable understanding of the semantic knowledge encoded within diffusion models.
arXiv Detail & Related papers (2024-10-25T21:44:51Z)
- MOUNTAINEER: Topology-Driven Visual Analytics for Comparing Local Explanations [6.835413642522898]
Topological Data Analysis (TDA) can be an effective method in this domain since it can be used to transform attributions into uniform graph representations.
We present a novel topology-driven visual analytics tool, Mountaineer, that allows ML practitioners to interactively analyze and compare these representations.
We show how Mountaineer enabled us to compare black-box ML explanations and to discern the regions of disagreement between different explanations and their causes.
arXiv Detail & Related papers (2024-06-21T19:28:50Z)
- Sparsity-Guided Holistic Explanation for LLMs with Interpretable Inference-Time Intervention [53.896974148579346]
Large Language Models (LLMs) have achieved unprecedented breakthroughs in various natural language processing domains.
The enigmatic "black-box" nature of LLMs remains a significant challenge for interpretability, hampering transparent and accountable applications.
We propose a novel methodology anchored in sparsity-guided techniques, aiming to provide a holistic interpretation of LLMs.
arXiv Detail & Related papers (2023-12-22T19:55:58Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning [53.90744622542961]
Reasoning in mathematical domains remains a significant challenge for small language models (LMs).
We introduce a new method that exploits existing mathematical problem datasets with diverse annotation styles.
Experimental results show that our strategy enables a LLaMA-7B model to outperform prior approaches.
arXiv Detail & Related papers (2023-07-16T05:41:53Z)
- Logic-Based Explainability in Machine Learning [0.0]
The operation of the most successful Machine Learning models is incomprehensible to human decision makers.
In recent years, there have been efforts to devise approaches for explaining ML models.
This paper overviews the ongoing research efforts on computing rigorous model-based explanations of ML models.
arXiv Detail & Related papers (2022-10-24T13:43:07Z)
- This looks more like that: Enhancing Self-Explaining Models by Prototypical Relevance Propagation [17.485732906337507]
We present a case study of the self-explaining network, ProtoPNet, in the presence of a spectrum of artifacts.
We introduce a novel method for generating more precise model-aware explanations.
In order to obtain a clean dataset, we propose to use multi-view clustering strategies for segregating the artifact images.
arXiv Detail & Related papers (2021-08-27T09:55:53Z)
- Explainable Matrix -- Visualization for Global and Local Interpretability of Random Forest Classification Ensembles [78.6363825307044]
We propose Explainable Matrix (ExMatrix), a novel visualization method for Random Forest (RF) interpretability.
It employs a simple yet powerful matrix-like visual metaphor, where rows are rules, columns are features, and cells are rule predicates.
The applicability of ExMatrix is confirmed via different examples, showing how it can be used in practice to promote the interpretability of RF models.
arXiv Detail & Related papers (2020-05-08T21:03:48Z)
- Explain and Improve: LRP-Inference Fine-Tuning for Image Captioning Models [82.3793660091354]
This paper analyzes the predictions of image captioning models with attention mechanisms beyond visualizing the attention itself.
We develop variants of layer-wise relevance propagation (LRP) and gradient-based explanation methods, tailored to image captioning models with attention mechanisms.
arXiv Detail & Related papers (2020-01-04T05:15:11Z)