Explanation as a process: user-centric construction of multi-level and
multi-modal explanations
- URL: http://arxiv.org/abs/2110.03759v1
- Date: Thu, 7 Oct 2021 19:26:21 GMT
- Title: Explanation as a process: user-centric construction of multi-level and
multi-modal explanations
- Authors: Bettina Finzel, David E. Tafler, Stephan Scheele and Ute Schmid
- Abstract summary: We present a process-based approach that combines multi-level and multi-modal explanations.
We use Inductive Logic Programming, an interpretable machine learning approach, to learn a comprehensible model.
- Score: 0.34410212782758043
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, XAI research has mainly been concerned with developing new
technical approaches to explaining deep learning models. Only recently has research
started to acknowledge the need to tailor explanations to the different contexts
and requirements of stakeholders. Explanations must suit not only the developers of
models, but also domain experts and end users. Thus, in order to satisfy
different stakeholders, explanation methods need to be combined. While
multi-modal explanations have been used to make model predictions more
transparent, less research has focused on treating explanation as a process,
where users can ask for information according to the level of understanding
gained at a certain point in time. Consequently, in addition to multi-modal
explanations, users should be given the opportunity to explore explanations at
different levels of abstraction. We present a process-based approach that combines
multi-level and multi-modal explanations. The user can ask for textual
explanations or visualizations through conversational interaction in a
drill-down manner. We use Inductive Logic Programming, an interpretable machine
learning approach, to learn a comprehensible model. Further, we present an
algorithm that creates an explanatory tree for each example for which a
classifier decision is to be explained. The explanatory tree can be navigated
by the user to obtain answers at different levels of detail. We provide a
proof-of-concept implementation for concepts induced from a semantic net about
living beings.
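The drill-down interaction described above maps naturally onto a tree data structure. Below is a minimal Python sketch of how such a navigable explanatory tree might be represented; the ExplanationNode class, the grandparent example rule, and the toy facts are illustrative assumptions, not the paper's actual ILP-based implementation.
```python
# Hypothetical sketch of a drill-down explanatory tree for one classified example.
# The class name and the example rule are illustrative, not the authors' code.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ExplanationNode:
    """One step of the explanation, expandable into finer-grained sub-steps."""
    summary: str                                              # short textual answer at this level
    children: List["ExplanationNode"] = field(default_factory=list)

    def drill_down(self) -> List["ExplanationNode"]:
        """Return the next, more detailed level of the explanation."""
        return self.children


# Toy proof tree for an ILP-style rule such as
#   grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
root = ExplanationNode(
    "anna is classified as a grandparent of carl",
    [
        ExplanationNode(
            "because anna is a parent of bob",
            [ExplanationNode("the semantic net contains the fact parent(anna, bob)")],
        ),
        ExplanationNode(
            "and bob is a parent of carl",
            [ExplanationNode("the semantic net contains the fact parent(bob, carl)")],
        ),
    ],
)

# Conversational drill-down: start at the root and repeatedly ask for more detail.
level = [root]
while level:
    print(" | ".join(node.summary for node in level))
    level = [child for node in level for child in node.drill_down()]
```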
Related papers
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- May I Ask a Follow-up Question? Understanding the Benefits of Conversations in Neural Network Explainability [17.052366688978935]
We investigate if free-form conversations can enhance users' comprehension of static explanations.
We measure the effect of the conversation on participants' ability to choose from three machine learning models.
Our findings highlight the importance of customized model explanations in the format of free-form conversations.
arXiv Detail & Related papers (2023-09-25T09:00:38Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Interpretable Deep Learning: Interpretations, Interpretability, Trustworthiness, and Beyond [49.93153180169685]
We introduce and clarify two basic concepts, interpretations and interpretability, that are often confused.
We elaborate on the design of several recent interpretation algorithms from different perspectives by proposing a new taxonomy.
We summarize the existing work in evaluating models' interpretability using "trustworthy" interpretation algorithms.
arXiv Detail & Related papers (2021-03-19T08:40:30Z)
- Explanation from Specification [3.04585143845864]
We formulate an approach where the type of explanation produced is guided by a specification.
Two examples are discussed: explanations for Bayesian networks using the theory of argumentation, and explanations for graph neural networks.
The approach is motivated by a theory of explanation in the philosophy of science, and it is related to current questions in the philosophy of science on the role of machine learning.
arXiv Detail & Related papers (2020-12-13T23:27:48Z)
- Towards Interpretable Natural Language Understanding with Explanations as Latent Variables [146.83882632854485]
We develop a framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.
Our framework treats natural language explanations as latent variables that model the underlying reasoning process of a neural model.
arXiv Detail & Related papers (2020-10-24T02:05:56Z)
- DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models [36.50754934147469]
We exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models.
We design DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets.
arXiv Detail & Related papers (2020-08-19T09:44:47Z)
- Sequential Explanations with Mental Model-Based Policies [20.64968620536829]
We apply a reinforcement learning framework to provide explanations based on the explainee's mental model.
We conduct novel online human experiments where explanations are selected and presented to participants.
Our results suggest that mental model-based policies may increase interpretability over multiple sequential explanations.
arXiv Detail & Related papers (2020-07-17T14:43:46Z)
- LIMEtree: Interactively Customisable Explanations Based on Local Surrogate Multi-output Regression Trees [21.58324172085553]
We introduce a model-agnostic and post-hoc local explainability technique for black-box predictions called LIMEtree.
We validate our algorithm on a deep neural network trained for object detection in images and compare it against Local Interpretable Model-agnostic Explanations (LIME).
Our method comes with local fidelity guarantees and can produce a range of diverse explanation types (see the local-surrogate sketch after this list).
arXiv Detail & Related papers (2020-05-04T12:31:29Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
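To make the idea of local surrogate explanations (see the LIMEtree entry above) more concrete, here is a generic sketch in the spirit of LIME-style methods: the black box is queried on perturbations around a single instance, and a shallow regression tree is fitted as a locally faithful, human-readable approximation. scikit-learn is an assumed dependency, and the code illustrates the general technique rather than the LIMEtree algorithm itself, which builds multi-output regression trees with local fidelity guarantees.
```python
# Generic local-surrogate sketch (LIME-style), not the actual LIMEtree algorithm.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier          # stand-in black-box model
from sklearn.tree import DecisionTreeRegressor, export_text

# Train a black box on synthetic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Perturb one instance locally and query the black box for class probabilities.
instance = X[0]
rng = np.random.default_rng(0)
neighbours = instance + rng.normal(scale=0.3, size=(200, X.shape[1]))
targets = black_box.predict_proba(neighbours)[:, 1]

# Fit a shallow regression tree as the local surrogate and print it as an explanation.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(neighbours, targets)
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(X.shape[1])]))
```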
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.