The Grammar of Interactive Explanatory Model Analysis
- URL: http://arxiv.org/abs/2005.00497v4
- Date: Wed, 4 May 2022 14:35:46 GMT
- Title: The Grammar of Interactive Explanatory Model Analysis
- Authors: Hubert Baniecki, Dariusz Parzych, Przemyslaw Biecek
- Abstract summary: We show how different Explanatory Model Analysis (EMA) methods complement each other.
We formalize the grammar of IEMA to describe potential human-model dialogues.
IEMA is implemented in a widely used human-centered open-source software framework.
- Score: 7.812073412066698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing need for in-depth analysis of predictive models leads to a series
of new methods for explaining their local and global properties. Which of these
methods is the best? It turns out that this is an ill-posed question. One
cannot sufficiently explain a black-box machine learning model using a single
method that gives only one perspective. Isolated explanations are prone to
misunderstanding, leading to wrong or simplistic reasoning. This problem is
known as the Rashomon effect and refers to diverse, even contradictory,
interpretations of the same phenomenon. Surprisingly, most methods developed
for explainable and responsible machine learning focus on a single aspect of
the model's behavior. In contrast, we showcase the problem of explainability as
an interactive and sequential analysis of a model. This paper proposes how
different Explanatory Model Analysis (EMA) methods complement each other and
discusses why it is essential to juxtapose them. The introduced process of
Interactive EMA (IEMA) derives from the algorithmic side of explainable machine
learning and aims to embrace ideas developed in cognitive sciences. We
formalize the grammar of IEMA to describe potential human-model dialogues. It
is implemented in a widely used human-centered open-source software framework
that adopts interactivity, customizability and automation as its main traits.
We conduct a user study to evaluate the usefulness of IEMA, which indicates
that an interactive sequential analysis of a model increases the performance
and confidence of human decision making.
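The abstract frames explainability as a sequential human-model dialogue in which complementary explanation methods are juxtaposed rather than used in isolation. As a rough, non-authoritative sketch of that idea (not the formal grammar defined in the paper), the snippet below encodes a few common EMA methods and hand-picked transitions between them; all method names and transitions here are illustrative assumptions.

```python
# Illustrative sketch only: IEMA treats explanation methods as steps in a
# human-model dialogue whose "grammar" suggests which questions to ask next.
# The methods and transitions below are simplifying assumptions for this
# example, not the formal grammar from the paper.

EMA_TRANSITIONS = {
    # global (dataset-level) explanations
    "model_performance":    ["feature_importance", "prediction_breakdown"],
    "feature_importance":   ["partial_dependence", "prediction_breakdown"],
    "partial_dependence":   ["ceteris_paribus", "feature_importance"],
    # local (instance-level) explanations
    "prediction_breakdown": ["shapley_values", "ceteris_paribus"],
    "shapley_values":       ["ceteris_paribus", "feature_importance"],
    "ceteris_paribus":      ["partial_dependence", "prediction_breakdown"],
}

def next_questions(current_method: str) -> list[str]:
    """Complementary explanations worth juxtaposing with the current one."""
    return EMA_TRANSITIONS.get(current_method, [])

# Example dialogue: start from a single local explanation and follow the
# grammar, alternating between local and global perspectives instead of
# stopping at one isolated explanation.
step = "prediction_breakdown"
for _ in range(3):
    follow_ups = next_questions(step)
    print(f"{step} -> consider next: {follow_ups}")
    step = follow_ups[0]
```

In the paper itself, this kind of sequential analysis is provided interactively by the referenced open-source framework rather than by hard-coded transitions; the sketch only conveys why juxtaposing complementary explanations matters.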
Related papers
- Unified Explanations in Machine Learning Models: A Perturbation Approach [0.0]
Inconsistencies between XAI and modeling techniques can have the undesirable effect of casting doubt upon the efficacy of these explainability approaches.
We propose a systematic, perturbation-based analysis of a popular, model-agnostic XAI method, SHapley Additive exPlanations (SHAP).
We devise algorithms that generate relative feature importance under dynamic inference across a suite of popular machine learning and deep learning methods, along with metrics that quantify how well explanations generated in the static case hold up.
arXiv Detail & Related papers (2024-05-30T16:04:35Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Learning by Self-Explaining [23.420673675343266]
We introduce a novel workflow in the context of image classification, termed Learning by Self-Explaining (LSX).
LSX utilizes aspects of self-refining AI and human-guided explanatory machine learning.
Our results indicate improvements via Learning by Self-Explaining on several levels.
arXiv Detail & Related papers (2023-09-15T13:41:57Z)
- Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z)
- Leveraging Explanations in Interactive Machine Learning: An Overview [10.284830265068793]
Explanations have gained an increasing level of interest in the AI and Machine Learning (ML) communities.
This paper presents an overview of research where explanations are combined with interactive capabilities.
arXiv Detail & Related papers (2022-07-29T07:46:11Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanations as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- How to Answer Why -- Evaluating the Explanations of AI Through Mental Model Analysis [0.0]
A key question for human-centered AI research is how to validly survey users' mental models.
We evaluate whether mental models are suitable as an empirical research method.
We propose an exemplary method to evaluate explainable AI approaches in a human-centered way.
arXiv Detail & Related papers (2020-01-11T17:15:58Z)