Stop ordering machine learning algorithms by their explainability! A
user-centered investigation of performance and explainability
- URL: http://arxiv.org/abs/2206.10610v1
- Date: Mon, 20 Jun 2022 08:32:38 GMT
- Title: Stop ordering machine learning algorithms by their explainability! A
user-centered investigation of performance and explainability
- Authors: Lukas-Valentin Herm, Kai Heinrich, Jonas Wanner, Christian Janiesch
- Abstract summary: We study the tradeoff between model performance and explainability of machine learning algorithms.
We find that the tradeoff is much less gradual in the end user's perception.
Results of our second experiment show that while explainable artificial intelligence augmentations can be used to increase explainability, the type of explanation plays an essential role in end user perception.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning algorithms enable advanced decision making in contemporary
intelligent systems. Research indicates that there is a tradeoff between their
model performance and explainability. Machine learning models with higher
performance are often based on more complex algorithms and therefore lack
explainability and vice versa. However, there is little to no empirical
evidence of this tradeoff from an end user perspective. We aim to provide
empirical evidence by conducting two user experiments. Using two distinct
datasets, we first measure the tradeoff for five common classes of machine
learning algorithms. Second, we address the problem of end user perceptions of
explainable artificial intelligence augmentations aimed at increasing the
understanding of the decision logic of high-performing complex models. Our
results diverge from the widespread assumption of a tradeoff curve and indicate
that the tradeoff between model performance and explainability is much less
gradual in the end user's perception. This is in stark contrast to assumed
inherent model interpretability. Further, we found the tradeoff to be
situational, for example due to data complexity. Results of our second
experiment show that while explainable artificial intelligence augmentations
can be used to increase explainability, the type of explanation plays an
essential role in end user perception.
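As a rough illustration of how the performance side of such a comparison can be set up, the sketch below trains several common classes of classifiers and reports their test accuracy. The abstract does not name the study's five algorithm classes or its two datasets, so the scikit-learn models and the breast-cancer dataset used here are assumptions for illustration only; perceived explainability was elicited from end users in the experiments and is not something this code measures.

```python
# Illustrative sketch only: trains several common classifier classes and reports
# test accuracy, i.e. the performance side of the tradeoff. The algorithm choices
# and dataset are assumptions; the paper's perceived-explainability measurements
# came from user experiments, not from code like this.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=5000),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    "support vector machine": SVC(),
    "neural network (MLP)": MLPClassifier(max_iter=2000, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: test accuracy = {acc:.3f}")
```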
Related papers
- Visual-O1: Understanding Ambiguous Instructions via Multi-modal Multi-turn Chain-of-thoughts Reasoning [53.45295657891099]
This paper proposes Visual-O1, a multi-modal multi-turn chain-of-thought reasoning framework.
It simulates human multi-modal multi-turn reasoning, providing instantial experience for highly intelligent models.
Our work highlights the potential of artificial intelligence to work like humans in real-world scenarios with uncertainty and ambiguity.
arXiv Detail & Related papers (2024-10-04T11:18:41Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Interpretable Imitation Learning with Dynamic Causal Relations [65.18456572421702]
We propose to expose captured knowledge in the form of a directed acyclic causal graph.
We also design this causal discovery process to be state-dependent, enabling it to model the dynamics in latent causal graphs.
The proposed framework is composed of three parts: a dynamic causal discovery module, a causality encoding module, and a prediction module, and is trained in an end-to-end manner.
arXiv Detail & Related papers (2023-09-30T20:59:42Z)
- Counterfactual Collaborative Reasoning [41.89113539041682]
Causal reasoning and logical reasoning are two important types of reasoning abilities for human intelligence.
We propose Counterfactual Collaborative Reasoning (CCR), which conducts counterfactual logic reasoning to improve performance.
Experiments on three real-world datasets show that CCR achieves better performance than non-augmented models and implicitly augmented models.
arXiv Detail & Related papers (2023-06-30T23:01:10Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
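To make the "diversity-enforcing loss" idea concrete, here is a minimal sketch of one way such a term can be written: it penalizes pairwise similarity among several candidate latent perturbations so the resulting counterfactuals stay distinct. The function below is an illustrative assumption, not the paper's actual loss or architecture.

```python
# Illustrative sketch of a diversity-enforcing term (an assumption, not the
# paper's actual loss): penalize pairwise similarity among several candidate
# latent perturbations so the resulting counterfactuals stay distinct.
import torch
import torch.nn.functional as F

def diversity_loss(perturbations: torch.Tensor) -> torch.Tensor:
    """perturbations: (k, d) tensor of k candidate perturbations in a d-dim latent space."""
    k = perturbations.shape[0]
    loss = perturbations.new_zeros(())
    for i in range(k):
        for j in range(i + 1, k):
            sim = F.cosine_similarity(perturbations[i], perturbations[j], dim=0)
            loss = loss + sim ** 2          # similar directions are penalized
    return loss / (k * (k - 1) / 2)         # average over pairs

candidates = torch.randn(8, 16, requires_grad=True)
print(diversity_loss(candidates))           # would be added to the counterfactual objective
```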
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Learning outside the Black-Box: The pursuit of interpretable models [78.32475359554395]
This paper proposes an algorithm that produces a continuous global interpretation of any given continuous black-box function.
Our interpretation represents a leap forward from the previous state of the art.
arXiv Detail & Related papers (2020-11-17T12:39:44Z)
- Human-interpretable model explainability on high-dimensional data [8.574682463936007]
We introduce a framework for human-interpretable explainability on high-dimensional data, consisting of two modules.
First, we apply a semantically meaningful latent representation, both to reduce the raw dimensionality of the data, and to ensure its human interpretability.
Second, we adapt the Shapley paradigm for model-agnostic explainability to operate on these latent features. This leads to interpretable model explanations that are both theoretically controlled and computationally tractable.
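A minimal sketch of this two-module idea follows, under the assumption that plain PCA stands in for the semantically meaningful latent representation and that exact Shapley values are computed over the handful of latent features; the paper's actual representation and estimator are not reproduced here.

```python
# Minimal sketch of the two-module idea (assumptions, not the paper's code):
# 1) compress the raw features into a small latent space (plain PCA stands in for
#    the semantically meaningful representation), 2) attribute one prediction to
#    the latent features with exact Shapley values over that small feature set.
from itertools import combinations
from math import factorial
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)
pca = PCA(n_components=4).fit(X)
Z = pca.transform(X)                       # latent features (module 1)
model = LogisticRegression(max_iter=5000).fit(Z, y)
background = Z.mean(axis=0)                # reference values for "absent" features

def value(z, subset):
    """Model output when only the latent features in `subset` are taken from z."""
    masked = background.copy()
    idx = list(subset)
    if idx:
        masked[idx] = z[idx]
    return model.predict_proba(masked.reshape(1, -1))[0, 1]

def shapley(z):
    """Exact Shapley attribution over the (few) latent features (module 2)."""
    n = z.shape[0]
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += w * (value(z, S + (i,)) - value(z, S))
    return phi

print(shapley(Z[0]))                       # contribution of each latent feature
```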
arXiv Detail & Related papers (2020-10-14T20:06:28Z)
- Don't Explain without Verifying Veracity: An Evaluation of Explainable AI with Video Activity Recognition [24.10997778856368]
This paper explores how explanation veracity affects user performance and agreement in intelligent systems.
We compare variations in explanation veracity for a video review and querying task.
Results suggest that low veracity explanations significantly decrease user performance and agreement.
arXiv Detail & Related papers (2020-05-05T17:06:46Z)
- Intelligence, physics and information -- the tradeoff between accuracy and simplicity in machine learning [5.584060970507507]
I believe viewing intelligence in terms of many integral aspects, and a universal two-term tradeoff between task performance and complexity, provides two feasible perspectives.
In this thesis, I address several key questions in some aspects of intelligence, and study the phase transitions in the two-term tradeoff.
arXiv Detail & Related papers (2020-01-11T18:34:29Z)
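The "two-term tradeoff between task performance and complexity" mentioned in the last entry is usually written as a single objective with a tradeoff coefficient; the form below is a generic sketch and not necessarily the exact objective studied in that thesis.

```latex
% Generic two-term tradeoff (an illustrative form, not necessarily the thesis's
% exact objective): \mathcal{L}_{task} measures task error, C(\theta) measures
% model complexity, and \lambda sets how much performance is traded for simplicity.
\[
  \min_{\theta} \; \mathcal{L}_{\mathrm{task}}(\theta) + \lambda \, C(\theta)
\]
```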
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.