Explaining Classifications to Non Experts: An XAI User Study of Post Hoc
Explanations for a Classifier When People Lack Expertise
- URL: http://arxiv.org/abs/2212.09342v1
- Date: Mon, 19 Dec 2022 10:19:05 GMT
- Title: Explaining Classifications to Non Experts: An XAI User Study of Post Hoc
Explanations for a Classifier When People Lack Expertise
- Authors: Courtney Ford and Mark T Keane
- Abstract summary: This paper reports a novel user study on how people's expertise in a domain affects their understanding of post-hoc explanations.
The results show that people's understanding of explanations for correct and incorrect classifications changes dramatically, on several dimensions.
- Score: 7.881140597011731
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Very few eXplainable AI (XAI) studies consider how users' understanding of
explanations might change depending on whether they know more or less about the
to-be-explained domain (i.e., whether they differ in their expertise). Yet,
expertise is a critical facet of most high-stakes human decision making (e.g.,
understanding how a trainee doctor differs from an experienced consultant).
Accordingly, this paper reports a novel user study (N=96) on how people's
expertise in a domain affects their understanding of post-hoc explanations by
example for a deep-learning, black-box classifier. The results show that
people's understanding of explanations for correct and incorrect classifications
changes dramatically, on several dimensions (e.g., response times, perceptions
of correctness and helpfulness), when the image-based domain considered is
familiar (i.e., MNIST) as opposed to unfamiliar (i.e., Kannada MNIST). The
wider implications of these new findings for XAI strategies are discussed.
Related papers
- Not All Explanations are Created Equal: Investigating the Pitfalls of Current XAI Evaluation [0.0]
XAI aims to create transparency in modern AI models by offering explanations of the models to human users.
Most studies done within this field conduct simple user surveys to analyze the difference between no explanations and those generated by their proposed solution.
Our study looks to highlight this pitfall: most explanations, regardless of quality or correctness, will increase user satisfaction.
arXiv Detail & Related papers (2025-09-27T08:30:38Z) - Fool Me Once? Contrasting Textual and Visual Explanations in a Clinical Decision-Support Setting [43.110187812734864]
We evaluate three types of explanations: visual explanations (saliency maps), natural language explanations, and a combination of both modalities.
We find that text-based explanations lead to significant over-reliance, which is alleviated by combining them with saliency maps.
We also observe that the quality of explanations, that is, how much factually correct information they entail, and how much this aligns with AI correctness, significantly impacts the usefulness of the different explanation types.
arXiv Detail & Related papers (2024-10-16T06:43:02Z) - Confident Teacher, Confident Student? A Novel User Study Design for Investigating the Didactic Potential of Explanations and their Impact on Uncertainty [1.0855602842179624]
We investigate the impact of explanations on human performance on a challenging visual task using Explainable Artificial Intelligence (XAI).
We find that users become more accurate in their annotations and demonstrate less uncertainty with AI assistance.
We also find negative effects of explanations: users tend to replicate the model's predictions more often when shown explanations.
arXiv Detail & Related papers (2024-09-10T12:59:50Z) - Disagreement amongst counterfactual explanations: How transparency can
be deceptive [0.0]
Counterfactual explanations are increasingly used as an Explainable Artificial Intelligence technique.
Not every algorithm creates uniform explanations for the same instance.
Ethical issues arise when malicious agents use this diversity to fairwash an unfair machine learning model.
arXiv Detail & Related papers (2023-04-25T09:15:37Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual
Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often mis-interpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z) - The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups--people with and without AI background--perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z) - Mitigating belief projection in explainable artificial intelligence via
Bayesian Teaching [4.864819846886143]
Explainable AI (XAI) attempts to improve human understanding but rarely accounts for how people typically reason about unfamiliar agents.
We propose explicitly modeling the human explainee via Bayesian Teaching, which evaluates explanations by how much they shift explainees' inferences toward a desired goal.
arXiv Detail & Related papers (2021-02-07T21:23:24Z) - Explainable AI and Adoption of Algorithmic Advisors: an Experimental
Study [0.6875312133832077]
We develop an experimental methodology where participants play a web-based game, during which they receive advice from either a human or an algorithmic advisor.
We evaluate whether the different types of explanations affect the readiness to adopt, willingness to pay and trust a financial AI consultant.
We find that the types of explanations that promote adoption during first encounter differ from those that are most successful following failure or when cost is involved.
arXiv Detail & Related papers (2021-01-05T09:34:38Z) - This is not the Texture you are looking for! Introducing Novel
Counterfactual Explanations for Non-Experts using Generative Adversarial
Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)