Explainable AI And Visual Reasoning: Insights From Radiology
- URL: http://arxiv.org/abs/2304.03318v1
- Date: Thu, 6 Apr 2023 18:30:27 GMT
- Title: Explainable AI And Visual Reasoning: Insights From Radiology
- Authors: Robert Kaufman, David Kirsh
- Abstract summary: We show that machine-learned classifications lack evidentiary grounding and fail to elicit trust and adoption by potential users.
Insights from this study may generalize to guiding principles for human-centered explanation design based on human reasoning and justification of evidence.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Why do explainable AI (XAI) explanations in radiology, despite their promise
of transparency, still fail to gain human trust? Current XAI approaches provide
justification for predictions; however, these do not meet practitioners' needs.
These XAI explanations lack intuitive coverage of the evidentiary basis for a
given classification, posing a significant barrier to adoption. We posit that
XAI explanations that mirror human processes of reasoning and justification
with evidence may be more useful and trustworthy than traditional visual
explanations like heat maps. Using a radiology case study, we demonstrate how
radiology practitioners convince their colleagues of a diagnostic
conclusion's validity. Machine-learned classifications lack this evidentiary
grounding and consequently fail to elicit trust and adoption by potential
users. Insights from this study may generalize to guiding principles for
human-centered explanation design based on human reasoning and justification of
evidence.
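For concreteness, the "heat maps" the abstract contrasts with evidence-based reasoning are typically pixel-attribution maps. Below is a minimal, hypothetical sketch of one such method, a vanilla gradient saliency map; the untrained toy network and random input are illustrative assumptions, not the paper's setup, and real systems often use variants such as Grad-CAM instead.
```python
import torch
import torch.nn as nn

# Stand-in classifier; a real radiology model would be a CNN trained on
# chest X-rays. This untrained toy network only illustrates the mechanics.
model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

# Hypothetical single-channel "X-ray"; random pixels stand in for real data.
image = torch.rand(1, 1, 224, 224, requires_grad=True)
logits = model(image)
pred = int(logits.argmax())

# Gradient of the predicted class score w.r.t. the input pixels: large
# magnitudes mark pixels the prediction is most sensitive to.
logits[0, pred].backward()
saliency = image.grad.abs().squeeze()   # (224, 224) heat map
saliency = saliency / saliency.max()    # normalize to [0, 1] for display
print(saliency.shape)
```
Overlaying `saliency` on the input yields the familiar heat map. The paper's point is that such maps show where the model looked, not the evidentiary justification a radiologist would offer for the diagnosis.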
Related papers
- Fool Me Once? Contrasting Textual and Visual Explanations in a Clinical Decision-Support Setting [43.110187812734864]
We evaluate three types of explanations: visual explanations (saliency maps), natural language explanations, and a combination of both modalities.
We find that text-based explanations lead to significant over-reliance, which is alleviated by combining them with saliency maps.
We also observe that explanation quality, that is, how much factually correct information an explanation contains and how well it aligns with the AI's correctness, significantly impacts the usefulness of the different explanation types.
arXiv Detail & Related papers (2024-10-16T06:43:02Z)
- People Attribute Purpose to Autonomous Vehicles When Explaining Their Behavior: Insights from Cognitive Science for Explainable AI [22.138074429937795]
It is often argued that effective human-centered explainable artificial intelligence (XAI) should resemble human reasoning.
We propose a framework of explanatory modes to analyze how people frame explanations, whether mechanistic, teleological, or counterfactual.
Our main finding is that participants judge teleological explanations to be of significantly higher quality than counterfactual ones, with perceived teleology being the best predictor of perceived quality.
arXiv Detail & Related papers (2024-03-11T11:48:50Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two groups, people with and without an AI background, perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- A Turing Test for Transparency [0.0]
A central goal of explainable artificial intelligence (XAI) is to improve the trust relationship in human-AI interaction.
Recent empirical evidence shows that explanations can have the opposite effect.
This effect challenges the very goal of XAI and implies that responsible usage of transparent AI methods has to consider the ability of humans to distinguish machine generated from human explanations.
arXiv Detail & Related papers (2021-06-21T20:09:40Z)
- Explainable AI for medical imaging: Explaining pneumothorax diagnoses with Bayesian Teaching [4.707325679181196]
We introduce and evaluate explanations based on Bayesian Teaching; a toy sketch of the underlying example-selection rule appears after this list.
We find that medical experts exposed to explanations successfully predict the AI's diagnostic decisions.
These results show that Explainable AI can be used to support human-AI collaboration in medical imaging.
arXiv Detail & Related papers (2021-06-08T20:49:11Z)
- Explainable AI meets Healthcare: A Study on Heart Disease Dataset [0.0]
The aim is to enlighten practitioners on the understandability and interpretability of explainable AI systems using a variety of techniques.
Our paper contains examples based on the heart disease dataset and elucidates how explainability techniques should be chosen to create trustworthiness.
arXiv Detail & Related papers (2020-11-06T05:18:43Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- The role of explainability in creating trustworthy artificial intelligence for health care: a comprehensive survey of the terminology, design choices, and evaluation strategies [1.2762298148425795]
Lack of transparency is identified as one of the main barriers to implementation of AI systems in health care.
We review the recent literature to provide guidance to researchers and practitioners on the design of explainable AI systems.
We conclude that explainable modelling can contribute to trustworthy AI, but the benefits of explainability still need to be proven in practice.
arXiv Detail & Related papers (2020-07-31T09:08:27Z)
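Returning to the Bayesian Teaching entry above: the selection rule commonly associated with Bayesian Teaching chooses explanatory examples in proportion to how strongly they would move a simulated learner's posterior toward the target conclusion. The toy sketch below assumes a discrete setup with random likelihoods; the names `learner_posterior` and `target` are illustrative, and this is not the paper's pneumothorax pipeline.
```python
import numpy as np

rng = np.random.default_rng(0)

n_examples, n_diagnoses = 5, 2
# likelihood[i, d]: P(example i | diagnosis d) under the learner's model
# (random numbers here stand in for a real generative model of images).
likelihood = rng.dirichlet(np.ones(n_examples), size=n_diagnoses).T
prior = np.array([0.5, 0.5])        # learner's prior over diagnoses

def learner_posterior(example: int) -> np.ndarray:
    """Simulated learner's Bayesian update after seeing one example."""
    unnorm = likelihood[example] * prior
    return unnorm / unnorm.sum()

target = 1                           # the AI's diagnosis to be explained
# Teaching rule: P_teach(x | target) is proportional to P_learner(target | x)
scores = np.array([learner_posterior(x)[target] for x in range(n_examples)])
teach_probs = scores / scores.sum()
best = int(scores.argmax())
print(f"teaching distribution: {teach_probs.round(3)}, best example: {best}")
```
The teaching distribution concentrates on the examples that would most persuade a naive Bayesian learner of the AI's diagnosis, which is the sense in which such explanations are taught rather than merely displayed.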
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.