Explainable AI for medical imaging: Explaining pneumothorax diagnoses with Bayesian Teaching
- URL: http://arxiv.org/abs/2106.04684v1
- Date: Tue, 8 Jun 2021 20:49:11 GMT
- Title: Explainable AI for medical imaging: Explaining pneumothorax diagnoses with Bayesian Teaching
- Authors: Tomas Folke, Scott Cheng-Hsin Yang, Sean Anderson, and Patrick Shafto
- Abstract summary: We introduce and evaluate explanations based on Bayesian Teaching.
We find that medical experts exposed to explanations successfully predict the AI's diagnostic decisions.
These results show that Explainable AI can be used to support human-AI collaboration in medical imaging.
- Score: 4.707325679181196
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Limited expert time is a key bottleneck in medical imaging. Due to advances
in image classification, AI can now serve as decision-support for medical
experts, with the potential for great gains in radiologist productivity and, by
extension, public health. However, these gains are contingent on building and
maintaining experts' trust in the AI agents. Explainable AI may build such
trust by helping medical experts to understand the AI decision processes behind
diagnostic judgements. Here we introduce and evaluate explanations based on
Bayesian Teaching, a formal account of explanation rooted in the cognitive
science of human learning. We find that medical experts exposed to explanations
generated by Bayesian Teaching successfully predict the AI's diagnostic
decisions and are more likely to certify the AI for cases when the AI is
correct than when it is wrong, indicating appropriate trust. These results show
that Explainable AI can be used to support human-AI collaboration in medical
imaging.
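The abstract appeals to Bayesian Teaching, which in the cognitive-science literature it draws on selects the teaching examples most likely to lead a learner to the target model: roughly, P_T(d | theta*) is proportional to P_L(theta* | d) P(d). Below is a minimal toy sketch of that selection rule in Python; the two-Gaussian learner, the candidate pool, and the pairwise selection are illustrative assumptions for exposition only, not the paper's actual pipeline (which explains an image classifier's pneumothorax diagnoses).

```python
# Toy sketch of Bayesian Teaching example selection (illustrative
# assumptions, not the paper's implementation).
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-model world: each "model" is a 1-D Gaussian (mean, std).
models = {"target": (1.0, 0.5), "alternative": (-1.0, 0.5)}

def log_likelihood(data, mean, std):
    """Gaussian log-likelihood of the shown examples under one model."""
    data = np.asarray(data)
    return float(np.sum(-0.5 * ((data - mean) / std) ** 2
                        - np.log(std * np.sqrt(2.0 * np.pi))))

def learner_posterior(data, target="target"):
    """P_L(model | data) under a uniform prior over the two models."""
    logs = {name: log_likelihood(data, *params)
            for name, params in models.items()}
    evidence = np.logaddexp.reduce(list(logs.values()))
    return float(np.exp(logs[target] - evidence))

# Candidate pool of examples the teacher may show the learner.
pool = rng.normal(0.0, 1.5, size=8)

# Bayesian Teaching (toy form): choose the example pair that maximizes the
# learner's posterior on the target model, P_T(d | theta*) ∝ P_L(theta* | d).
best = max(itertools.combinations(pool, 2), key=learner_posterior)
print("selected examples:", [round(x, 2) for x in best])
print("learner posterior on target:", round(learner_posterior(best), 3))
```

In the paper's setting the "learner" stands in for the medical expert, and the selected examples serve as the explanation of the AI's diagnostic decision; the sketch above only illustrates the selection principle.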
Related papers
- People over-trust AI-generated medical responses and view them to be as valid as doctors, despite low accuracy [25.91497161129666]
A total of 300 participants evaluated medical responses that were either written by a medical doctor on an online healthcare platform or generated by a large language model.
Results showed that participants could not effectively distinguish between AI-generated and doctors' responses.
arXiv Detail & Related papers (2024-08-11T23:41:28Z)
- The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI [0.0]
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
Because AI models operate as "black boxes," with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
The shift towards transparency is not just beneficial; it is a critical step towards responsible AI integration in healthcare.
arXiv Detail & Related papers (2024-03-23T02:15:23Z)
- The Role of AI in Drug Discovery: Challenges, Opportunities, and Strategies [97.5153823429076]
The benefits, challenges and drawbacks of AI in this field are reviewed.
The use of data augmentation, explainable AI, and the integration of AI with traditional experimental methods are also discussed.
arXiv Detail & Related papers (2022-12-08T23:23:39Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z)
- Why we do need Explainable AI for Healthcare [0.0]
Despite valid concerns, we argue that the Explainable AI research program is still central to human-machine interaction.
arXiv Detail & Related papers (2022-06-30T15:35:50Z)
- Cybertrust: From Explainable to Actionable and Interpretable AI (AI2) [58.981120701284816]
Actionable and Interpretable AI (AI2) will incorporate explicit quantifications and visualizations of user confidence in AI recommendations.
It will allow AI system predictions to be examined and tested, establishing a basis for trust in the systems' decision making.
arXiv Detail & Related papers (2022-01-26T18:53:09Z)
- Explainable AI meets Healthcare: A Study on Heart Disease Dataset [0.0]
The aim is to enlighten practitioners on the understandability and interpretability of explainable AI systems using a variety of techniques.
Our paper contains examples based on the heart disease dataset and elucidates how explainability techniques should be chosen to build trustworthiness.
arXiv Detail & Related papers (2020-11-06T05:18:43Z)
- Trust and Medical AI: The challenges we face and the expertise needed to overcome them [15.07989177980542]
Failures of medical AI could have serious consequences for clinical outcomes and the patient experience.
This article describes the major conceptual, technical, and humanistic challenges in medical AI.
It proposes a solution that hinges on the education and accreditation of new expert groups who specialize in the development, verification, and operation of medical AI technologies.
arXiv Detail & Related papers (2020-08-18T04:17:58Z)
- Artificial Artificial Intelligence: Measuring Influence of AI 'Assessments' on Moral Decision-Making [48.66982301902923]
We examined the effect of feedback, falsely attributed to an AI, on moral decision-making about donor kidney allocation.
We found some evidence that judgments about whether a patient should receive a kidney can be influenced by feedback on participants' own decision-making that was perceived to come from an AI.
arXiv Detail & Related papers (2020-01-13T14:15:18Z)
- Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making [53.62514158534574]
We study whether features that reveal case-specific model information can calibrate trust and improve the joint performance of the human and AI.
We show that confidence score can help calibrate people's trust in an AI model, but trust calibration alone is not sufficient to improve AI-assisted decision making.
arXiv Detail & Related papers (2020-01-07T15:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.