The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI
- URL: http://arxiv.org/abs/2403.15684v1
- Date: Sat, 23 Mar 2024 02:15:23 GMT
- Title: The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI
- Authors: Anna Stubbin, Thompson Chyrikov, Jim Zhao, Christina Chajo,
- Abstract summary: Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
As these models operate as "black boxes," with their reasoning obscured and inaccessible, there's an increased risk of misdiagnosis.
This shift towards transparency is not just beneficial -- it's a critical step towards responsible AI integration in healthcare.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI, especially within the healthcare industry. Clinicians rely heavily on detailed reasoning when making a diagnosis, often scrutinizing CT scans for specific features that distinguish between benign and malignant lesions. A comprehensive diagnostic approach includes an evaluation of imaging results, patient observations, and clinical tests. The surge in deploying deep learning models as support systems in medical diagnostics has been significant, offering advances that traditional methods could not match. However, the complexity and opacity of these models present a double-edged sword. As they operate as "black boxes," with their reasoning obscured and inaccessible, there's an increased risk of misdiagnosis, which can lead to patient harm. Hence, there is a pressing need to cultivate transparency within AI systems, ensuring that the rationale behind an AI's diagnostic recommendations is clear and understandable to medical practitioners. This shift towards transparency is not just beneficial -- it's a critical step towards responsible AI integration in healthcare, ensuring that AI aids rather than hinders medical professionals in their crucial work.
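To make the inconsistencies at issue concrete, below is a minimal sketch that computes two common attribution maps for the same input and measures how much they disagree. Everything here is illustrative: an untrained ResNet-18 stands in for a diagnostic model, a random tensor stands in for a CT slice, and vanilla gradient and input-times-gradient are just two representative saliency methods, not necessarily the ones analyzed in the paper.

```python
# Illustrative sketch only: compares two saliency maps for the same
# prediction. The model is untrained and the input is random noise
# standing in for a CT slice; no real medical data or paper-specific
# method is reproduced here.
import torch
from torchvision.models import resnet18
from scipy.stats import spearmanr

model = resnet18(weights=None).eval()                # untrained stand-in for a diagnostic CNN
x = torch.rand(1, 3, 224, 224, requires_grad=True)   # random tensor in place of a CT slice

logits = model(x)
logits[0, logits.argmax()].backward()                # gradient of the top class score w.r.t. the input

grad = x.grad.detach()
vanilla = grad.abs().amax(dim=1).flatten()                       # vanilla gradient saliency
grad_x_input = (grad * x.detach()).abs().amax(dim=1).flatten()   # input-times-gradient saliency

# Rank agreement between the two maps over all pixels; values well
# below 1.0 mean the two explanations rank pixel importance differently.
rho, _ = spearmanr(vanilla.numpy(), grad_x_input.numpy())
print(f"Spearman rank correlation between the two saliency maps: {rho:.3f}")
```

A rank correlation well below 1.0 between two maps for the same prediction is exactly the kind of disagreement that complicates clinical use of saliency-based explanations.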
Related papers
- Dermatologist-like explainable AI enhances melanoma diagnosis accuracy: eye-tracking study [1.1876787296873537]
Artificial intelligence (AI) systems have substantially improved dermatologists' diagnostic accuracy for melanoma.
Despite these advancements, there remains a critical need for objective evaluation of how dermatologists engage with both AI and XAI tools.
In this study, 76 dermatologists participated in a reader study, diagnosing 16 dermoscopic images of melanomas and nevi using an XAI system that provides detailed, domain-specific explanations.
arXiv Detail & Related papers (2024-09-20T13:08:33Z) - Enhancing Breast Cancer Diagnosis in Mammography: Evaluation and Integration of Convolutional Neural Networks and Explainable AI [0.0]
The study presents an integrated framework combining Convolutional Neural Networks (CNNs) and Explainable Artificial Intelligence (XAI) for the enhanced diagnosis of breast cancer.
The methodology encompasses an elaborate data preprocessing pipeline and advanced data augmentation techniques to counteract dataset limitations.
A focal point of our study is the evaluation of XAI's effectiveness in interpreting model predictions (a minimal Grad-CAM sketch in this spirit appears after this list).
arXiv Detail & Related papers (2024-04-05T05:00:21Z) - Deciphering knee osteoarthritis diagnostic features with explainable artificial intelligence: A systematic review [4.918419052486409]
Existing artificial intelligence models for diagnosing knee osteoarthritis (OA) have faced criticism for their lack of transparency and interpretability.
Recently, explainable artificial intelligence (XAI) has emerged as a specialized set of techniques that can provide confidence in a model's predictions.
This paper presents the first survey of XAI techniques used for knee OA diagnosis.
arXiv Detail & Related papers (2023-08-18T08:23:47Z) - Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges [58.32937972322058]
"Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image (MedAI 2021)" competitions.
We present a comprehensive summary and analyze each contribution, highlight the strength of the best-performing methods, and discuss the possibility of clinical translations of such methods into the clinic.
arXiv Detail & Related papers (2023-07-30T16:08:45Z) - XAI Renaissance: Redefining Interpretability in Medical Diagnostic Models [0.0]
The XAI Renaissance aims to redefine the interpretability of medical diagnostic models.
XAI techniques empower healthcare professionals to understand, trust, and effectively utilize these models for accurate and reliable medical diagnoses.
arXiv Detail & Related papers (2023-06-02T16:42:20Z) - Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma [0.0]
A lack of transparency in how artificial intelligence systems identify melanoma poses severe obstacles to user acceptance.
Most XAI methods are unable to produce precisely located domain-specific explanations, making the explanations difficult to interpret.
We developed an XAI system that produces text- and region-based explanations that are easily interpretable by dermatologists.
arXiv Detail & Related papers (2023-03-17T17:25:55Z) - What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z) - Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z) - Towards the Use of Saliency Maps for Explaining Low-Quality Electrocardiograms to End Users [45.62380752173638]
When using medical images for diagnosis, it is important that the images are of high quality.
In telemedicine, a common problem is that the quality issue is only flagged once the patient has left the clinic, meaning they must return in order to have the exam redone.
This paper reports on the development of an AI system for flagging and explaining low-quality medical images in real-time.
arXiv Detail & Related papers (2022-07-06T14:53:26Z) - Robust and Efficient Medical Imaging with Self-Supervision [80.62711706785834]
We present REMEDIS, a unified representation learning strategy to improve robustness and data-efficiency of medical imaging AI.
We study a diverse range of medical imaging tasks and simulate three realistic application scenarios using retrospective data.
arXiv Detail & Related papers (2022-05-19T17:34:18Z) - Towards Causality-Aware Inferring: A Sequential Discriminative Approach for Medical Diagnosis [142.90770786804507]
Medical diagnosis assistant (MDA) aims to build an interactive diagnostic agent to sequentially inquire about symptoms for discriminating diseases.
This work attempts to address these critical issues in MDA by taking advantage of the causal diagram.
We propose a propensity-based patient simulator to effectively answer unrecorded inquiries by drawing knowledge from other records.
arXiv Detail & Related papers (2020-03-14T02:05:54Z)
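Several of the entries above pair CNN classifiers with visual explanations for diagnosis. As a point of reference, here is a minimal Grad-CAM sketch, the technique most commonly behind such heatmaps; the model, input, and layer choice are placeholders, and this is not any listed paper's exact method.

```python
# Minimal Grad-CAM sketch for a CNN classifier. Illustrative only: an
# untrained ResNet-18 and a random tensor stand in for a diagnostic
# model and a medical image.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None).eval()
acts, grads = {}, {}

def fwd_hook(module, inputs, output):
    acts["a"] = output                       # feature maps of the last conv block

def bwd_hook(module, grad_input, grad_output):
    grads["g"] = grad_output[0]              # gradients w.r.t. those feature maps

layer = model.layer4                         # last convolutional block in ResNet-18
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.rand(1, 3, 224, 224)               # random stand-in for a medical image
logits = model(x)
logits[0, logits.argmax()].backward()        # explain the top predicted class

weights = grads["g"].mean(dim=(2, 3), keepdim=True)            # GAP of gradients per channel
cam = F.relu((weights * acts["a"].detach()).sum(dim=1))        # weighted sum of maps, then ReLU
cam = F.interpolate(cam.unsqueeze(1), size=x.shape[-2:],
                    mode="bilinear", align_corners=False)      # upsample to input size
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)       # normalize to [0, 1]
print(cam.shape)                             # (1, 1, 224, 224) heatmap
```

Overlaying the resulting heatmap on the input image is the usual way such explanations are shown to clinicians; as the saliency-comparison sketch above suggests, different attribution methods can highlight different regions for the same prediction.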