Dermatologist-like explainable AI enhances trust and confidence in
diagnosing melanoma
- URL: http://arxiv.org/abs/2303.12806v1
- Date: Fri, 17 Mar 2023 17:25:55 GMT
- Title: Dermatologist-like explainable AI enhances trust and confidence in
diagnosing melanoma
- Authors: Tirtha Chanda, Katja Hauser, Sarah Hobelsberger, Tabea-Clara Bucher,
Carina Nogueira Garcia, Christoph Wies, Harald Kittler, Philipp Tschandl,
Cristian Navarrete-Dechent, Sebastian Podlipnik, Emmanouil Chousakos, Iva
Crnaric, Jovana Majstorovic, Linda Alhajwan, Tanya Foreman, Sandra Peternel,
Sergei Sarap, İrem Özdemir, Raymond L. Barnhill, Mar Llamas Velasco,
Gabriela Poch, Sören Korsing, Wiebke Sondermann, Frank Friedrich Gellrich,
Markus V. Heppt, Michael Erdmann, Sebastian Haferkamp, Konstantin Drexler,
Matthias Goebeler, Bastian Schilling, Jochen S. Utikal, Kamran Ghoreschi,
Stefan Fröhling, Eva Krieghoff-Henning, Titus J. Brinker
- Abstract summary: A lack of transparency in how artificial intelligence systems identify melanoma poses severe obstacles to user acceptance.
Most XAI methods are unable to produce precisely located domain-specific explanations, making the explanations difficult to interpret.
We developed an XAI system that produces text- and region-based explanations that are easily interpretable by dermatologists.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Although artificial intelligence (AI) systems have been shown to improve the
accuracy of initial melanoma diagnosis, the lack of transparency in how these
systems identify melanoma poses severe obstacles to user acceptance.
Explainable artificial intelligence (XAI) methods can help to increase
transparency, but most XAI methods are unable to produce precisely located
domain-specific explanations, making the explanations difficult to interpret.
Moreover, the impact of XAI methods on dermatologists has not yet been
evaluated. Extending two existing classifiers, we developed an XAI system
that produces text- and region-based explanations that dermatologists can
easily interpret, alongside its differential diagnoses of melanomas and nevi.
To evaluate this system, we conducted a three-part reader study to assess its
impact on clinicians' diagnostic accuracy, confidence, and trust in the
XAI support. We showed that our XAI's explanations were highly aligned with
clinicians' explanations and that both the clinicians' trust in the support
system and their confidence in their diagnoses were significantly increased
when using our XAI compared to using a conventional AI system. The clinicians'
diagnostic accuracy was numerically, albeit not significantly, increased. This
work demonstrates that clinicians are willing to adopt such an XAI system,
motivating its future use in the clinic.
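The paper does not disclose an implementation, but the idea of a region-based explanation can be illustrated with a standard saliency technique. Below is a minimal Grad-CAM sketch on a stock ResNet-18 with an assumed two-class melanoma/nevus head; the backbone, weights, and random input tensor are placeholders, and this is a generic stand-in, not the authors' actual method.

```python
# Illustrative sketch only: plain Grad-CAM on a stock ResNet-18 to show
# what a region-level saliency map is. The model, weights, class head, and
# input are placeholders, not the paper's extended classifiers.
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)                        # placeholder backbone
model.fc = torch.nn.Linear(model.fc.in_features, 2)   # melanoma vs. nevus
model.eval()

feats = {}

def keep_activation(module, inputs, output):
    feats["a"] = output        # last conv block's feature maps
    output.retain_grad()       # keep d(logit)/d(activation) after backward

model.layer4.register_forward_hook(keep_activation)

x = torch.randn(1, 3, 224, 224)            # stand-in for a dermoscopic image
logits = model(x)
logits[0, logits[0].argmax()].backward()   # gradient of the predicted class

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# sum over channels, keep positive evidence, and upsample to image size.
w = feats["a"].grad.mean(dim=(2, 3), keepdim=True)
cam = F.relu((w * feats["a"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)   # torch.Size([1, 1, 224, 224]) -- one saliency value per pixel
```

The heat map highlights the lesion regions driving the prediction; the paper's system goes further by pairing such regions with dermatologist-style text explanations.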
Related papers
- Dermatologist-like explainable AI enhances melanoma diagnosis accuracy: eye-tracking study [1.1876787296873537]
Artificial intelligence (AI) systems have substantially improved dermatologists' diagnostic accuracy for melanoma.
Despite these advancements, there remains a critical need for objective evaluation of how dermatologists engage with both AI and XAI tools.
In this study, 76 dermatologists participated in a reader study, diagnosing 16 dermoscopic images of melanomas and nevi using an XAI system that provides detailed, domain-specific explanations.
arXiv Detail & Related papers (2024-09-20T13:08:33Z)
- Breast Cancer Diagnosis: A Comprehensive Exploration of Explainable Artificial Intelligence (XAI) Techniques [38.321248253111776]
This article explores the application of Explainable Artificial Intelligence (XAI) techniques in the detection and diagnosis of breast cancer.
It aims to highlight the potential of XAI in bridging the gap between complex AI models and practical healthcare applications.
arXiv Detail & Related papers (2024-06-01T18:50:03Z)
- The Limits of Perception: Analyzing Inconsistencies in Saliency Maps in XAI [0.0]
Explainable artificial intelligence (XAI) plays an indispensable role in demystifying the decision-making processes of AI.
Because these models operate as "black boxes," with their reasoning obscured and inaccessible, there is an increased risk of misdiagnosis.
This shift towards transparency is not just beneficial; it is a critical step towards responsible AI integration in healthcare.
arXiv Detail & Related papers (2024-03-23T02:15:23Z)
- How much informative is your XAI? A decision-making assessment task to objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
Evidence suggests that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z)
- Deciphering knee osteoarthritis diagnostic features with explainable artificial intelligence: A systematic review [4.918419052486409]
Existing artificial intelligence models for diagnosing knee osteoarthritis (OA) have faced criticism for their lack of transparency and interpretability.
Recently, explainable artificial intelligence (XAI) has emerged as a specialized technique that can provide confidence in the model's prediction.
This paper presents the first survey of XAI techniques used for knee OA diagnosis.
arXiv Detail & Related papers (2023-08-18T08:23:47Z)
- Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges [58.32937972322058]
"Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image (MedAI 2021)" competitions.
We present a comprehensive summary and analyze each contribution, highlight the strength of the best-performing methods, and discuss the possibility of clinical translations of such methods into the clinic.
arXiv Detail & Related papers (2023-07-30T16:08:45Z)
- XAI Renaissance: Redefining Interpretability in Medical Diagnostic Models [0.0]
The XAI Renaissance aims to redefine the interpretability of medical diagnostic models.
XAI techniques empower healthcare professionals to understand, trust, and effectively utilize these models for accurate and reliable medical diagnoses.
arXiv Detail & Related papers (2023-06-02T16:42:20Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- AutoPrognosis 2.0: Democratizing Diagnostic and Prognostic Modeling in Healthcare with Automated Machine Learning [72.2614468437919]
We present a machine learning framework, AutoPrognosis 2.0, to develop diagnostic and prognostic models.
We provide an illustrative application where we construct a prognostic risk score for diabetes using the UK Biobank.
Our risk score has been implemented as a web-based decision support tool and can be publicly accessed by patients and clinicians worldwide.
arXiv Detail & Related papers (2022-10-21T16:31:46Z)
- BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
arXiv Detail & Related papers (2021-10-05T19:14:46Z)
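As a rough illustration of the multitask idea in the BI-RADS-Net entry above, here is a minimal sketch: a shared encoder feeds a malignancy head plus descriptor heads whose outputs map onto the morphological vocabulary clinicians report. The layer sizes, descriptor classes, and dummy targets are invented for illustration and do not reflect the paper's actual architecture.

```python
# Minimal multitask sketch, NOT BI-RADS-Net itself: one shared encoder,
# one malignancy head, and hypothetical morphological-descriptor heads.
import torch
import torch.nn as nn

class MultitaskUltrasoundNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(            # stand-in feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.malignancy = nn.Linear(32, 2)       # benign vs. malignant
        # Hypothetical descriptor heads in the spirit of BI-RADS reporting.
        self.shape = nn.Linear(32, 3)            # e.g. oval / round / irregular
        self.margin = nn.Linear(32, 2)           # e.g. circumscribed / not

    def forward(self, x):
        z = self.encoder(x)
        return self.malignancy(z), self.shape(z), self.margin(z)

net = MultitaskUltrasoundNet()
x = torch.randn(4, 1, 128, 128)                  # dummy ultrasound batch
y_mal, y_shape, y_margin = net(x)

# Joint loss: the descriptor terms tie the shared features to the
# morphological attributes used in clinical reporting.
loss = (nn.functional.cross_entropy(y_mal, torch.randint(0, 2, (4,)))
        + nn.functional.cross_entropy(y_shape, torch.randint(0, 3, (4,)))
        + nn.functional.cross_entropy(y_margin, torch.randint(0, 2, (4,))))
loss.backward()
```

Training the descriptor heads jointly with the malignancy head is what lets such a model phrase its predictions in terms clinicians already use.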