An XAI Approach to Deep Learning Models in the Detection of DCIS
- URL: http://arxiv.org/abs/2106.14186v3
- Date: Sat, 28 Oct 2023 12:17:10 GMT
- Title: An XAI Approach to Deep Learning Models in the Detection of DCIS
- Authors: Michele La Ferla, Matthew Montebello and Dylan Seychell
- Score: 0.09208007322096533
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The results showed that XAI could indeed be used as a proof of concept to begin discussions on the implementation of assistive AI systems within the clinical community.
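The abstract above does not say which deep-learning architecture or XAI technique the paper uses, so the following is only a minimal, hypothetical sketch of one widely used approach: Grad-CAM saliency maps over a CNN mammogram classifier. The ResNet-18 backbone, the two-class (benign vs. DCIS) head, and all names below are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch: Grad-CAM saliency for a binary DCIS classifier.
# The paper's abstract does not name its XAI method or architecture;
# ResNet-18 and the two-class head below are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=None)                  # assumed backbone
model.fc = torch.nn.Linear(model.fc.in_features, 2)    # benign vs. DCIS
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()

layer = model.layer4                                   # last conv block
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

def grad_cam(image: torch.Tensor, target_class: int = 1) -> torch.Tensor:
    """Return a (H, W) heatmap of regions driving the DCIS logit."""
    logits = model(image.unsqueeze(0))                 # image: (3, H, W)
    model.zero_grad()
    logits[0, target_class].backward()
    # Channel weights = global average pooling of the gradients.
    weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[1:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()        # normalise to [0, 1]
```

In practice the resulting heatmap would be overlaid on the mammogram so a radiologist can check whether the model attends to the suspected lesion, which is the kind of evidence the abstract suggests could start a clinical conversation.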
Related papers
- Explainable AI for Clinical Outcome Prediction: A Survey of Clinician Perceptions and Preferences [11.236899989769574]
Explainable AI (XAI) techniques are necessary to help clinicians make sense of AI predictions and integrate predictions into their decision-making workflow.
We implement four XAI techniques on an outcome prediction model that uses ICU admission notes to predict a patient's likelihood of experiencing in-hospital mortality (one representative post-hoc technique of this kind is sketched after this list).
We conduct a survey study of 32 practicing clinicians, collecting their feedback and preferences on the four techniques.
We synthesize our findings into a set of recommendations describing when each of the XAI techniques may be more appropriate, their potential limitations, as well as recommendations for improvement.
arXiv Detail & Related papers (2025-02-27T19:30:20Z) - A Survey on Human-Centered Evaluation of Explainable AI Methods in Clinical Decision Support Systems [45.89954090414204]
This paper provides a survey of human-centered evaluations of Explainable AI methods in Clinical Decision Support Systems.
Our findings reveal key challenges in the integration of XAI into healthcare and propose a structured framework to align the evaluation methods of XAI with the clinical needs of stakeholders.
arXiv Detail & Related papers (2025-02-14T01:21:29Z) - How Human-Centered Explainable AI Interface Are Designed and Evaluated: A Systematic Survey [48.97104365617498]
The emerging area of Explainable Interfaces (EIs) focuses on the user interface and user experience design aspects of XAI.
This paper presents a systematic survey of 53 publications to identify current trends in human-XAI interaction and promising directions for EI design and development.
arXiv Detail & Related papers (2024-03-21T15:44:56Z) - How much informative is your XAI? A decision-making assessment task to objectively measure the goodness of explanations [53.01494092422942]
The number and complexity of personalised and user-centred approaches to XAI have rapidly grown in recent years.
It emerged that user-centred approaches to XAI positively affect the interaction between users and systems.
We propose an assessment task to objectively and quantitatively measure the goodness of XAI systems.
arXiv Detail & Related papers (2023-12-07T15:49:39Z) - Polar-Net: A Clinical-Friendly Model for Alzheimer's Disease Detection in OCTA Images [53.235117594102675]
Optical Coherence Tomography Angiography is a promising tool for detecting Alzheimer's disease (AD) by imaging the retinal microvasculature.
We propose a novel deep-learning framework called Polar-Net to provide interpretable results and leverage clinical prior knowledge.
We show that Polar-Net outperforms existing state-of-the-art methods and provides more valuable pathological evidence for the association between retinal vascular changes and AD.
arXiv Detail & Related papers (2023-11-10T11:49:49Z) - Deciphering knee osteoarthritis diagnostic features with explainable artificial intelligence: A systematic review [4.918419052486409]
Existing artificial intelligence models for diagnosing knee osteoarthritis (OA) have faced criticism for their lack of transparency and interpretability.
Recently, explainable artificial intelligence (XAI) has emerged as a specialized technique that can provide confidence in the model's prediction.
This paper presents the first survey of XAI techniques used for knee OA diagnosis.
arXiv Detail & Related papers (2023-08-18T08:23:47Z) - Evaluation of Popular XAI Applied to Clinical Prediction Models: Can They be Trusted? [2.0089256058364358]
The absence of transparency and explainability hinders the clinical adoption of machine learning (ML) algorithms.
This study evaluates two popular XAI methods used for explaining predictive models in the healthcare context.
arXiv Detail & Related papers (2023-06-21T02:29:30Z) - A System's Approach Taxonomy for User-Centred XAI: A Survey [0.6882042556551609]
We propose a unified, inclusive and user-centred taxonomy for XAI based on the principles of General System's Theory.
This provides a basis for evaluating the appropriateness of XAI approaches for all user types, including both developers and end users.
arXiv Detail & Related papers (2023-03-06T00:50:23Z) - Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z) - What Do End-Users Really Want? Investigation of Human-Centered XAI for Mobile Health Apps [69.53730499849023]
We present a user-centered persona concept to evaluate explainable AI (XAI).
Results show that users' demographics and personality, as well as the type of explanation, impact explanation preferences.
Our insights bring an interactive, human-centered XAI closer to practical application.
arXiv Detail & Related papers (2022-10-07T12:51:27Z) - Connecting Algorithmic Research and Usage Contexts: A Perspective of Contextualized Evaluation for Explainable AI [65.44737844681256]
A lack of consensus on how to evaluate explainable AI (XAI) hinders the advancement of the field.
We argue that one way to close the gap is to develop evaluation methods that account for different user requirements.
arXiv Detail & Related papers (2022-06-22T05:17:33Z) - Explainable Artificial Intelligence Methods in Combating Pandemics: A Systematic Review [7.140215556873923]
The impact of artificial intelligence during the COVID-19 pandemic was greatly limited by a lack of model transparency.
We find that successful use of XAI can improve model performance, instill trust in the end-user, and provide the value needed to affect user decision-making.
arXiv Detail & Related papers (2021-12-23T16:55:27Z)
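Several of the entries above, notably the ICU mortality study, apply post-hoc XAI techniques that these summaries do not name. As a hedged illustration of that general class of methods, here is a minimal occlusion-style word-importance sketch for a text-based risk model; `mortality_model` is a hypothetical stand-in for any callable mapping a clinical note to a predicted probability, not an API from any of the listed papers.

```python
# Hypothetical sketch: perturbation-based word importance, one common
# family of post-hoc XAI techniques for text classifiers. The survey
# does not name its four techniques; `mortality_model` is a stand-in
# for any callable returning P(in-hospital mortality | note).
from typing import Callable, List, Tuple

def word_importance(note: str,
                    mortality_model: Callable[[str], float]
                    ) -> List[Tuple[str, float]]:
    """Score each word by how much deleting it shifts the predicted risk."""
    words = note.split()
    baseline = mortality_model(note)
    scores = []
    for i, word in enumerate(words):
        ablated = " ".join(words[:i] + words[i + 1:])   # drop one word
        scores.append((word, baseline - mortality_model(ablated)))
    # Positive score: the word pushed the prediction toward mortality.
    return sorted(scores, key=lambda s: abs(s[1]), reverse=True)

# Usage with any trained model wrapped as note -> probability:
# top_words = word_importance(admission_note, model_fn)[:10]
```

A clinician-facing tool would typically highlight the top-scoring words in the note itself, which bears directly on the survey questions above about which explanation formats clinicians actually prefer.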
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.