Explainable AI For COVID-19 CT Classifiers: An Initial Comparison Study
- URL: http://arxiv.org/abs/2104.14506v1
- Date: Sun, 25 Apr 2021 23:39:14 GMT
- Title: Explainable AI For COVID-19 CT Classifiers: An Initial Comparison Study
- Authors: Qinghao Ye and Jun Xia and Guang Yang
- Abstract summary: Explainable AI (XAI) is the key to unlocking the black box of deep learning.
Chest CT has emerged as a valuable tool for the clinical diagnostic and treatment management of the lung diseases associated with COVID-19.
The aim of this study is to propose and develop XAI strategies for COVID-19 classification models and to compare them.
- Score: 3.4031539425106683
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Artificial Intelligence (AI) has advanced by leaps and bounds across
industrial sectors, especially since the introduction of deep learning. Deep
learning learns the behaviour of an entity by recognising and interpreting
patterns. Despite this potential, how deep learning algorithms reach a decision
in the first place remains a mystery. Explainable AI (XAI) is the key to
unlocking that black box: an XAI model is designed to explain its goals, logic,
and decision making so that end users can understand it. These end users may be
domain experts, regulatory agencies, managers and executive board members, data
scientists, people who use AI knowingly or unknowingly, or anyone affected by
the decisions of an AI model. Chest CT has emerged as a valuable tool for the
clinical diagnosis and treatment management of lung diseases associated with
COVID-19. AI can support rapid evaluation of CT scans to differentiate COVID-19
findings from those of other lung diseases. However, it is not clear how these
AI tools or deep learning algorithms reach their decisions, or which features
derived from these typically deep neural networks are the most influential. The
aim of this study is to propose and develop XAI strategies for COVID-19
classification models and to compare them. The results demonstrate promising
quantitative measures and qualitative visualisations that can give clinicians a
deeper understanding, and support their decision making with more granular
information, based on the results of the learned XAI models.
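
As context for the attribution methods such studies compare, below is a minimal Grad-CAM sketch in PyTorch. It assumes a torchvision ResNet-18 as a stand-in for the paper's CT classifier and a dummy input tensor; it is an illustrative sketch under those assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torchvision.models import resnet18

model = resnet18(weights=None)  # placeholder for the paper's CT classifier
model.eval()

activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_input, grad_output):
    gradients["value"] = grad_output[0].detach()

target_layer = model.layer4[-1]  # last residual block of the backbone
target_layer.register_forward_hook(fwd_hook)
target_layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)  # dummy input; a real CT slice would be preprocessed
logits = model(x)
cls = logits.argmax(dim=1).item()
model.zero_grad()
logits[0, cls].backward()  # gradient of the predicted-class score

# Weight each activation map by its average gradient, then ReLU and normalise.
weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["value"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # heatmap in [0, 1]
```

Overlaying the resulting heatmap on the CT slice highlights the regions that most influenced the predicted class, which is the kind of qualitative visualisation the study evaluates.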
Related papers
- Study on the Helpfulness of Explainable Artificial Intelligence [0.0]
Legal, business, and ethical requirements motivate using effective XAI.
We propose to evaluate XAI methods via the user's ability to successfully perform a proxy task.
In other words, we address the helpfulness of XAI for human decision-making.
arXiv Detail & Related papers (2024-10-14T14:03:52Z)
- Knowledge-Guided Prompt Learning for Lifespan Brain MR Image Segmentation [53.70131202548981]
We present a two-step segmentation framework employing Knowledge-Guided Prompt Learning (KGPL) for brain MRI.
Specifically, we first pre-train segmentation models on large-scale datasets with sub-optimal labels.
The introduction of knowledge-wise prompts captures semantic relationships between anatomical variability and biological processes.
arXiv Detail & Related papers (2024-07-31T04:32:43Z)
- A Survey of Artificial Intelligence in Gait-Based Neurodegenerative Disease Diagnosis [51.07114445705692]
Neurodegenerative diseases (NDs) traditionally require extensive healthcare resources and human effort for medical diagnosis and monitoring.
As a crucial disease-related motor symptom, human gait can be exploited to characterize different NDs.
Current advances in artificial intelligence (AI) models enable automatic gait analysis for ND identification and classification.
arXiv Detail & Related papers (2024-05-21T06:44:40Z)
- Enhancing Breast Cancer Diagnosis in Mammography: Evaluation and Integration of Convolutional Neural Networks and Explainable AI [0.0]
The study presents an integrated framework combining Convolutional Neural Networks (CNNs) and Explainable Artificial Intelligence (XAI) for the enhanced diagnosis of breast cancer.
The methodology encompasses an elaborate data preprocessing pipeline and advanced data augmentation techniques to counteract dataset limitations.
A focal point of our study is the evaluation of XAI's effectiveness in interpreting model predictions.
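As an illustration of the kind of augmentation pipeline such studies use, here is a short torchvision sketch; the paper's exact transforms are not specified in this summary, so the choices below are assumptions.

```python
from torchvision import transforms

# Hypothetical training-time augmentations for grayscale-like mammograms.
train_transform = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),            # small rotations preserve anatomy
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),
    transforms.ToTensor(),
])
```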
arXiv Detail & Related papers (2024-04-05T05:00:21Z)
- An Explainable AI Framework for Artificial Intelligence of Medical Things [2.7774194651211217]
We leverage a custom XAI framework, incorporating techniques such as Local Interpretable Model-Agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Gradient-weighted Class Activation Mapping (Grad-CAM).
The proposed framework enhances the effectiveness of strategic healthcare methods and aims to instill trust and promote understanding in AI-driven medical applications.
We apply the XAI framework to brain tumor detection as a use case demonstrating accurate and transparent diagnosis.
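For illustration, a minimal LIME sketch for an image classifier follows; the model and image below are placeholders standing in for the framework's actual components, not the paper's pipeline.

```python
import numpy as np
import torch
from lime import lime_image
from torchvision.models import resnet18

model = resnet18(weights=None)       # placeholder classifier, not the paper's model
model.eval()
image = np.random.rand(224, 224, 3)  # stand-in for a preprocessed medical image

def classifier_fn(images: np.ndarray) -> np.ndarray:
    """LIME passes batches of HxWx3 arrays; return class probabilities."""
    batch = torch.from_numpy(images).permute(0, 3, 1, 2).float()
    with torch.no_grad():
        return torch.softmax(model(batch), dim=1).numpy()

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, classifier_fn, top_labels=2, hide_color=0, num_samples=1000
)
# Superpixels that most support the top predicted class.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
```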
arXiv Detail & Related papers (2024-03-07T01:08:41Z)
- An Interpretable Deep Learning Approach for Skin Cancer Categorization [0.0]
We use modern deep learning methods and explainable artificial intelligence (XAI) approaches to address the problem of skin cancer detection.
To categorize skin lesions, we employ four cutting-edge pre-trained models: XceptionNet, EfficientNetV2S, InceptionResNetV2, and EfficientNetV2M.
Our study shows how deep learning and explainable artificial intelligence (XAI) can improve skin cancer diagnosis.
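A brief sketch of the underlying transfer-learning pattern, using torchvision's EfficientNetV2-S as a stand-in for the pre-trained models listed above (the study's own setup may differ):

```python
import torch.nn as nn
from torchvision.models import efficientnet_v2_s, EfficientNet_V2_S_Weights

num_classes = 7  # e.g. the seven HAM10000 lesion types; adjust to the dataset

model = efficientnet_v2_s(weights=EfficientNet_V2_S_Weights.IMAGENET1K_V1)
for param in model.parameters():  # freeze the pre-trained backbone
    param.requires_grad = False
# Replace the ImageNet head with a task-specific classifier.
model.classifier[1] = nn.Linear(model.classifier[1].in_features, num_classes)
```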
arXiv Detail & Related papers (2023-12-17T12:11:38Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- When Accuracy Meets Privacy: Two-Stage Federated Transfer Learning Framework in Classification of Medical Images on Limited Data: A COVID-19 Case Study [77.34726150561087]
The COVID-19 pandemic spread rapidly and caused a shortage of global medical resources.
CNNs have been widely utilized and validated for analyzing medical images.
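A minimal sketch of the FedAvg-style weight aggregation at the heart of such federated frameworks, assuming standard dataset-size-weighted averaging (the paper's two-stage transfer scheme adds more on top):

```python
import copy
import torch

def fedavg(client_state_dicts, client_sizes):
    """Average client model weights, weighted by local dataset size."""
    total = sum(client_sizes)
    avg = copy.deepcopy(client_state_dicts[0])
    for key in avg:
        # Buffers (e.g. BatchNorm counters) are also averaged in this simplified version.
        avg[key] = sum(
            sd[key].float() * (n / total)
            for sd, n in zip(client_state_dicts, client_sizes)
        )
    return avg
```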
arXiv Detail & Related papers (2022-03-24T02:09:41Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions thought for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- Unbox the Black-box for the Medical Explainable AI via Multi-modal and Multi-centre Data Fusion: A Mini-Review, Two Showcases and Beyond [3.4031539425106683]
Explainable Artificial Intelligence (XAI) is an emerging research topic of machine learning aimed at unboxing how AI systems' black-box choices are made.
Many machine learning algorithms cannot reveal how and why a decision has been made.
XAI is becoming increasingly crucial for deep-learning-powered applications, especially in medical and healthcare studies.
arXiv Detail & Related papers (2021-02-03T10:56:58Z)