An Explainable AI Framework for Artificial Intelligence of Medical
Things
- URL: http://arxiv.org/abs/2403.04130v1
- Date: Thu, 7 Mar 2024 01:08:41 GMT
- Title: An Explainable AI Framework for Artificial Intelligence of Medical
Things
- Authors: Al Amin, Kamrul Hasan, Saleh Zein-Sabatto, Deo Chimba, Imtiaz Ahmed,
and Tariqul Islam
- Abstract summary: We leverage a custom XAI framework incorporating techniques such as Local Interpretable Model-Agnostic Explanations (LIME), SHapley Additive exPlanations (SHAP), and Gradient-weighted Class Activation Mapping (Grad-CAM).
The proposed framework enhances the effectiveness of strategic healthcare methods and aims to instill trust and promote understanding in AI-driven medical applications.
We apply the XAI framework to brain tumor detection as a use case demonstrating accurate and transparent diagnosis.
- Score: 2.7774194651211217
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The healthcare industry has been revolutionized by the convergence of
Artificial Intelligence of Medical Things (AIoMT), allowing advanced
data-driven solutions to improve healthcare systems. With the increasing
complexity of Artificial Intelligence (AI) models, the need for Explainable
Artificial Intelligence (XAI) techniques becomes paramount, particularly in the
medical domain, where transparent and interpretable decision-making becomes
crucial. Therefore, in this work, we leverage a custom XAI framework,
incorporating techniques such as Local Interpretable Model-Agnostic
Explanations (LIME), SHapley Additive exPlanations (SHAP), and
Gradient-weighted Class Activation Mapping (Grad-CAM), explicitly designed for
the domain of AIoMT. The proposed framework enhances the effectiveness of
strategic healthcare methods and aims to instill trust and promote
understanding in AI-driven medical applications. Moreover, we utilize a
majority voting technique that aggregates predictions from multiple
convolutional neural networks (CNNs) and leverages their collective
intelligence to make robust and accurate decisions in the healthcare system.
Building upon this decision-making process, we apply the XAI framework to brain
tumor detection as a use case demonstrating accurate and transparent diagnosis.
Evaluation results underscore the exceptional performance of the XAI framework,
achieving high precision, recall, and F1 scores with a training accuracy of 99%
and a validation accuracy of 98%. Combining advanced XAI techniques with
ensemble-based deep-learning (DL) methodologies allows for precise and reliable
brain tumor diagnoses as an application of AIoMT.
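The SHAP technique named above attributes a model's output to individual input features using Shapley values from cooperative game theory. As a hedged illustration of the underlying idea (not the paper's implementation, and not the approximations the SHAP library actually uses), the sketch below computes exact Shapley values by enumerating all feature coalitions for a toy additive scoring model; the feature names and weights are invented for the example:

```python
from itertools import combinations
from math import factorial

def exact_shapley(value_fn, features):
    """Exact Shapley values by enumerating all feature coalitions.

    `value_fn(subset)` returns the model's value for a coalition of feature
    names; `features` is the full feature list. Exponential in the number of
    features, so only viable for toy models -- libraries like SHAP
    approximate this computation instead.
    """
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):
            for coalition in combinations(others, k):
                s = frozenset(coalition)
                # Shapley weight for a coalition of size k out of n players.
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                # Marginal contribution of feature f to this coalition.
                total += weight * (value_fn(s | {f}) - value_fn(s))
        phi[f] = total
    return phi

# Toy "model": an additive score over three hypothetical imaging features.
weights = {"contrast": 2.0, "texture": 1.0, "shape": 0.5}
value = lambda subset: sum(weights[name] for name in subset)
shap_values = exact_shapley(value, list(weights))
# For a purely additive model, each Shapley value equals that feature's weight.
```

For an additive model the attributions recover the weights exactly; for real CNNs the value function is non-additive and must be estimated, which is what makes approximation schemes necessary.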
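The majority-voting aggregation described in the abstract can be sketched in a few lines. This is a minimal illustration of hard voting over class labels, assuming each CNN emits a single predicted label per sample; the label names and tie-breaking rule are assumptions, not details from the paper:

```python
from collections import Counter

def majority_vote(predictions):
    """Aggregate class predictions from several models by majority vote.

    `predictions` is a list of per-model predicted labels for one sample.
    Ties are broken in favor of the label that first reaches the top count.
    """
    return Counter(predictions).most_common(1)[0][0]

# Three hypothetical CNN classifiers predict a label for one MRI slice.
cnn_outputs = ["tumor", "no_tumor", "tumor"]
consensus = majority_vote(cnn_outputs)  # -> "tumor"
```

A common refinement is soft voting, which averages the models' class probabilities instead of counting discrete labels; the abstract does not specify which variant the authors use.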
Related papers
- Easydiagnos: a framework for accurate feature selection for automatic diagnosis in smart healthcare [0.3749861135832073]
This research presents an innovative algorithmic method using the Adaptive Feature Evaluator (AFE) algorithm.
AFE improves feature selection in healthcare datasets and addresses their associated problems.
Results underscore the transformative potential of AFE in smart healthcare, enabling personalized and transparent patient care.
arXiv Detail & Related papers (2024-10-01T03:28:56Z)
- Breast Cancer Diagnosis: A Comprehensive Exploration of Explainable Artificial Intelligence (XAI) Techniques [38.321248253111776]
Article explores the application of Explainable Artificial Intelligence (XAI) techniques in the detection and diagnosis of breast cancer.
Aims to highlight the potential of XAI in bridging the gap between complex AI models and practical healthcare applications.
arXiv Detail & Related papers (2024-06-01T18:50:03Z)
- A Survey of Artificial Intelligence in Gait-Based Neurodegenerative Disease Diagnosis [51.07114445705692]
Diagnosis and monitoring of neurodegenerative diseases (NDs) traditionally require extensive healthcare resources and human effort.
As a crucial disease-related motor symptom, human gait can be exploited to characterize different NDs.
The current advances in artificial intelligence (AI) models enable automatic gait analysis for NDs identification and classification.
arXiv Detail & Related papers (2024-05-21T06:44:40Z)
- Enhancing Breast Cancer Diagnosis in Mammography: Evaluation and Integration of Convolutional Neural Networks and Explainable AI [0.0]
The study presents an integrated framework combining Convolutional Neural Networks (CNNs) and Explainable Artificial Intelligence (XAI) for the enhanced diagnosis of breast cancer.
The methodology encompasses an elaborate data preprocessing pipeline and advanced data augmentation techniques to counteract dataset limitations.
A focal point of our study is the evaluation of XAI's effectiveness in interpreting model predictions.
arXiv Detail & Related papers (2024-04-05T05:00:21Z)
- From Explainable to Interpretable Deep Learning for Natural Language Processing in Healthcare: How Far from Reality? [8.423877102146433]
"eXplainable and Interpretable Artificial Intelligence" (XIAI) is introduced to distinguish XAI from IAI.
Our analysis shows that attention mechanisms are the most prevalent emerging IAI technique.
The major challenges identified are that most XIAI does not explore "global" modelling processes, the lack of best practices, and the lack of systematic evaluation and benchmarks.
arXiv Detail & Related papers (2024-03-18T15:53:33Z)
- Dermatologist-like explainable AI enhances trust and confidence in diagnosing melanoma [0.0]
A lack of transparency in how artificial intelligence systems identify melanoma poses severe obstacles to user acceptance.
Most XAI methods are unable to produce precisely located domain-specific explanations, making the explanations difficult to interpret.
We developed an XAI system that produces text- and region-based explanations that are easily interpretable by dermatologists.
arXiv Detail & Related papers (2023-03-17T17:25:55Z)
- Robotic Navigation Autonomy for Subretinal Injection via Intelligent Real-Time Virtual iOCT Volume Slicing [88.99939660183881]
We propose a framework for autonomous robotic navigation for subretinal injection.
Our method consists of an instrument pose estimation method, an online registration between the robotic and the iOCT system, and trajectory planning tailored for navigation to an injection target.
Our experiments on ex-vivo porcine eyes demonstrate the precision and repeatability of the method.
arXiv Detail & Related papers (2023-01-17T21:41:21Z)
- AutoPrognosis 2.0: Democratizing Diagnostic and Prognostic Modeling in Healthcare with Automated Machine Learning [72.2614468437919]
We present a machine learning framework, AutoPrognosis 2.0, to develop diagnostic and prognostic models.
We provide an illustrative application where we construct a prognostic risk score for diabetes using the UK Biobank.
Our risk score has been implemented as a web-based decision support tool and can be publicly accessed by patients and clinicians worldwide.
arXiv Detail & Related papers (2022-10-21T16:31:46Z)
- Robust and Efficient Medical Imaging with Self-Supervision [80.62711706785834]
We present REMEDIS, a unified representation learning strategy to improve robustness and data-efficiency of medical imaging AI.
We study a diverse range of medical imaging tasks and simulate three realistic application scenarios using retrospective data.
arXiv Detail & Related papers (2022-05-19T17:34:18Z)
- The Medkit-Learn(ing) Environment: Medical Decision Modelling through Simulation [81.72197368690031]
We present a new benchmarking suite designed specifically for medical sequential decision making.
The Medkit-Learn(ing) Environment is a publicly available Python package providing simple and easy access to high-fidelity synthetic medical data.
arXiv Detail & Related papers (2021-06-08T10:38:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.