Explainable AI in Diagnosing and Anticipating Leukemia Using Transfer
Learning Method
- URL: http://arxiv.org/abs/2312.00487v1
- Date: Fri, 1 Dec 2023 10:37:02 GMT
- Title: Explainable AI in Diagnosing and Anticipating Leukemia Using Transfer
Learning Method
- Authors: Wahidul Hasan Abir, Md. Fahim Uddin, Faria Rahman Khanam and Mohammad
Monirujjaman Khan
- Abstract summary: This research paper focuses on Acute Lymphoblastic Leukemia (ALL), a form of blood cancer prevalent in children and teenagers.
It proposes an automated detection approach using computer-aided diagnostic (CAD) models, leveraging deep learning techniques.
The proposed method achieved an impressive 98.38% accuracy, outperforming other tested models.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This research paper focuses on Acute Lymphoblastic Leukemia (ALL), a form of
blood cancer prevalent in children and teenagers, characterized by the rapid
proliferation of immature white blood cells (WBCs). These atypical cells can
overwhelm healthy cells, leading to severe health consequences. Early and
accurate detection of ALL is vital for effective treatment and improving
survival rates. Traditional diagnostic methods are time-consuming, costly, and
prone to errors. The paper proposes an automated detection approach using
computer-aided diagnostic (CAD) models, leveraging deep learning techniques to
enhance the accuracy and efficiency of leukemia diagnosis. The study utilizes
various transfer learning models like ResNet101V2, VGG19, InceptionV3, and
InceptionResNetV2 for classifying ALL. The methodology includes using the Local
Interpretable Model-Agnostic Explanations (LIME) for ensuring the validity and
reliability of the AI system's predictions. This approach is critical for
overcoming the "black box" nature of AI, where decisions made by models are
often opaque and unaccountable. The paper highlights that the proposed method
using the InceptionV3 model achieved an impressive 98.38% accuracy,
outperforming other tested models. The results, verified by the LIME algorithm,
showcase the potential of this method in accurately identifying ALL, providing
a valuable tool for medical practitioners. The research underscores the impact
of explainable artificial intelligence (XAI) in medical diagnostics, paving the
way for more transparent and trustworthy AI applications in healthcare.
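The LIME technique mentioned above explains a single prediction by perturbing interpretable input regions (superpixels in an image), querying the black-box model on each perturbation, and estimating how much each region drives the output. The sketch below illustrates that perturbation idea in a deliberately simplified form: `black_box_predict` is a hypothetical stand-in for a trained CNN such as InceptionV3, the four "regions" stand in for superpixels, and per-region importance is estimated as the mean prediction with the region visible minus the mean with it occluded (the full LIME algorithm instead fits a locally weighted linear surrogate; this is a minimal sketch of the same perturb-and-compare principle, not the paper's implementation).

```python
import random

# Hypothetical black-box classifier: scores an "image" represented as a
# binary on/off mask over 4 superpixel regions. Region 2 secretly drives
# the prediction; this stands in for a trained CNN such as InceptionV3.
def black_box_predict(mask):
    weights = [0.1, 0.05, 0.7, 0.15]  # hidden ground truth, unknown to the explainer
    return sum(w * m for w, m in zip(weights, mask))

def lime_style_importance(predict, n_regions=4, n_samples=2000, seed=0):
    """Estimate per-region importance by randomly occluding regions and
    comparing the model's mean prediction with each region visible vs.
    occluded -- a simplified perturbation scheme in the spirit of LIME."""
    rng = random.Random(seed)
    on_sum = [0.0] * n_regions
    on_cnt = [0] * n_regions
    off_sum = [0.0] * n_regions
    off_cnt = [0] * n_regions
    for _ in range(n_samples):
        # Random binary mask: each region independently kept or occluded.
        mask = [rng.randint(0, 1) for _ in range(n_regions)]
        y = predict(mask)
        for i, m in enumerate(mask):
            if m:
                on_sum[i] += y
                on_cnt[i] += 1
            else:
                off_sum[i] += y
                off_cnt[i] += 1
    # Importance = mean prediction with region on minus mean with it off.
    return [on_sum[i] / max(on_cnt[i], 1) - off_sum[i] / max(off_cnt[i], 1)
            for i in range(n_regions)]

scores = lime_style_importance(black_box_predict)
top_region = max(range(len(scores)), key=lambda i: scores[i])
print(top_region)  # region 2 dominates the explanation
```

An explanation like this is what lets a practitioner check that the model attends to leukemic cell morphology rather than background artifacts before trusting a 98.38%-accuracy figure.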
Related papers
- Explainable Diagnosis Prediction through Neuro-Symbolic Integration [11.842565087408449]
We use neuro-symbolic methods, specifically Logical Neural Networks (LNNs), to develop explainable models for diagnosis prediction.
Our models, particularly $M_{\text{multi-pathway}}$ and $M_{\text{comprehensive}}$, demonstrate superior performance over traditional models.
These findings highlight the potential of neuro-symbolic approaches in bridging the gap between accuracy and explainability in healthcare AI applications.
arXiv Detail & Related papers (2024-10-01T22:47:24Z) - A study on deep feature extraction to detect and classify Acute Lymphoblastic Leukemia (ALL) [0.0]
Acute lymphoblastic leukaemia (ALL) is a blood malignancy that mainly affects adults and children.
This study looks into the use of deep learning, specifically Convolutional Neural Networks (CNNs) for the detection and classification of ALL.
With an 87% accuracy rate, the ResNet101 model produced the best results, closely followed by DenseNet121 and VGG19.
arXiv Detail & Related papers (2024-09-10T17:53:29Z) - Analysis of Modern Computer Vision Models for Blood Cell Classification [49.1574468325115]
This study uses state-of-the-art architectures, including MaxVit, EfficientVit, EfficientNet, EfficientNetV2, and MobileNetV3 to achieve rapid and accurate results.
Our approach not only addresses the speed and accuracy concerns of traditional techniques but also explores the applicability of innovative deep learning models in hematological analysis.
arXiv Detail & Related papers (2024-06-30T16:49:29Z) - Breast Cancer Diagnosis: A Comprehensive Exploration of Explainable Artificial Intelligence (XAI) Techniques [38.321248253111776]
Article explores the application of Explainable Artificial Intelligence (XAI) techniques in the detection and diagnosis of breast cancer.
Aims to highlight the potential of XAI in bridging the gap between complex AI models and practical healthcare applications.
arXiv Detail & Related papers (2024-06-01T18:50:03Z) - Neural Cellular Automata for Lightweight, Robust and Explainable Classification of White Blood Cell Images [40.347953893940044]
We introduce a novel approach for white blood cell classification based on neural cellular automata (NCA).
Our NCA-based method is significantly smaller in terms of parameters and exhibits robustness to domain shifts.
Our results demonstrate that NCA can be used for image classification, and they address key challenges of conventional methods.
arXiv Detail & Related papers (2024-04-08T14:59:53Z) - An Interpretable Deep Learning Approach for Skin Cancer Categorization [0.0]
We use modern deep learning methods and explainable artificial intelligence (XAI) approaches to address the problem of skin cancer detection.
To categorize skin lesions, we employ four cutting-edge pre-trained models: XceptionNet, EfficientNetV2S, InceptionResNetV2, and EfficientNetV2M.
Our study shows how deep learning and explainable artificial intelligence (XAI) can improve skin cancer diagnosis.
arXiv Detail & Related papers (2023-12-17T12:11:38Z) - The Limits of Fair Medical Imaging AI In The Wild [43.97266228706059]
We investigate the extent to which medical AI utilizes demographic encodings.
We confirm that medical imaging AI leverages demographic shortcuts in disease classification.
We find that models with less encoding of demographic attributes are often most "globally optimal".
arXiv Detail & Related papers (2023-12-11T18:59:50Z) - GENIE-NF-AI: Identifying Neurofibromatosis Tumors using Liquid Neural
Network (LTC) trained on AACR GENIE Datasets [0.0]
We propose an interpretable AI approach to diagnose patients with neurofibromatosis.
Our proposed approach outperformed existing models with 99.86% accuracy.
arXiv Detail & Related papers (2023-04-26T10:28:59Z) - Detecting Shortcut Learning for Fair Medical AI using Shortcut Testing [62.9062883851246]
Machine learning holds great promise for improving healthcare, but it is critical to ensure that its use will not propagate or amplify health disparities.
One potential driver of algorithmic unfairness, shortcut learning, arises when ML models base predictions on improper correlations in the training data.
Using multi-task learning, we propose the first method to assess and mitigate shortcut learning as a part of the fairness assessment of clinical ML systems.
arXiv Detail & Related papers (2022-07-21T09:35:38Z) - SSD-KD: A Self-supervised Diverse Knowledge Distillation Method for
Lightweight Skin Lesion Classification Using Dermoscopic Images [62.60956024215873]
Skin cancer is one of the most common types of malignancy, affecting a large population and causing a heavy economic burden worldwide.
Most studies in skin cancer detection keep pursuing high prediction accuracies without considering the limitation of computing resources on portable devices.
This study specifically proposes a novel method, termed SSD-KD, that unifies diverse knowledge into a generic KD framework for skin diseases classification.
arXiv Detail & Related papers (2022-03-22T06:54:29Z) - Demystifying Deep Learning Models for Retinal OCT Disease Classification
using Explainable AI [0.6117371161379209]
The adoption of various deep learning techniques is common and effective, and this holds equally when applying them to retinal Optical Coherence Tomography (OCT).
These techniques have black-box characteristics that prevent medical professionals from fully trusting the results they generate.
This paper proposes a self-developed CNN model that is comparatively smaller and simpler, combined with LIME to introduce explainable AI into the study.
arXiv Detail & Related papers (2021-11-06T13:54:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.