ALLNet: A Hybrid Convolutional Neural Network to Improve Diagnosis of
Acute Lymphocytic Leukemia (ALL) in White Blood Cells
- URL: http://arxiv.org/abs/2108.08195v1
- Date: Wed, 18 Aug 2021 15:24:53 GMT
- Title: ALLNet: A Hybrid Convolutional Neural Network to Improve Diagnosis of
Acute Lymphocytic Leukemia (ALL) in White Blood Cells
- Authors: Sai Mattapalli, Rishi Athavale
- Abstract summary: The ALL Challenge dataset contains 10,691 images of white blood cells which were used to train and test the models.
ALLNet, the proposed hybrid convolutional neural network architecture, consists of a combination of the VGG, ResNet, and Inception models.
In the cross-validation set, ALLNet achieved an accuracy of 92.6567%, a sensitivity of 95.5304%, a specificity of 85.9155%, an AUC score of 0.966347, and an F1 score of 0.94803.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Due to morphological similarity at the microscopic level, making an
accurate and time-sensitive distinction between blood cells affected by Acute
Lymphocytic Leukemia (ALL) and their healthy counterparts calls for the use of
machine learning architectures. However, three of the most common models, VGG,
ResNet, and Inception, each have shortcomings that leave room for a superior
model. ALLNet, the proposed hybrid convolutional neural network architecture,
combines the VGG, ResNet, and Inception models. The ALL Challenge dataset of
ISBI 2019 contains 10,691 images of white blood cells, which were used to
train and test the models. 7,272 of the images in the dataset are of cells with
ALL and 3,419 of them are of healthy cells. Of the images, 60% were used to
train the model, 20% were used for the cross-validation set, and 20% were used
for the test set. ALLNet outperformed the VGG, ResNet, and the Inception models
across the board, achieving an accuracy of 92.6567%, a sensitivity of 95.5304%,
a specificity of 85.9155%, an AUC score of 0.966347, and an F1 score of 0.94803
in the cross-validation set. In the test set, ALLNet achieved an accuracy of
92.0991%, a sensitivity of 96.5446%, a specificity of 82.8035%, an AUC score of
0.959972, and an F1 score of 0.942963. Deploying ALLNet in the clinical
workspace could help better treat the thousands of people suffering from ALL
across the world, many of whom are children.
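As a quick illustration of how the metrics reported above relate to a binary classifier's confusion matrix, a minimal sketch follows; the counts in the example are hypothetical, not the paper's actual confusion matrix.

```python
def classification_metrics(tp, fp, tn, fn):
    """Compute the metrics reported in the abstract from confusion-matrix
    counts, treating ALL-affected cells as the positive class."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # recall on ALL-positive cells
    specificity = tn / (tn + fp)   # recall on healthy cells
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Example with made-up counts (not the paper's results):
acc, sens, spec, f1 = classification_metrics(tp=1390, fp=95, tn=590, fn=64)
```

Note how a high sensitivity with a lower specificity, as reported for ALLNet, reflects a classifier biased toward catching ALL-positive cells at the cost of some false alarms on healthy cells.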
Related papers
- Enhancing Diabetic Retinopathy Classification Accuracy through Dual Attention Mechanism in Deep Learning [2.856144231792095]
In this work, we combine a global attention block (GAB) and a category attention block (CAB) in the deep learning model.
Our proposed approach is based on an attention mechanism-based deep learning model that employs three pre-trained networks.
The proposed approach achieves competitive performance that is on par with recently reported works on DR classification.
arXiv Detail & Related papers (2025-07-25T12:09:27Z) - Brain Tumor Classification on MRI in Light of Molecular Markers [61.77272414423481]
Co-deletion of the 1p/19q gene is associated with clinical outcomes in low-grade gliomas.
This study aims to utilize a specialized MRI-based convolutional neural network for brain cancer detection.
arXiv Detail & Related papers (2024-09-29T07:04:26Z) - Automatic Classification of White Blood Cell Images using Convolutional Neural Network [0.0]
The human immune system contains white blood cells (WBCs), which are good indicators of many diseases, such as bacterial infections, AIDS, cancer, and spleen disorders.
Traditionally, pathologists and hematologists in laboratories analyze these blood cells through a microscope and then classify them manually.
In this paper, we first used different pre-trained CNN models such as ResNet-50, InceptionV3, VGG16, and MobileNetV2 to automatically classify the white blood cells.
Inspired by these architectures, a framework has been proposed to automatically categorize the four kinds of white blood cells with increased accuracy.
arXiv Detail & Related papers (2024-09-19T16:39:46Z) - Comparative Performance Analysis of Transformer-Based Pre-Trained Models for Detecting Keratoconus Disease [0.0]
This study compares eight pre-trained CNNs for diagnosing keratoconus, a degenerative eye disease.
MobileNetV2 was the most accurate model in identifying keratoconus and normal cases, with few misclassifications.
arXiv Detail & Related papers (2024-08-16T20:15:24Z) - Deep Generative Classification of Blood Cell Morphology [7.494975467007647]
We introduce CytoDiffusion, a diffusion-based classifier that effectively models blood cell morphology.
Our approach outperforms state-of-the-art discriminative models in anomaly detection.
We enhance model explainability through the generation of directly interpretable counterfactual heatmaps.
arXiv Detail & Related papers (2024-08-16T19:17:02Z) - BloodCell-Net: A lightweight convolutional neural network for the classification of all microscopic blood cell images of the human body [0.0]
Blood cell classification and counting is vital for the diagnosis of various blood-related diseases.
We propose a deep learning (DL) based automated system for blood cell classification and counting from microscopic blood smear images.
We classify a total of nine types of blood cells: Erythrocyte, Erythroblast, Neutrophil, Basophil, Eosinophil, Lymphocyte, Monocyte, Immature Granulocyte, and Platelet.
arXiv Detail & Related papers (2024-04-01T20:38:58Z) - Attention-based Saliency Maps Improve Interpretability of Pneumothorax
Classification [52.77024349608834]
To investigate chest radiograph (CXR) classification performance of vision transformers (ViT) and interpretability of attention-based saliency.
ViTs were fine-tuned for lung disease classification using four public data sets: CheXpert, Chest X-Ray 14, MIMIC CXR, and VinBigData.
ViTs had comparable CXR classification AUCs compared with state-of-the-art CNNs.
arXiv Detail & Related papers (2023-03-03T12:05:41Z) - Osteoporosis Prescreening using Panoramic Radiographs through a Deep
Convolutional Neural Network with Attention Mechanism [65.70943212672023]
Deep convolutional neural network (CNN) with an attention module can detect osteoporosis on panoramic radiographs.
A dataset of 70 panoramic radiographs (PRs) from 70 different subjects aged between 49 and 60 was used.
arXiv Detail & Related papers (2021-10-19T00:03:57Z) - The Report on China-Spain Joint Clinical Testing for Rapid COVID-19 Risk
Screening by Eye-region Manifestations [59.48245489413308]
We developed and tested a COVID-19 rapid prescreening model using the eye-region images captured in China and Spain with cellphone cameras.
The performance was measured using area under receiver-operating-characteristic curve (AUC), sensitivity, specificity, accuracy, and F1.
arXiv Detail & Related papers (2021-09-18T02:28:01Z) - Vision Transformers for femur fracture classification [59.99241204074268]
The Vision Transformer (ViT) was able to correctly predict 83% of the test images.
Good results were also obtained on sub-fractures with the largest and richest dataset of its kind.
arXiv Detail & Related papers (2021-08-07T10:12:42Z) - Acute Lymphoblastic Leukemia Detection from Microscopic Images Using
Weighted Ensemble of Convolutional Neural Networks [4.095759108304108]
This article automates the ALL detection task from microscopic cell images, employing deep Convolutional Neural Networks (CNNs).
Various data augmentation and pre-processing techniques are incorporated to achieve better generalization of the network.
Our proposed weighted ensemble model, using the kappa values of the ensemble candidates as their weights, achieved a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 on the preliminary test set.
arXiv Detail & Related papers (2021-05-09T18:58:48Z) - Classification of COVID-19 in CT Scans using Multi-Source Transfer
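The kappa-weighted ensemble described in the entry above can be sketched as a soft vote in which each model's predicted class probabilities are weighted by its Cohen's kappa on a validation set; this is a generic illustration of the technique, and the paper's exact weighting scheme may differ.

```python
import numpy as np

def weighted_ensemble(probabilities, kappa_values):
    """Kappa-weighted soft vote over ensemble candidates.

    probabilities: list of (n_samples, n_classes) arrays, one per model.
    kappa_values:  each model's Cohen's kappa on a validation set.
    Returns the final predicted class index per sample.
    """
    kappas = np.asarray(kappa_values, dtype=float)
    weights = kappas / kappas.sum()                    # normalize to sum to 1
    stacked = np.stack(probabilities)                  # (n_models, n_samples, n_classes)
    combined = np.tensordot(weights, stacked, axes=1)  # weighted average of probabilities
    return combined.argmax(axis=1)                     # final class per sample
```

A model with a higher validation kappa thus pulls the averaged probability distribution more strongly toward its own predictions.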
Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.