SDCT-AuxNet$^{\theta}$: DCT Augmented Stain Deconvolutional CNN with
Auxiliary Classifier for Cancer Diagnosis
- URL: http://arxiv.org/abs/2006.00304v2
- Date: Mon, 8 Jun 2020 01:47:54 GMT
- Title: SDCT-AuxNet$^{\theta}$: DCT Augmented Stain Deconvolutional CNN with
Auxiliary Classifier for Cancer Diagnosis
- Authors: Shiv Gehlot and Anubha Gupta and Ritu Gupta
- Abstract summary: Acute lymphoblastic leukemia (ALL) is a pervasive pediatric white blood cell cancer across the globe.
This paper presents a novel deep learning architecture for the classification of cell images of ALL cancer.
Elaborate experiments have been carried out on our recently released public dataset of 15114 images of ALL cancer and healthy cells.
- Score: 14.567067583556714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Acute lymphoblastic leukemia (ALL) is a pervasive pediatric white blood cell
cancer across the globe. With the popularity of convolutional neural networks
(CNNs), computer-aided diagnosis of cancer has attracted considerable
attention. Such tools are easily deployable and cost-effective, and can
therefore extend the coverage of cancer diagnostic facilities. However,
developing such a tool for ALL has so far been challenging due to the
unavailability of a large training dataset. The visual similarity between
malignant and normal cells adds to the complexity of the problem. This paper
discusses the recent release of a large dataset and presents a novel deep
learning architecture for the classification of cell images of ALL cancer. The
proposed architecture, namely, SDCT-AuxNet$^{\theta}$ is a 2-module framework
that utilizes a compact CNN as the main classifier in one module and a Kernel
SVM as the auxiliary classifier in the other. While the CNN classifier uses
bilinear-pooled features, the auxiliary classifier uses spectral-averaged
features. Further, the CNN is trained on stain-deconvolved quantity images in
the optical density domain instead of conventional RGB images. A novel test
strategy is proposed that exploits both classifiers for decision making using
the confidence scores of their predicted class
labels. Elaborate experiments have been carried out on our recently released
public dataset of 15114 images of ALL cancer and healthy cells to establish the
validity of the proposed methodology that is also robust to subject-level
variability. A weighted F1 score of 94.8$\%$ is obtained that is best so far on
this challenging dataset.
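Two ingredients of the pipeline above can be illustrated with a short sketch: the Beer-Lambert conversion of an RGB image into the optical density domain (the space in which stain deconvolution operates), and a confidence-based fusion of the two classifiers. The fallback rule and the 0.9 threshold below are illustrative assumptions, not the paper's exact test strategy.

```python
import numpy as np

def rgb_to_optical_density(rgb, background=255.0, eps=1e-6):
    """Beer-Lambert conversion OD = -log10(I / I_0); stain deconvolution
    operates on these optical-density values rather than raw RGB."""
    intensity = np.clip(np.asarray(rgb, dtype=np.float64), eps, background)
    return -np.log10(intensity / background)

def fuse_predictions(cnn_probs, svm_probs, threshold=0.9):
    """Hypothetical fusion rule: trust the main CNN when its top-class
    confidence clears the threshold, otherwise defer to the auxiliary SVM.
    (Illustrative only; the paper defines its own test strategy.)"""
    if cnn_probs.max() >= threshold:
        return int(np.argmax(cnn_probs))
    return int(np.argmax(svm_probs))
```

A pure-white pixel maps to zero optical density, and darker (more strongly stained) pixels map to larger OD values, which is what makes stain quantities additive in this domain.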
Related papers
- Towards a Benchmark for Colorectal Cancer Segmentation in Endorectal Ultrasound Videos: Dataset and Model Development [59.74920439478643]
In this paper, we collect and annotate the first benchmark dataset covering diverse ERUS scenarios.
Our ERUS-10K dataset comprises 77 videos and 10,000 high-resolution annotated frames.
We introduce a benchmark model for colorectal cancer segmentation, named the Adaptive Sparse-context TRansformer (ASTR).
arXiv Detail & Related papers (2024-08-19T15:04:42Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs rely not on convolutions but on patch-based self-attention, so unlike CNNs they carry no built-in prior on local connectivity.
Our results show that the performance of ViTs and CNNs is on par, with a small benefit for ViTs, and that DeiTs outperform the former when a reasonably large dataset is available for training.
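As a concrete illustration of the patch-based input that self-attention operates on, here is a minimal sketch (not from the paper) of splitting an image into flattened non-overlapping patch tokens, the step that precedes ViT's learned linear projection:

```python
import numpy as np

def image_to_patch_tokens(img, patch):
    """Split an (H, W, C) image into flattened non-overlapping patch
    tokens, as in ViT's patch embedding before the linear projection."""
    h, w, c = img.shape
    tokens = []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            tokens.append(img[i:i + patch, j:j + patch].reshape(-1))
    return np.stack(tokens)  # shape: (num_patches, patch * patch * C)
```

Self-attention then mixes information between all token pairs, with no notion that spatially adjacent patches are related unless the model learns it.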
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
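The paper's exact encoding is not detailed here; as a generic illustration of the DWT, a single-level 1-D Haar transform separates a signal into a low-frequency (approximation) half and a high-frequency (detail) half:

```python
import numpy as np

def haar_dwt_1d(signal):
    """Single-level 1-D Haar wavelet transform: pairwise averages give
    the low-pass (approximation) band, pairwise differences the
    high-pass (detail) band."""
    x = np.asarray(signal, dtype=np.float64)
    if x.size % 2 != 0:
        raise ValueError("signal length must be even")
    approx = (x[0::2] + x[1::2]) / np.sqrt(2)  # low-frequency content
    detail = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-frequency content
    return approx, detail
```

For a piecewise-constant signal the detail band is zero, which is why the high-frequency coefficients compactly capture edges and fine texture.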
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Wide & Deep neural network model for patch aggregation in CNN-based prostate cancer detection systems [51.19354417900591]
Prostate cancer (PCa) is one of the leading causes of death among men, with almost 1.41 million new cases and around 375,000 deaths in 2020.
To perform an automatic diagnosis, prostate tissue samples are first digitized into gigapixel-resolution whole-slide images.
Small subimages called patches are extracted and predicted, obtaining a patch-level classification.
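A minimal stand-in for the aggregation step: the paper trains a Wide & Deep network for this, so the simple averaging below is only a sketch of the patch-to-slide idea, with the 0.5 decision threshold an assumption.

```python
import numpy as np

def aggregate_patch_predictions(patch_probs, threshold=0.5):
    """Aggregate patch-level cancer probabilities into one slide-level
    score by averaging (a naive stand-in for a learned aggregator)."""
    p = np.asarray(patch_probs, dtype=np.float64)
    slide_score = p.mean()
    return slide_score, slide_score >= threshold
```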
arXiv Detail & Related papers (2021-05-20T18:13:58Z)
- Acute Lymphoblastic Leukemia Detection from Microscopic Images Using Weighted Ensemble of Convolutional Neural Networks [4.095759108304108]
This article automates ALL detection from microscopic cell images using deep Convolutional Neural Networks (CNNs).
Various data augmentation and pre-processing steps are incorporated to achieve better generalization of the network.
Our proposed weighted ensemble model, using the kappa values of the ensemble candidates as their weights, achieves a weighted F1-score of 88.6%, a balanced accuracy of 86.2%, and an AUC of 0.941 on the preliminary test set.
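The kappa-weighted averaging described above can be sketched as follows; normalizing the weights to sum to one is an assumption, and the paper's exact scheme may differ:

```python
import numpy as np

def kappa_weighted_ensemble(prob_list, kappas):
    """Fuse per-model class probabilities by a weighted average, using
    each model's Cohen's kappa as its (normalized) weight."""
    w = np.asarray(kappas, dtype=np.float64)
    w = w / w.sum()                        # normalize to sum to 1
    probs = np.asarray(prob_list)          # (n_models, n_classes)
    fused = (w[:, None] * probs).sum(axis=0)
    return int(fused.argmax()), fused
```

Models with higher agreement-beyond-chance (kappa) thus dominate the fused prediction.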
arXiv Detail & Related papers (2021-05-09T18:58:48Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
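A plain k-NN smoothing sketch in the spirit of KNNS, assuming neighbors are found by Euclidean distance in a feature space; the paper's actual formulation is not given here:

```python
import numpy as np

def knn_smooth(features, probs, k=2):
    """Smooth each sample's predicted probabilities by averaging them
    with those of its k nearest neighbors in feature space."""
    n = features.shape[0]
    dists = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    smoothed = np.empty_like(probs)
    for i in range(n):
        neighbors = np.argsort(dists[i])[1:k + 1]  # skip self at index 0
        smoothed[i] = (probs[i] + probs[neighbors].sum(axis=0)) / (k + 1)
    return smoothed
```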
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- Fusion of convolution neural network, support vector machine and Sobel filter for accurate detection of COVID-19 patients using X-ray images [14.311213877254348]
The coronavirus disease (COVID-19) is currently the most prevalent contagious disease worldwide.
It is essential to use an automatic diagnosis system along with clinical procedures for the rapid diagnosis of COVID-19 to prevent its spread.
In this study, a fusion of convolutional neural network (CNN), support vector machine (SVM), and Sobel filter is proposed to detect COVID-19 using X-ray images.
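The Sobel component is standard; below is a dependency-free sketch of the gradient-magnitude edge map that such a pipeline could feed to the CNN (how the paper combines it with the raw image is not specified here):

```python
import numpy as np

def sobel_magnitude(img):
    """Gradient magnitude of a 2-D image using the 3x3 Sobel kernels
    (valid convolution, so the output is 2 pixels smaller per axis)."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            window = img[i:i + 3, j:j + 3]
            gx = (window * kx).sum()   # horizontal gradient
            gy = (window * ky).sum()   # vertical gradient
            out[i, j] = np.hypot(gx, gy)
    return out
```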
arXiv Detail & Related papers (2021-02-13T08:08:36Z)
- C-Net: A Reliable Convolutional Neural Network for Biomedical Image Classification [6.85316573653194]
We propose a novel convolutional neural network (CNN) architecture composed of a Concatenation of multiple Networks, called C-Net, to classify biomedical images.
The C-Net model outperforms all other models on the individual metrics for both datasets and achieves zero misclassification.
arXiv Detail & Related papers (2020-10-30T20:03:20Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- Learning Interpretable Microscopic Features of Tumor by Multi-task Adversarial CNNs To Improve Generalization [1.7371375427784381]
Existing CNN models act as black boxes, giving physicians no assurance that important diagnostic features are used by the model.
Here we show that our architecture, by learning end-to-end an uncertainty-based weighting combination of multi-task and adversarial losses, is encouraged to focus on pathology features.
Our results on breast lymph node tissue show significantly improved generalization in the detection of tumorous tissue, with a best average AUC of 0.89 (0.01) against the baseline AUC of 0.86 (0.005).
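The uncertainty-based weighting is presumably in the style of Kendall et al.'s homoscedastic-uncertainty multi-task loss; a sketch under that assumption, where each task's learned log-variance $s_i$ scales its loss term:

```python
import numpy as np

def uncertainty_weighted_loss(losses, log_vars):
    """Combine task losses as L = sum_i exp(-s_i) * L_i + s_i, where
    s_i is a learned log-variance per task (Kendall-et-al.-style
    weighting; assumed here, not confirmed by the summary above)."""
    losses = np.asarray(losses, dtype=np.float64)
    s = np.asarray(log_vars, dtype=np.float64)
    return float((np.exp(-s) * losses + s).sum())
```

The exp(-s_i) factor down-weights noisy tasks, while the additive s_i term penalizes the trivial solution of inflating every uncertainty.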
arXiv Detail & Related papers (2020-08-04T12:10:35Z)
- An interpretable classifier for high-resolution breast cancer screening images utilizing weakly supervised localization [45.00998416720726]
We propose a framework to address the unique properties of medical images.
This model first uses a low-capacity, yet memory-efficient, network on the whole image to identify the most informative regions.
It then applies another higher-capacity network to collect details from chosen regions.
Finally, it employs a fusion module that aggregates global and local information to make a final prediction.
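The first stage's region selection can be sketched as scoring non-overlapping patches on a saliency map produced by the low-capacity network and keeping the top-k; scoring by mean saliency is an assumption for illustration:

```python
import numpy as np

def select_informative_patches(saliency, patch_size, k):
    """Score each non-overlapping patch by its mean saliency and return
    the (row, col) coordinates of the k highest-scoring patches."""
    h, w = saliency.shape
    coords, scores = [], []
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            coords.append((i, j))
            scores.append(saliency[i:i + patch_size, j:j + patch_size].mean())
    order = np.argsort(scores)[::-1][:k]   # indices of top-k scores
    return [coords[idx] for idx in order]
```

The higher-capacity network would then be applied only at the returned coordinates, keeping memory use bounded on gigapixel-scale images.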
arXiv Detail & Related papers (2020-02-13T15:28:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.