Hybrid Deep Learning for Detecting Lung Diseases from X-ray Images
- URL: http://arxiv.org/abs/2003.00682v3
- Date: Wed, 1 Jul 2020 17:31:27 GMT
- Title: Hybrid Deep Learning for Detecting Lung Diseases from X-ray Images
- Authors: Subrato Bharati, Prajoy Podder, M. Rubaiyat Hossain Mondal
- Abstract summary: We propose a new hybrid deep learning framework that combines VGG, data augmentation and a spatial transformer network (STN) with a CNN.
VDSNet outperforms existing methods in terms of a number of metrics including precision, recall, F0.5 score and validation accuracy.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lung disease is common throughout the world. These include chronic
obstructive pulmonary disease, pneumonia, asthma, tuberculosis, fibrosis, etc.
Timely diagnosis of lung disease is essential. Many image processing and
machine learning models have been developed for this purpose. Different forms
of existing deep learning techniques including convolutional neural network
(CNN), vanilla neural network, visual geometry group based neural network
(VGG), and capsule network are applied for lung disease prediction. The basic
CNN performs poorly on rotated, tilted, or otherwise abnormally oriented
images. Therefore, we propose a new hybrid deep learning framework by
combining VGG, data augmentation and spatial transformer network (STN) with
CNN. This new hybrid method is termed here as VGG Data STN with CNN (VDSNet).
As implementation tools, Jupyter Notebook, TensorFlow, and Keras are used. The
new model is applied to NIH chest X-ray image dataset collected from Kaggle
repository. Full and sample versions of the dataset are considered. For both
full and sample datasets, VDSNet outperforms existing methods in terms of a
number of metrics including precision, recall, F0.5 score and validation
accuracy. For the case of full dataset, VDSNet exhibits a validation accuracy
of 73%, while vanilla gray, vanilla RGB, hybrid CNN and VGG, basic capsule
network, and modified capsule network have accuracy values of 67.8%, 69%, 69.5%, 60.5% and 63.8%,
respectively. When the sample dataset rather than the full dataset is used, VDSNet
requires much lower training time at the expense of a slightly lower validation
accuracy. Hence, the proposed VDSNet framework will simplify the detection of
lung disease for experts as well as for doctors.
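The STN component is what addresses the rotated or tilted inputs mentioned above: it applies a learned affine transform to the feature map via a differentiable grid sampler. A minimal NumPy sketch of the STN's grid generator and bilinear sampler follows; the localization network that would predict `theta` is omitted, and the matrix is supplied by hand, so this is an illustration of the mechanism rather than the paper's implementation.

```python
import numpy as np

def affine_grid(theta, h, w):
    """Source coordinates for a 2x3 affine matrix, in normalized [-1, 1] space."""
    ys, xs = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
    coords = np.stack([xs.ravel(), ys.ravel(), np.ones(h * w)])  # (3, h*w)
    return (theta @ coords).T  # source (x, y) for every target pixel

def bilinear_sample(img, grid):
    """Sample img at normalized (x, y) grid points with bilinear interpolation."""
    h, w = img.shape
    x = (grid[:, 0] + 1) * (w - 1) / 2   # back to pixel indices
    y = (grid[:, 1] + 1) * (h - 1) / 2
    x0 = np.clip(np.floor(x).astype(int), 0, w - 2)
    y0 = np.clip(np.floor(y).astype(int), 0, h - 2)
    wx = np.clip(x - x0, 0.0, 1.0)
    wy = np.clip(y - y0, 0.0, 1.0)
    top = img[y0, x0] * (1 - wx) + img[y0, x0 + 1] * wx
    bot = img[y0 + 1, x0] * (1 - wx) + img[y0 + 1, x0 + 1] * wx
    return (top * (1 - wy) + bot * wy).reshape(h, w)

def spatial_transform(img, theta):
    """Warp img by the 2x3 affine matrix theta (the STN grid-sample step)."""
    h, w = img.shape
    return bilinear_sample(img, affine_grid(theta, h, w))
```

With `theta` set to a rotation matrix, the sampler undoes a rotated input in one pass, which is exactly the invariance the basic CNN lacks.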
Related papers
- CUDA: Convolution-based Unlearnable Datasets [77.70422525613084]
Large-scale training of modern deep learning models heavily relies on publicly available data on the web.
Recent works aim to make data unlearnable for deep learning models by adding small, specially designed noises.
These methods are vulnerable to adversarial training (AT) and/or are computationally heavy.
arXiv Detail & Related papers (2023-03-07T22:57:23Z)
- CovidExpert: A Triplet Siamese Neural Network framework for the detection of COVID-19 [0.0]
We develop a few-shot learning model for early detection of COVID-19 to reduce the post-effect of this dangerous disease.
The proposed architecture combines few-shot learning with an ensemble of pre-trained convolutional neural networks.
The suggested model achieved an overall accuracy of 98.719%, a specificity of 99.36%, a sensitivity of 98.72%, and a ROC score of 99.9% with only 200 CT scans per category for training data.
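Few-shot training of a triplet Siamese network of this kind typically minimizes a margin-based triplet loss over embedding vectors: pull anchor and positive together, push anchor and negative apart. A small NumPy sketch of that loss (the embedding CNN ensemble and the 200-scans-per-category sampling are omitted):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss on embedding vectors.
    Zero when the negative is at least `margin` farther (in squared
    Euclidean distance) from the anchor than the positive is."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)
```

A well-separated triplet contributes zero loss, so training focuses on the hard cases where the negative scan embeds too close to the anchor.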
arXiv Detail & Related papers (2023-02-17T17:18:02Z)
- InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions [95.94629864981091]
This work presents a new large-scale CNN-based foundation model, termed InternImage, which can obtain the gain from increasing parameters and training data like ViTs.
The proposed InternImage reduces the strict inductive bias of traditional CNNs and makes it possible to learn stronger and more robust patterns with large-scale parameters from massive data like ViTs.
arXiv Detail & Related papers (2022-11-10T18:59:04Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention and in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that while the performance between ViTs and CNNs is on par with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
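The patch-based self-attention that replaces convolution in a ViT can be sketched in a few lines of NumPy: the radiograph is cut into non-overlapping patches, and every patch attends to every other, with no built-in notion of locality. Single head only, no positional embedding, and the projection sizes are illustrative, not taken from the paper.

```python
import numpy as np

def patchify(img, p):
    """Split an (h, w) image into non-overlapping p x p patches, flattened."""
    h, w = img.shape
    return img.reshape(h // p, p, w // p, p).transpose(0, 2, 1, 3).reshape(-1, p * p)

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over patch embeddings.
    x: (num_patches, d); wq/wk/wv: (d, d_k) projection matrices."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ v
```

Because each output patch is a softmax-weighted mixture of all patches, any prior about local connectivity has to be learned from data, which is why these models tend to need larger training sets than CNNs.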
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- Preservation of High Frequency Content for Deep Learning-Based Medical Image Classification [74.84221280249876]
An efficient analysis of large amounts of chest radiographs can aid physicians and radiologists.
We propose a novel Discrete Wavelet Transform (DWT)-based method for the efficient identification and encoding of visual information.
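A one-level 2-D Haar transform is the simplest instance of such a DWT: it splits an image into a low-pass approximation band and three high-frequency detail bands. The sketch below illustrates the general decomposition, not the paper's specific encoding scheme.

```python
import numpy as np

def haar_dwt2(img):
    """One level of a 2-D Haar wavelet transform.
    Returns (LL, LH, HL, HH): the low-pass approximation plus the
    horizontal/vertical/diagonal high-frequency detail bands."""
    a = (img[0::2] + img[1::2]) / 2   # row-pair averages (low-pass, vertical)
    d = (img[0::2] - img[1::2]) / 2   # row-pair differences (high-pass, vertical)
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh
```

Smooth regions land almost entirely in LL, while edges and fine texture concentrate in LH/HL/HH, which is the high-frequency content the paper's method aims to preserve for classification.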
arXiv Detail & Related papers (2022-05-08T15:29:54Z)
- Evolving Deep Convolutional Neural Network by Hybrid Sine-Cosine and Extreme Learning Machine for Real-time COVID19 Diagnosis from X-Ray Images [0.5249805590164902]
Deep Convolutional Neural Networks (CNNs) are applicable tools for diagnosing COVID-19 positive cases.
This paper proposes using an Extreme Learning Machine (ELM) in place of the last fully connected layer.
The proposed approach outperforms comparative benchmarks with a final accuracy of 98.83% on the COVID-Xray-5k dataset.
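An ELM head replaces gradient training of the final layer with a one-shot least-squares solve over a fixed, randomly weighted hidden layer. A self-contained NumPy sketch follows; the layer size and tanh activation are illustrative choices, not the paper's configuration.

```python
import numpy as np

def train_elm(features, targets, hidden=64, seed=0):
    """Extreme Learning Machine head: random fixed hidden layer, then a
    closed-form least-squares solve for the output weights (no backprop)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=(features.shape[1], hidden))
    b = rng.normal(size=hidden)
    h = np.tanh(features @ w + b)                     # random nonlinear projection
    beta, *_ = np.linalg.lstsq(h, targets, rcond=None)  # solve h @ beta ~= targets
    return w, b, beta

def elm_predict(features, w, b, beta):
    return np.tanh(features @ w + b) @ beta
```

Since only `beta` is fit, and in closed form, training the head is nearly instantaneous, which is the appeal for the real-time diagnosis setting the title describes.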
arXiv Detail & Related papers (2021-05-14T19:40:16Z)
- Many-to-One Distribution Learning and K-Nearest Neighbor Smoothing for Thoracic Disease Identification [83.6017225363714]
Deep learning has become the most powerful computer-aided diagnosis technology for improving disease identification performance.
For chest X-ray imaging, annotating large-scale data requires professional domain knowledge and is time-consuming.
In this paper, we propose many-to-one distribution learning (MODL) and K-nearest neighbor smoothing (KNNS) methods to improve a single model's disease identification performance.
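The KNNS idea can be illustrated as averaging each image's predicted class distribution with those of its nearest neighbours in a feature space, so that an outlier prediction is pulled toward the predictions of similar-looking images. The toy NumPy sketch below uses plain Euclidean distance and uniform weights, a simplification of the paper's method.

```python
import numpy as np

def knn_smooth(embeddings, probs, k=3):
    """Smooth per-image class probabilities by averaging each image's
    prediction with those of its k nearest neighbours in feature space."""
    d = np.linalg.norm(embeddings[:, None] - embeddings[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude each point from its own neighbours
    nn = np.argsort(d, axis=1)[:, :k]         # indices of the k closest images
    return (probs + probs[nn].sum(axis=1)) / (k + 1)
```

Because the output is a uniform average of k + 1 valid distributions, each smoothed row still sums to one.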
arXiv Detail & Related papers (2021-02-26T02:29:30Z)
- RespireNet: A Deep Neural Network for Accurately Detecting Abnormal Lung Sounds in Limited Data Setting [9.175146418979324]
We propose a simple CNN-based model, along with novel techniques to efficiently use the small-sized dataset.
We improve upon the state-of-the-art results for 4-class classification by 2.2%.
arXiv Detail & Related papers (2020-10-31T05:53:37Z)
- Classification of COVID-19 in CT Scans using Multi-Source Transfer Learning [91.3755431537592]
We propose the use of Multi-Source Transfer Learning to improve upon traditional Transfer Learning for the classification of COVID-19 from CT scans.
With our multi-source fine-tuning approach, our models outperformed baseline models fine-tuned with ImageNet.
Our best performing model was able to achieve an accuracy of 0.893 and a Recall score of 0.897, outperforming its baseline Recall score by 9.3%.
arXiv Detail & Related papers (2020-09-22T11:53:06Z)
- Detection of Coronavirus (COVID-19) Associated Pneumonia based on Generative Adversarial Networks and a Fine-Tuned Deep Transfer Learning Model using Chest X-ray Dataset [4.664495510551646]
This paper presents a pneumonia chest x-ray detection based on generative adversarial networks (GAN) with a fine-tuned deep transfer learning for a limited dataset.
The dataset used in this research consists of 5863 X-ray images with two categories: Normal and Pneumonia.
arXiv Detail & Related papers (2020-04-02T08:14:37Z)
- Automated Methods for Detection and Classification Pneumonia based on X-Ray Images Using Deep Learning [0.0]
We show that fine-tuned versions of Resnet50, MobileNet_V2 and Inception_Resnet_V2 deliver highly satisfactory performance, with training and validation accuracy above 96%.
In contrast, CNN, Xception, VGG16, VGG19, Inception_V3 and DenseNet201 display comparatively low performance (accuracy above 84%).
arXiv Detail & Related papers (2020-03-31T16:48:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.