COVID-19 Detection Using Segmentation, Region Extraction and
Classification Pipeline
- URL: http://arxiv.org/abs/2210.02992v4
- Date: Wed, 29 Mar 2023 09:18:05 GMT
- Title: COVID-19 Detection Using Segmentation, Region Extraction and
Classification Pipeline
- Authors: Kenan Morani
- Abstract summary: The main purpose of this study is to develop a pipeline for COVID-19 detection from a big and challenging database of CT images.
The methodologies tried in the segmentation part are traditional segmentation methods as well as UNet-based methods.
In the classification part, a Convolutional Neural Network (CNN) was used to take the final diagnosis decisions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The main purpose of this study is to develop a pipeline for COVID-19
detection from a big and challenging database of Computed Tomography (CT)
images. The proposed pipeline includes a segmentation part, a lung extraction
part, and a classifier part. Optional slice removal techniques after UNet-based
segmentation of slices were also tried. The methodologies tried in the
segmentation part are traditional segmentation methods as well as UNet-based
methods. In the classification part, a Convolutional Neural Network (CNN) was
used to take the final diagnosis decisions. In terms of the results: in the
segmentation part, the proposed segmentation methods show high Dice scores on a
publicly available dataset. In the classification part, the results were
compared at both slice level and patient level. At slice level, the compared
methods showed high validation accuracy, indicating efficiency in predicting 2D
slices. At patient level, the proposed methods were also compared in terms of
validation accuracy and macro F1 score on the validation set. The dataset used
for classification is the COV-19CT Database. The method proposed here showed an
improvement over our previous results on the same dataset. In conclusion, the
improved work in this paper has potential clinical uses for COVID-19 detection
and diagnosis via CT images. The code is available on GitHub at
https://github.com/IDU-CVLab/COV19D_3rd
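
The abstract describes the pipeline only at a high level. The sketch below illustrates how such a chain (UNet-based slice segmentation, lung extraction, a CNN slice classifier, and a patient-level decision) can be wired together. It is a minimal illustration under stated assumptions, not the authors' released implementation: the tiny model sizes, the 0.5 mask threshold, and the majority-vote aggregation of slice predictions are placeholders, while the actual models, the optional slice-removal step, and the decision rule are in the linked repository.

# Minimal sketch of a segmentation -> lung extraction -> slice CNN -> patient-level
# decision pipeline. NOT the paper's released code; sizes, thresholds, and the
# majority-vote rule are placeholder assumptions for illustration only.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def tiny_unet(input_shape=(256, 256, 1)):
    """Very small UNet-style encoder-decoder producing a per-pixel lung probability."""
    inp = layers.Input(shape=input_shape)
    c1 = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, activation="relu", padding="same")(p1)
    u1 = layers.UpSampling2D()(c2)
    u1 = layers.Concatenate()([u1, c1])                      # skip connection
    c3 = layers.Conv2D(16, 3, activation="relu", padding="same")(u1)
    out = layers.Conv2D(1, 1, activation="sigmoid")(c3)
    return models.Model(inp, out)

def slice_cnn(input_shape=(256, 256, 1)):
    """Small CNN giving a per-slice COVID probability."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),
        layers.Dense(1, activation="sigmoid"),
    ])

def extract_lungs(slices, masks, threshold=0.5):
    """Zero out everything outside the predicted lung region."""
    return slices * (masks > threshold).astype(slices.dtype)

def diagnose_patient(slices, seg_model, cls_model, slice_threshold=0.5):
    """Aggregate slice-level predictions into one patient-level decision."""
    masks = seg_model.predict(slices, verbose=0)
    lungs = extract_lungs(slices, masks)
    slice_probs = cls_model.predict(lungs, verbose=0).ravel()
    # Placeholder aggregation: patient is positive if most slices are positive.
    return int((slice_probs > slice_threshold).mean() > 0.5)

if __name__ == "__main__":
    volume = np.random.rand(40, 256, 256, 1).astype("float32")  # fake CT volume
    print(diagnose_patient(volume, tiny_unet(), slice_cnn()))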
Related papers
- CT-xCOV: a CT-scan based Explainable Framework for COVid-19 diagnosis [6.2997667081978825]
CT-xCOV is an explainable framework for COVID-19 diagnosis using Deep Learning (DL) on CT-scans.
For lung segmentation, we used the well-known U-Net model. For COVID-19 detection, we compared three different CNN architectures.
For visual explanations, we applied three different XAI techniques, namely, Grad-Cam, Integrated Gradient (IG) and LIME.
arXiv Detail & Related papers (2023-11-24T13:14:10Z) - PCA: Semi-supervised Segmentation with Patch Confidence Adversarial
Training [52.895952593202054]
We propose a new semi-supervised adversarial method called Patch Confidence Adversarial Training (PCA) for medical image segmentation.
PCA learns the pixel structure and context information in each patch to get enough gradient feedback, which aids the discriminator in converging to an optimal state.
Our method outperforms the state-of-the-art semi-supervised methods, which demonstrates its effectiveness for medical image segmentation.
arXiv Detail & Related papers (2022-07-24T07:45:47Z) - A Deep Ensemble Learning Approach to Lung CT Segmentation for COVID-19
Severity Assessment [0.5512295869673147]
We present a novel deep learning approach to categorical segmentation of lung CTs of COVID-19 patients.
We partition the scans into healthy lung tissues, non-lung regions, and two different, yet visually similar, pathological lung tissues.
The proposed framework achieves competitive results and outstanding generalization capabilities for three COVID-19 datasets.
arXiv Detail & Related papers (2022-07-05T21:28:52Z) - Improving Classification Model Performance on Chest X-Rays through Lung
Segmentation [63.45024974079371]
We propose a deep learning approach to enhance abnormal chest x-ray (CXR) identification performance through segmentations.
Our approach is designed in a cascaded manner and incorporates two modules: a deep neural network with criss-cross attention modules (XLSor) for localizing the lung region in CXR images, and a CXR classification model with a backbone of a self-supervised momentum contrast (MoCo) model pre-trained on large-scale CXR datasets.
arXiv Detail & Related papers (2022-02-22T15:24:06Z) - Dense Pixel-Labeling for Reverse-Transfer and Diagnostic Learning on
Lung Ultrasound for COVID-19 and Pneumonia Detection [0.039025665763971464]
We present an architecture to convert segmentation models to classification models.
We compare and contrast dense vs sparse segmentation labeling and study its impact on diagnostic classification.
arXiv Detail & Related papers (2022-01-25T08:19:11Z) - CNN Filter Learning from Drawn Markers for the Detection of Suggestive
Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z) - CvS: Classification via Segmentation For Small Datasets [52.821178654631254]
This paper presents CvS, a cost-effective classifier for small datasets that derives the classification labels from predicting the segmentation maps.
We evaluate the effectiveness of our framework on diverse problems showing that CvS is able to achieve much higher classification results compared to previous methods when given only a handful of examples.
arXiv Detail & Related papers (2021-10-29T18:41:15Z) - Automatic CT Segmentation from Bounding Box Annotations using
Convolutional Neural Networks [2.554905387213585]
The proposed method is composed of two steps: 1) generating pseudo masks with bounding box annotations by k-means clustering, and 2) iteratively training a 3D U-Net convolutional neural network as a segmentation model (an illustrative sketch of the pseudo-mask step appears after this list).
For liver, spleen and kidney segmentation, it achieved an accuracy of 95.19%, 92.11%, and 91.45%, respectively.
arXiv Detail & Related papers (2021-05-29T14:48:16Z) - Learning Fuzzy Clustering for SPECT/CT Segmentation via Convolutional
Neural Networks [5.3123694982708365]
Quantitative bone single-photon emission computed tomography (QBSPECT) has the potential to provide a better quantitative assessment of bone metastasis than planar bone scintigraphy.
The segmentation of anatomical regions of interest (ROIs) still relies heavily on manual delineation by experts.
This work proposes a fast and robust automated segmentation method for partitioning a QBSPECT image into lesion, bone, and background.
arXiv Detail & Related papers (2021-04-17T19:03:52Z) - An Uncertainty-Driven GCN Refinement Strategy for Organ Segmentation [53.425900196763756]
We propose a segmentation refinement method based on uncertainty analysis and graph convolutional networks.
We employ the uncertainty levels of the convolutional network in a particular input volume to formulate a semi-supervised graph learning problem.
We show that our method outperforms the state-of-the-art CRF refinement method by improving the Dice score by 1% for the pancreas and 2% for the spleen.
arXiv Detail & Related papers (2020-12-06T18:55:07Z) - 3D medical image segmentation with labeled and unlabeled data using
autoencoders at the example of liver segmentation in CT images [58.720142291102135]
This work investigates the potential of autoencoder-extracted features to improve segmentation with a convolutional neural network.
A convolutional autoencoder was used to extract features from unlabeled data and a multi-scale, fully convolutional CNN was used to perform the target task of 3D liver segmentation in CT images.
arXiv Detail & Related papers (2020-03-17T20:20:43Z)