2D and 3D CNN-Based Fusion Approach for COVID-19 Severity Prediction
from 3D CT-Scans
- URL: http://arxiv.org/abs/2303.08740v1
- Date: Wed, 15 Mar 2023 16:27:49 GMT
- Title: 2D and 3D CNN-Based Fusion Approach for COVID-19 Severity Prediction
from 3D CT-Scans
- Authors: Fares Bougourzi and Fadi Dornaika and Amir Nakib and Cosimo Distante
and Abdelmalik Taleb-Ahmed
- Abstract summary: This work is part of the 3rd COV19D competition for Covid-19 Severity Prediction.
We propose the hybrid-DeCoVNet architecture, which consists of four blocks: a Stem, four 3D-ResNet layers, a Classification Head and a Decision layer.
Our proposed approaches outperformed the baseline approach on the validation data of the 3rd COV19D competition for Covid-19 Severity Prediction by 36%.
- Score: 17.634096977363907
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Since its appearance in late 2019, Covid-19 has become an active
research topic for the artificial intelligence (AI) community, and one of the
most interesting AI topics is the analysis of Covid-19 medical imaging. CT-scan
imaging is the most informative modality for this disease. This work is part of
the 3rd COV19D competition for Covid-19 Severity Prediction. To address the
large gap between validation and test results observed in the previous edition
of this competition, we propose to fuse the predictions of 2D and 3D CNNs. For
the 2D CNN approach, we propose the 2B-InceptResnet architecture, which
consists of two paths that process the segmented lungs and the segmented
infection regions of all slices of the input CT-scan, respectively. Each path
consists of a ConvLayer and an Inception-ResNet model pretrained on ImageNet.
For the 3D CNN approach, we propose the hybrid-DeCoVNet architecture, which
consists of four blocks: a Stem, four 3D-ResNet layers, a Classification Head
and a Decision layer. Our proposed approaches outperformed the baseline
approach on the validation data of the 3rd COV19D competition for Covid-19
Severity Prediction by 36%.
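To make the described pipeline concrete, below is a minimal PyTorch sketch of the two pieces named in the abstract: a Hybrid-DeCoVNet-style 3D network (Stem, four 3D-ResNet layers, Classification Head) and a decision-level fusion of 2D and 3D outputs. All layer sizes, channel counts, the four-class severity output and the averaging fusion rule are illustrative assumptions, not the authors' exact configuration.

```python
# Illustrative sketch only: block layout, channel counts and the fusion rule
# are assumptions; the abstract does not specify the exact configuration.
import torch
import torch.nn as nn


class BasicRes3DBlock(nn.Module):
    """A plain 3D residual block (assumed layout; the paper only names '3D-ResNet layers')."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(out_ch)
        self.conv2 = nn.Conv3d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(out_ch)
        self.short = (nn.Sequential(nn.Conv3d(in_ch, out_ch, 1, stride=stride, bias=False),
                                    nn.BatchNorm3d(out_ch))
                      if (stride != 1 or in_ch != out_ch) else nn.Identity())
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.short(x))


class HybridDeCoVNetSketch(nn.Module):
    """Stem -> four 3D-ResNet layers -> classification head, as listed in the abstract."""
    def __init__(self, num_classes=4):  # assuming four severity classes
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=(1, 7, 7), stride=(1, 2, 2), padding=(0, 3, 3), bias=False),
            nn.BatchNorm3d(16), nn.ReLU(inplace=True))
        self.layers = nn.Sequential(
            BasicRes3DBlock(16, 32, stride=2),
            BasicRes3DBlock(32, 64, stride=2),
            BasicRes3DBlock(64, 128, stride=2),
            BasicRes3DBlock(128, 256, stride=2))
        self.head = nn.Sequential(nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(256, num_classes))

    def forward(self, volume):  # volume: (B, 1, D, H, W)
        return self.head(self.layers(self.stem(volume)))


def fuse_severity_predictions(logits_2d, logits_3d):
    """Decision-level fusion: average the softmax probabilities of the 2D and 3D
    branches (the exact fusion rule is not detailed in the abstract)."""
    probs = 0.5 * torch.softmax(logits_2d, dim=1) + 0.5 * torch.softmax(logits_3d, dim=1)
    return probs.argmax(dim=1)


if __name__ == "__main__":
    model_3d = HybridDeCoVNetSketch(num_classes=4)
    volume = torch.randn(2, 1, 32, 128, 128)   # toy CT volume batch
    logits_3d = model_3d(volume)
    logits_2d = torch.randn(2, 4)               # stand-in for the 2B-InceptResnet output
    print(fuse_severity_predictions(logits_2d, logits_3d))
```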
Related papers
- Ensembling and Test Augmentation for Covid-19 Detection and Covid-19 Domain Adaptation from 3D CT-Scans [14.86694804384387]
This paper contributes to the 4th COV19D competition, focusing on Covid-19 Detection and Covid-19 Adaptation Challenges.
Our approach centers on lung segmentation and Covid-19 infection segmentation.
We employ three 3D CNN backbones, a customized Hybrid-DeCoVNet along with pretrained 3D-ResNet-18 and 3D-ResNet-50 models, to train Covid-19 recognition.
arXiv Detail & Related papers (2024-03-17T20:44:38Z) - Invariant Training 2D-3D Joint Hard Samples for Few-Shot Point Cloud
Recognition [108.07591240357306]
We tackle the data scarcity challenge in few-shot point cloud recognition of 3D objects by using a joint prediction from a conventional 3D model and a well-trained 2D model.
We find that the crux is the less effective training on the ''joint hard samples'', which have high-confidence predictions on different wrong labels.
Our proposed invariant training strategy, called InvJoint, not only emphasizes training on the hard samples, but also seeks the invariance between the conflicting 2D and 3D ambiguous predictions.
arXiv Detail & Related papers (2023-08-18T17:43:12Z) - D-TrAttUnet: Dual-Decoder Transformer-Based Attention Unet Architecture
for Binary and Multi-classes Covid-19 Infection Segmentation [18.231677739397977]
We propose a new Transformer-CNN based approach for Covid-19 infection segmentation from the CT slices.
The Transformer-CNN encoder is built using Transformer layers, UpResBlocks, ResBlocks and max-pooling layers.
The proposed D-TrAttUnet architecture is evaluated for both Binary and Multi-classes Covid-19 infection segmentation.
arXiv Detail & Related papers (2023-03-27T20:05:09Z) - Geometry-Aware Attenuation Learning for Sparse-View CBCT Reconstruction [53.93674177236367]
Cone Beam Computed Tomography (CBCT) plays a vital role in clinical imaging.
Traditional methods typically require hundreds of 2D X-ray projections to reconstruct a high-quality 3D CBCT image.
This has led to a growing interest in sparse-view CBCT reconstruction to reduce radiation doses.
We introduce a novel geometry-aware encoder-decoder framework to solve this problem.
arXiv Detail & Related papers (2023-03-26T14:38:42Z) - Dual Multi-scale Mean Teacher Network for Semi-supervised Infection
Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automatically detecting lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack the 3D sequential constraint.
Existing 3D CT segmentation methods focus on single-scale representations, which do not provide multiple receptive field sizes on the 3D volume.
arXiv Detail & Related papers (2022-11-10T13:11:21Z) - Res-Dense Net for 3D Covid Chest CT-scan classification [4.587122314291089]
We propose a method that uses a stacking deep neural network to detect Covid-19 from series of 3D CT-scan images.
This method achieves competitive performance on several evaluation metrics.
arXiv Detail & Related papers (2022-08-09T09:13:00Z) - Ensemble CNN models for Covid-19 Recognition and Severity Prediction From
3D CT-scan [18.231677739397977]
This work is part of the 2nd COV19D competition, where two challenges are set: Covid-19 Detection and Covid-19 Severity Detection from the CT-scans.
For Covid-19 detection from CT-scans, we proposed an ensemble of 2D Convolution blocks with Densenet-161 models.
Our proposed approaches outperformed the baseline approach on the validation data of the 2nd COV19D competition by 11% and 16% for Covid-19 detection and Covid-19 severity detection, respectively.
arXiv Detail & Related papers (2022-06-29T14:20:23Z) - CNN Filter Learning from Drawn Markers for the Detection of Suggestive
Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that requires neither large annotated datasets nor backpropagation to estimate the filters of a convolutional neural network (CNN).
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
arXiv Detail & Related papers (2021-11-16T15:03:42Z) - Automated Model Design and Benchmarking of 3D Deep Learning Models for
COVID-19 Detection with Chest CT Scans [72.04652116817238]
We propose a differentiable neural architecture search (DNAS) framework to automatically search for 3D DL models for 3D chest CT scan classification.
We also exploit the Class Activation Mapping (CAM) technique on our models to provide the interpretability of the results.
arXiv Detail & Related papers (2021-01-14T03:45:01Z) - Improving Automated COVID-19 Grading with Convolutional Neural Networks
in Computed Tomography Scans: An Ablation Study [3.072491397378425]
This paper identifies a variety of components that increase the performance of CNN-based algorithms for COVID-19 grading from CT images.
A 3D CNN with these components achieved an area under the ROC curve (AUC) of 0.934 on our test set of 105 CT scans and an AUC of 0.923 on a publicly available set of 742 CT scans.
arXiv Detail & Related papers (2020-09-21T09:58:57Z) - COVID-Net: A Tailored Deep Convolutional Neural Network Design for
Detection of COVID-19 Cases from Chest X-Ray Images [93.0013343535411]
We introduce COVID-Net, a deep convolutional neural network design tailored for the detection of COVID-19 cases from chest X-ray (CXR) images.
To the best of the authors' knowledge, COVID-Net is one of the first open source network designs for COVID-19 detection from CXR images.
We also introduce COVIDx, an open access benchmark dataset we generated, comprising 13,975 CXR images from 13,870 patient cases.
arXiv Detail & Related papers (2020-03-22T12:26:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.