Dual Multi-scale Mean Teacher Network for Semi-supervised Infection
Segmentation in Chest CT Volume for COVID-19
- URL: http://arxiv.org/abs/2211.05548v1
- Date: Thu, 10 Nov 2022 13:11:21 GMT
- Title: Dual Multi-scale Mean Teacher Network for Semi-supervised Infection
Segmentation in Chest CT Volume for COVID-19
- Authors: Liansheng Wang, Jiacheng Wang, Lei Zhu, Huazhu Fu, Ping Li, Gary
Cheng, Zhipeng Feng, Shuo Li, and Pheng-Ann Heng
- Abstract summary: Automated detection of lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack the 3D sequential constraint.
Existing 3D CT segmentation methods focus on single-scale representations, which do not capture multiple receptive field sizes over the 3D volume.
- Score: 76.51091445670596
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated detection of lung infections from computed tomography (CT)
data plays an important role in combating COVID-19. However, several challenges
remain in developing such an AI system. 1) Most current COVID-19 infection
segmentation methods rely mainly on 2D CT images, which lack the 3D sequential
constraint. 2) Existing 3D CT segmentation methods focus on single-scale
representations, which do not capture multiple receptive field sizes over the
3D volume. 3) The sudden outbreak of COVID-19 makes it hard to annotate
sufficient CT volumes for training deep models. To address these issues, we
first build a multiple dimensional-attention convolutional neural network
(MDA-CNN) to aggregate multi-scale information along different dimensions of
the input feature maps and to impose supervision on multiple predictions from
different CNN layers. Second, we adopt this MDA-CNN as the basic network of a
novel dual multi-scale mean teacher network (DM$^2$T-Net) for semi-supervised
COVID-19 lung infection segmentation on CT volumes, leveraging unlabeled data
and exploring the multi-scale information. Our DM$^2$T-Net encourages the
multiple predictions at different CNN layers of the student and teacher
networks to be consistent, yielding a multi-scale consistency loss on
unlabeled data that is added to the supervised loss computed on the multiple
predictions of MDA-CNN for the labeled data. Third, we collect two COVID-19
segmentation datasets to evaluate our method. The experimental results show
that our network consistently outperforms the compared state-of-the-art
methods.
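The abstract describes a mean-teacher setup with a multi-scale consistency loss. The authors' implementation is not reproduced here; the following is a minimal PyTorch-style sketch of the general idea under the assumption that both student and teacher return a list of predictions from several decoder layers. All names (make_teacher, ema_update, multi_scale_consistency, training_step, lambda_c) are illustrative, not the paper's API.

```python
# Sketch of a multi-scale mean-teacher training step (hypothetical names;
# not the authors' released code). Student and teacher are assumed to return
# a list of segmentation logits from several decoder layers (multi-scale).
import copy
import torch
import torch.nn.functional as F

def make_teacher(student):
    """Teacher starts as a frozen copy of the student; it is never back-propagated."""
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    return teacher

def ema_update(teacher, student, alpha=0.99):
    """Exponential moving average of student weights into the teacher."""
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

def multi_scale_consistency(student_outs, teacher_outs):
    """Mean-squared consistency between student and teacher at every scale."""
    return sum(F.mse_loss(torch.softmax(s, dim=1), torch.softmax(t, dim=1))
               for s, t in zip(student_outs, teacher_outs)) / len(student_outs)

def training_step(student, teacher, labeled, unlabeled, optimizer, lambda_c=0.1):
    x_l, y_l = labeled    # labeled CT volume and voxel-wise infection mask
    x_u = unlabeled       # unlabeled CT volume
    # Supervised loss over every prediction head of the student (deep supervision).
    sup_outs = student(x_l)
    sup_loss = sum(F.cross_entropy(o, y_l) for o in sup_outs) / len(sup_outs)
    # Consistency loss on unlabeled data between student and EMA teacher.
    with torch.no_grad():
        teach_outs = teacher(x_u)
    stud_outs = student(x_u)
    cons_loss = multi_scale_consistency(stud_outs, teach_outs)
    loss = sup_loss + lambda_c * cons_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```

In this kind of scheme, lambda_c is typically ramped up over training so that the consistency term only dominates once the student produces reasonable predictions.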
Related papers
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image
Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z) - Cross-Site Severity Assessment of COVID-19 from CT Images via Domain
Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images greatly helps estimate intensive care unit (ICU) events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z) - Multi-Slice Dense-Sparse Learning for Efficient Liver and Tumor
Segmentation [4.150096314396549]
Deep convolutional neural networks (DCNNs) have achieved tremendous success in 2D and 3D medical image segmentation.
We propose a novel dense-sparse training flow from a data perspective, in which densely adjacent slices and sparsely adjacent slices are extracted as inputs to regularize DCNNs.
We also design a 2.5D lightweight nnU-Net from a network perspective, in which depthwise separable convolutions are adopted to improve efficiency.
arXiv Detail & Related papers (2021-08-15T15:29:48Z) - COVID-19 identification from volumetric chest CT scans using a
progressively resized 3D-CNN incorporating segmentation, augmentation, and
class-rebalancing [4.446085353384894]
COVID-19 is a global pandemic disease spreading rapidly worldwide.
Computer-aided screening tools with greater sensitivity are imperative for disease diagnosis and prognosis.
This article proposes a 3D Convolutional Neural Network (CNN)-based classification approach.
arXiv Detail & Related papers (2021-02-11T18:16:18Z) - HIVE-Net: Centerline-Aware HIerarchical View-Ensemble Convolutional
Network for Mitochondria Segmentation in EM Images [3.1498833540989413]
We introduce a novel hierarchical view-ensemble convolution (HVEC) to learn 3D spatial contexts using more efficient 2D convolutions.
The proposed method performs favorably against the state-of-the-art methods in accuracy and visual quality but with a greatly reduced model size.
arXiv Detail & Related papers (2021-01-08T06:56:40Z) - TSGCNet: Discriminative Geometric Feature Learning with Two-Stream
GraphConvolutional Network for 3D Dental Model Segmentation [141.2690520327948]
We propose a two-stream graph convolutional network (TSGCNet) to learn multi-view information from different geometric attributes.
We evaluate our proposed TSGCNet on a real-patient dataset of dental models acquired by 3D intraoral scanners.
arXiv Detail & Related papers (2020-12-26T08:02:56Z) - Improving Automated COVID-19 Grading with Convolutional Neural Networks
in Computed Tomography Scans: An Ablation Study [3.072491397378425]
This paper identifies a variety of components that increase the performance of CNN-based algorithms for COVID-19 grading from CT images.
A 3D CNN with these components achieved an area under the ROC curve (AUC) of 0.934 on our test set of 105 CT scans and an AUC of 0.923 on a publicly available set of 742 CT scans.
arXiv Detail & Related papers (2020-09-21T09:58:57Z) - M2Net: Multi-modal Multi-channel Network for Overall Survival Time
Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z) - Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)