High-confidence pseudo-labels for domain adaptation in COVID-19 detection
- URL: http://arxiv.org/abs/2403.13509v1
- Date: Wed, 20 Mar 2024 11:12:57 GMT
- Title: High-confidence pseudo-labels for domain adaptation in COVID-19 detection
- Authors: Robert Turnbull, Simon Mutch
- Abstract summary: This paper outlines our submission for the 4th COV19D competition.
The competition consists of two challenges.
The first is to train a classifier to detect the presence of COVID-19 from over one thousand CT scans from the COV19-CT-DB database.
- Score: 8.28720658988688
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper outlines our submission for the 4th COV19D competition as part of the `Domain adaptation, Explainability, Fairness in AI for Medical Image Analysis' (DEF-AI-MIA) workshop at the Computer Vision and Pattern Recognition Conference (CVPR). The competition consists of two challenges. The first is to train a classifier to detect the presence of COVID-19 from over one thousand CT scans from the COV19-CT-DB database. The second challenge is to perform domain adaptation by taking the dataset from Challenge 1 and adding a small number of scans (some annotated and others not) from a different distribution. We preprocessed the CT scans to segment the lungs and produced output volumes containing the lungs individually and together. We then trained 3D ResNet and Swin Transformer models on these inputs. We annotated the unlabeled CT scans using an ensemble of these models and chose the high-confidence predictions as pseudo-labels for fine-tuning. This resulted in a best cross-validation mean F1 score of 93.39\% for Challenge 1 and a mean F1 score of 92.15\% for Challenge 2.
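The pseudo-labelling step described in the abstract can be sketched in a few lines. The function name, the 0.9 confidence threshold, and the averaging rule below are illustrative assumptions, not the authors' released code:

```python
import numpy as np

def select_pseudo_labels(ensemble_probs, threshold=0.9):
    """Keep only scans where the ensemble is confidently positive or negative.

    ensemble_probs: array of shape (n_models, n_scans) holding each model's
    predicted probability that a scan shows COVID-19.
    Returns indices of the confident scans and their hard pseudo-labels.
    """
    mean_prob = ensemble_probs.mean(axis=0)                      # average over models
    confident = (mean_prob >= threshold) | (mean_prob <= 1 - threshold)
    idx = np.where(confident)[0]
    pseudo_labels = (mean_prob[idx] >= 0.5).astype(int)          # 1 = COVID-19
    return idx, pseudo_labels

# Example: three models scoring four unlabeled scans
probs = np.array([[0.97, 0.55, 0.02, 0.60],
                  [0.95, 0.48, 0.05, 0.70],
                  [0.99, 0.60, 0.01, 0.65]])
idx, labels = select_pseudo_labels(probs)
print(idx, labels)   # [0 2] [1 0]: only scans 0 and 2 pass the confidence filter
```

Scans whose averaged ensemble probability sits between the two cut-offs are simply left unlabeled rather than risking noisy pseudo-labels during fine-tuning.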
Related papers
- Domain adaptation, Explainability & Fairness in AI for Medical Image Analysis: Diagnosis of COVID-19 based on 3-D Chest CT-scans [19.84888289470376]
The paper presents the DEF-AI-MIA COV19D Competition.
The Competition is the 4th in the series, following the first three Competitions held in the framework of ICCV 2021, ECCV 2022 and ICASSP 2023.
The paper presents the baseline models used in the Challenges and the performance each obtained.
arXiv Detail & Related papers (2024-03-04T16:31:58Z)
- COVID-19 detection using ViT transformer-based approach from Computed Tomography Images [0.0]
We introduce a novel approach to enhance the accuracy and efficiency of COVID-19 diagnosis using CT images.
We employ the base ViT Transformer configured for 224x224-sized input images, modifying the output to suit the binary classification task.
Our method implements a systematic patient-level prediction strategy, classifying individual CT slices as COVID-19 or non-COVID.
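A minimal sketch of such a patient-level strategy, assuming per-slice probabilities are already available; the voting rule and thresholds are illustrative, not the paper's exact procedure:

```python
import numpy as np

def patient_level_prediction(slice_probs, slice_threshold=0.5, min_positive_fraction=0.25):
    """Aggregate per-slice COVID-19 probabilities into one patient-level label.

    A patient is called positive when a sufficient fraction of slices is
    individually positive; the rule and thresholds here are illustrative.
    """
    positive_fraction = (np.asarray(slice_probs) >= slice_threshold).mean()
    return int(positive_fraction >= min_positive_fraction)

print(patient_level_prediction([0.1, 0.2, 0.8, 0.9, 0.7]))  # 1: 3 of 5 slices positive
print(patient_level_prediction([0.1, 0.2, 0.3, 0.6, 0.1]))  # 0: only 1 of 5 positive
```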
arXiv Detail & Related papers (2023-10-12T09:37:56Z)
- Dual Multi-scale Mean Teacher Network for Semi-supervised Infection Segmentation in Chest CT Volume for COVID-19 [76.51091445670596]
Automated detection of lung infections from computed tomography (CT) data plays an important role in combating COVID-19.
Most current COVID-19 infection segmentation methods rely mainly on 2D CT images, which lack 3D sequential constraints.
Existing 3D CT segmentation methods focus on single-scale representations and do not achieve multiple receptive field sizes on the 3D volume.
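The mean-teacher scheme named in the paper's title conventionally keeps a teacher model whose weights are an exponential moving average (EMA) of the student's; a generic sketch of that update (not the paper's code):

```python
import torch

@torch.no_grad()
def update_teacher(student: torch.nn.Module, teacher: torch.nn.Module, ema_decay: float = 0.99):
    """Mean-teacher update: teacher weights track an exponential moving
    average of the student weights after every optimizer step."""
    for t_param, s_param in zip(teacher.parameters(), student.parameters()):
        t_param.mul_(ema_decay).add_(s_param, alpha=1.0 - ema_decay)
```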
arXiv Detail & Related papers (2022-11-10T13:11:21Z)
- FetReg2021: A Challenge on Placental Vessel Segmentation and Registration in Fetoscopy [52.3219875147181]
Fetoscopic laser photocoagulation is a widely adopted procedure for treating Twin-to-Twin Transfusion Syndrome (TTTS)
The procedure is particularly challenging due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility, and variability in illumination.
Computer-assisted intervention (CAI) can provide surgeons with decision support and context awareness by identifying key structures in the scene and expanding the fetoscopic field of view through video mosaicking.
Seven teams participated in this challenge and their model performance was assessed on an unseen test dataset of 658 pixel-annotated images from 6 fetoscopic videos.
arXiv Detail & Related papers (2022-06-24T23:44:42Z)
- CNN Filter Learning from Drawn Markers for the Detection of Suggestive Signs of COVID-19 in CT Images [58.720142291102135]
We propose a method that does not require either large annotated datasets or backpropagation to estimate the filters of a convolutional neural network (CNN)
For a few CT images, the user draws markers at representative normal and abnormal regions.
The method generates a feature extractor composed of a sequence of convolutional layers, whose kernels are specialized in enhancing regions similar to the marked ones.
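One plausible way to build such kernels without backpropagation is to cluster normalized patches centered on the marked pixels and use the cluster centroids as filters; the sketch below assumes that scheme and hypothetical helper names, and may differ from the paper's exact procedure:

```python
import numpy as np
from sklearn.cluster import KMeans

def kernels_from_markers(image, marker_coords, patch_size=5, n_kernels=8):
    """Estimate convolutional kernels from marker pixels without backpropagation.

    Patches centered on the marked pixels are mean-subtracted, normalized and
    clustered; the cluster centroids serve as the layer's kernels.
    """
    half = patch_size // 2
    patches = []
    for y, x in marker_coords:
        patch = image[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        if patch.shape != (patch_size, patch_size):
            continue                               # skip markers too close to the border
        patch = patch - patch.mean()
        norm = np.linalg.norm(patch)
        patches.append((patch / norm if norm > 0 else patch).ravel())
    centroids = KMeans(n_clusters=n_kernels, n_init=10).fit(np.array(patches)).cluster_centers_
    return centroids.reshape(n_kernels, patch_size, patch_size)

# Example with a synthetic image and random marker positions
rng = np.random.default_rng(0)
img = rng.random((64, 64))
markers = [tuple(p) for p in rng.integers(5, 59, size=(40, 2))]
print(kernels_from_markers(img, markers).shape)  # (8, 5, 5)
```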
arXiv Detail & Related papers (2021-11-16T15:03:42Z)
- Cross-Site Severity Assessment of COVID-19 from CT Images via Domain Adaptation [64.59521853145368]
Early and accurate severity assessment of Coronavirus disease 2019 (COVID-19) based on computed tomography (CT) images offers great help in estimating intensive care unit (ICU) events.
To augment the labeled data and improve the generalization ability of the classification model, it is necessary to aggregate data from multiple sites.
This task faces several challenges including class imbalance between mild and severe infections, domain distribution discrepancy between sites, and presence of heterogeneous features.
arXiv Detail & Related papers (2021-09-08T07:56:51Z)
- Rotation Invariance and Extensive Data Augmentation: a strategy for the Mitosis Domain Generalization (MIDOG) Challenge [1.52292571922932]
We present the strategy we applied to participate in the MIDOG 2021 competition.
The purpose of the competition was to evaluate the generalization of solutions to images acquired with unseen target scanners.
We propose a solution based on a combination of state-of-the-art deep learning methods.
arXiv Detail & Related papers (2021-09-02T10:09:02Z)
- COVID-VIT: Classification of COVID-19 from CT chest images based on vision transformer models [0.8594140167290099]
This paper is responding to the MIA-COV19 challenge to classify COVID from non-COVID based on CT lung images.
The overarching aim is to predict the diagnosis of the COVID-19 virus from chest radiographs.
Two deep learning methods are studied: the vision transformer (ViT), based on attention models, and DenseNet, which is built upon a conventional convolutional neural network (CNN).
arXiv Detail & Related papers (2021-07-04T16:55:51Z)
- Dual-Consistency Semi-Supervised Learning with Uncertainty Quantification for COVID-19 Lesion Segmentation from CT Images [49.1861463923357]
We propose an uncertainty-guided dual-consistency learning network (UDC-Net) for semi-supervised COVID-19 lesion segmentation from CT images.
Our proposed UDC-Net improves the fully supervised method by 6.3% in Dice and outperforms other competitive semi-supervised approaches by significant margins.
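A generic sketch of uncertainty-guided consistency on unlabeled voxels, assuming per-voxel predictions from two branches and an uncertainty map (e.g. predictive entropy); this illustrates the idea rather than UDC-Net's exact losses:

```python
import torch

def uncertainty_weighted_consistency(pred_a, pred_b, uncertainty, threshold=0.3):
    """Consistency loss between two predictions on unlabeled voxels, applied
    only where the uncertainty estimate (e.g. predictive entropy) is low."""
    mask = (uncertainty < threshold).float()
    squared_diff = (pred_a - pred_b) ** 2
    return (squared_diff * mask).sum() / mask.sum().clamp(min=1.0)

# Example on a toy 3-D volume
a, b = torch.rand(1, 1, 8, 8, 8), torch.rand(1, 1, 8, 8, 8)
unc = torch.rand(1, 1, 8, 8, 8)
print(uncertainty_weighted_consistency(a, b, unc))
```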
arXiv Detail & Related papers (2021-04-07T16:23:35Z)
- Multi-Task Driven Explainable Diagnosis of COVID-19 using Chest X-ray Images [61.24431480245932]
COVID-19 Multi-Task Network is an automated end-to-end network for COVID-19 screening.
We manually annotate the lung regions of 9000 frontal chest radiographs taken from ChestXray-14, CheXpert and a consolidated COVID-19 dataset.
This database will be released to the research community.
arXiv Detail & Related papers (2020-08-03T12:52:23Z)
- Automated Chest CT Image Segmentation of COVID-19 Lung Infection based on 3D U-Net [0.0]
The coronavirus disease 2019 (COVID-19) affects billions of lives around the world and has a significant impact on public healthcare.
We propose an innovative automated segmentation pipeline for COVID-19 infected regions.
Our method focuses on on-the-fly generation of unique, random image patches for training, produced by applying several preprocessing steps.
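A minimal sketch of on-the-fly random 3D patch generation from a CT volume and its label mask; the patch size and sampling rule are assumptions, and the paper's additional preprocessing is not reproduced:

```python
import numpy as np

def random_patch_generator(volume, mask, patch_shape=(64, 64, 64), rng=None):
    """Yield random 3-D patches (and matching label patches) from a CT volume,
    so that every training step sees a freshly generated crop."""
    rng = rng or np.random.default_rng()
    while True:
        starts = [rng.integers(0, dim - p + 1) for dim, p in zip(volume.shape, patch_shape)]
        window = tuple(slice(s, s + p) for s, p in zip(starts, patch_shape))
        yield volume[window], mask[window]

# Example with a synthetic volume and label mask
vol = np.zeros((128, 256, 256), dtype=np.float32)
lab = np.zeros_like(vol, dtype=np.uint8)
patch, label = next(random_patch_generator(vol, lab))
print(patch.shape, label.shape)  # (64, 64, 64) (64, 64, 64)
```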
arXiv Detail & Related papers (2020-06-24T17:29:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.