A Weakly Supervised Region-Based Active Learning Method for COVID-19
Segmentation in CT Images
- URL: http://arxiv.org/abs/2007.07012v1
- Date: Tue, 7 Jul 2020 16:38:04 GMT
- Title: A Weakly Supervised Region-Based Active Learning Method for COVID-19
Segmentation in CT Images
- Authors: Issam Laradji, Pau Rodriguez, Frederic Branchaud-Charron, Keegan
Lensink, Parmida Atighehchian, William Parker, David Vazquez, and Derek
Nowrouzezahrai
- Abstract summary: Labeling CT scans can take a lot of time and effort, with up to 150 minutes per scan.
We introduce a scalable, fast, and accurate active learning system that accelerates the labeling of CT scan images.
- Score: 17.42747482530237
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the key challenges in the battle against the Coronavirus (COVID-19)
pandemic is to detect and quantify the severity of the disease in a timely
manner. Computed tomographies (CT) of the lungs are effective for assessing the
state of the infection. Unfortunately, labeling CT scans can take a lot of time
and effort, with up to 150 minutes per scan. We address this challenge by
introducing a scalable, fast, and accurate active learning system that
accelerates the labeling of CT scan images. Conventionally, active learning
methods require the labelers to annotate whole images with full supervision,
but that can lead to wasted efforts as many of the annotations could be
redundant. Thus, our system presents the annotator with unlabeled regions that
promise high information content and low annotation cost. Further, the system
allows annotators to label regions using point-level supervision, which is much
cheaper to acquire than per-pixel annotations. Our experiments on open-source
COVID-19 datasets show that using an entropy-based method to rank unlabeled
regions yields significantly better results than random labeling of these
regions. Also, we show that labeling small regions of images is more efficient
than labeling whole images. Finally, we show that only 7\% of the labeling
effort required to label the whole training set yields around 90\% of the
performance obtained by training the model on the fully annotated training set.
Code is available at:
\url{https://github.com/IssamLaradji/covid19_active_learning}.
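To make the entropy-based region ranking described above concrete, here is a minimal sketch in PyTorch. It assumes a segmentation model that outputs per-pixel class logits; the function names, region size, and pooling strategy are illustrative assumptions rather than the authors' released implementation (see the repository above for that).

```python
# Minimal sketch of entropy-based region ranking for active learning.
# Assumes a PyTorch segmentation model returning per-pixel class logits;
# names and the region size are illustrative, not the paper's exact code.
import torch
import torch.nn.functional as F

def region_entropy_scores(logits, region_size=64):
    """Average per-pixel predictive entropy over non-overlapping regions.

    logits: tensor of shape (C, H, W) for one unlabeled CT slice.
    Returns a (H // region_size, W // region_size) grid of region scores.
    """
    probs = F.softmax(logits, dim=0)                                # (C, H, W)
    pixel_entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=0)   # (H, W)
    # Average the entropy inside each region via non-overlapping pooling.
    return F.avg_pool2d(pixel_entropy[None, None], kernel_size=region_size)[0, 0]

def rank_regions(model, unlabeled_slices, region_size=64, top_k=10):
    """Return the top_k (slice_index, row, col) regions with the highest entropy."""
    candidates = []
    model.eval()
    with torch.no_grad():
        for idx, image in enumerate(unlabeled_slices):   # image: (1, H, W) CT slice
            logits = model(image[None])[0]               # (C, H, W)
            scores = region_entropy_scores(logits, region_size)
            for r in range(scores.shape[0]):
                for c in range(scores.shape[1]):
                    candidates.append((scores[r, c].item(), idx, r, c))
    candidates.sort(reverse=True)
    # The highest-entropy regions are presented to the annotator first.
    return [(idx, r, c) for _, idx, r, c in candidates[:top_k]]
```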
Related papers
- Localized Region Contrast for Enhancing Self-Supervised Learning in
Medical Image Segmentation [27.82940072548603]
We propose a novel contrastive learning framework that integrates Localized Region Contrast (LRC) to enhance existing self-supervised pre-training methods for medical image segmentation.
Our approach involves identifying super-pixels with Felzenszwalb's algorithm and performing local contrastive learning using a novel contrastive sampling loss.
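As a rough illustration of this kind of pipeline, the sketch below extracts super-pixels with scikit-image's felzenszwalb function and contrasts average-pooled region features across two augmented views with an InfoNCE-style loss; the pooling and loss details are assumptions and may differ from the paper's contrastive sampling loss.

```python
# Sketch: Felzenszwalb super-pixels as local regions for contrastive learning.
# The feature pooling and the InfoNCE-style loss are assumptions; the paper's
# contrastive sampling loss may be defined differently.
import numpy as np
import torch
import torch.nn.functional as F
from skimage.segmentation import felzenszwalb

def superpixel_features(image_np, feature_map):
    """Pool a (D, H, W) feature map over Felzenszwalb super-pixels of a 2D image."""
    segments = felzenszwalb(image_np, scale=100, sigma=0.5, min_size=50)  # (H, W) labels
    feats = []
    for label in np.unique(segments):
        mask = torch.from_numpy(segments == label)        # boolean region mask
        feats.append(feature_map[:, mask].mean(dim=1))    # average-pool the region
    return torch.stack(feats)                             # (num_superpixels, D)

def local_contrastive_loss(feats_a, feats_b, temperature=0.1):
    """InfoNCE between two views: super-pixel i in view A should match i in view B."""
    a = F.normalize(feats_a, dim=1)
    b = F.normalize(feats_b, dim=1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.shape[0])
    return F.cross_entropy(logits, targets)
```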
arXiv Detail & Related papers (2023-04-06T22:43:13Z) - Weakly Supervised Learning Significantly Reduces the Number of Labels
Required for Intracranial Hemorrhage Detection on Head CT [7.713240800142863]
Machine learning pipelines, in particular those based on deep learning (DL) models, require large amounts of labeled data.
This work studies the question of what kind of labels should be collected for the problem of intracranial hemorrhage detection in brain CT.
We find that strong supervision (i.e., learning with local image-level annotations) and weak supervision (i.e., learning with only global examination-level labels) achieve comparable performance.
arXiv Detail & Related papers (2022-11-29T04:42:41Z) - Pseudo-label Guided Cross-video Pixel Contrast for Robotic Surgical
Scene Segmentation with Limited Annotations [72.15956198507281]
We propose PGV-CL, a novel pseudo-label guided cross-video contrast learning method to boost scene segmentation.
We extensively evaluate our method on a public robotic surgery dataset EndoVis18 and a public cataract dataset CaDIS.
arXiv Detail & Related papers (2022-07-20T05:42:19Z) - Self-Supervised Learning as a Means To Reduce the Need for Labeled Data
in Medical Image Analysis [64.4093648042484]
We use a dataset of chest X-ray images with bounding box labels for 13 different classes of anomalies.
We show that it is possible to achieve similar performance to a fully supervised model in terms of mean average precision and accuracy with only 60% of the labeled data.
arXiv Detail & Related papers (2022-06-01T09:20:30Z) - Robust Medical Image Classification from Noisy Labeled Data with Global
and Local Representation Guided Co-training [73.60883490436956]
We propose a novel collaborative training paradigm with global and local representation learning for robust medical image classification.
We employ the self-ensemble model with a noisy label filter to efficiently select the clean and noisy samples.
We also design a novel global and local representation learning scheme to implicitly regularize the networks to utilize noisy samples.
arXiv Detail & Related papers (2022-05-10T07:50:08Z) - Weakly-supervised Generative Adversarial Networks for medical image
classification [1.479639149658596]
We propose a novel medical image classification algorithm called Weakly-Supervised Generative Adversarial Networks (WSGAN)
WSGAN uses only a small number of unlabeled real images to generate fake images or mask images that enlarge the training set.
We show that WSGAN can obtain relatively high learning performance by using few labeled and unlabeled data.
arXiv Detail & Related papers (2021-11-29T15:38:48Z) - Semi-supervised Contrastive Learning for Label-efficient Medical Image
Segmentation [11.935891325600952]
We propose a supervised local contrastive loss that leverages limited pixel-wise annotation to force pixels with the same label to gather around in the embedding space.
With different amounts of labeled data, our methods consistently outperform the state-of-the-art contrast-based methods and other semi-supervised learning techniques.
arXiv Detail & Related papers (2021-09-15T16:23:48Z) - Mixed Supervision Learning for Whole Slide Image Classification [88.31842052998319]
We propose a mixed supervision learning framework for super high-resolution images.
During the patch training stage, this framework can make use of coarse image-level labels to refine self-supervised learning.
A comprehensive strategy is proposed to suppress pixel-level false positives and false negatives.
arXiv Detail & Related papers (2021-07-02T09:46:06Z) - Grafit: Learning fine-grained image representations with coarse labels [114.17782143848315]
This paper tackles the problem of learning a finer representation than the one provided by training labels.
By jointly leveraging the coarse labels and the underlying fine-grained latent space, it significantly improves the accuracy of category-level retrieval methods.
arXiv Detail & Related papers (2020-11-25T19:06:26Z) - Multi-label Thoracic Disease Image Classification with Cross-Attention
Networks [65.37531731899837]
We propose a novel scheme of Cross-Attention Networks (CAN) for automated thoracic disease classification from chest x-ray images.
We also design a new loss function that goes beyond cross-entropy loss to aid the cross-attention process and is able to overcome the imbalance between classes and the dominance of easy samples within each class.
arXiv Detail & Related papers (2020-07-21T14:37:00Z) - A Weakly Supervised Consistency-based Learning Method for COVID-19
Segmentation in CT Images [11.778195406694206]
Coronavirus Disease 2019 (COVID-19) has spread aggressively across the world causing an existential health crisis.
A system that automatically detects COVID-19 in computed tomography (CT) images can assist in quantifying the severity of the illness.
We address these labelling challenges by only requiring point annotations, a single pixel for each infected region on a CT image.
arXiv Detail & Related papers (2020-07-04T20:41:17Z)
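Point-level supervision, used both in the paper above and in this related consistency-based work, can be approximated by a cross-entropy loss evaluated only at the annotated pixels. The sketch below is an illustrative assumption (the ignore-index convention and the exact loss form are not taken from either paper).

```python
# Sketch of point-level supervision: cross-entropy evaluated only at the few
# annotated pixels (e.g., one point per infected region plus background points).
# The ignore-index convention is an assumption, not either paper's exact loss.
import torch
import torch.nn.functional as F

IGNORE = -1  # marks pixels that received no point annotation

def point_supervision_loss(logits, point_labels):
    """logits: (N, C, H, W); point_labels: (N, H, W) filled with IGNORE almost everywhere."""
    return F.cross_entropy(logits, point_labels, ignore_index=IGNORE)

# Toy example: a 2-class slice where only two pixels carry labels.
logits = torch.randn(1, 2, 8, 8, requires_grad=True)
labels = torch.full((1, 8, 8), IGNORE, dtype=torch.long)
labels[0, 2, 3] = 1   # one point inside an infected region
labels[0, 6, 6] = 0   # one background point
loss = point_supervision_loss(logits, labels)
loss.backward()        # gradients flow only from the annotated points
```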