TAAL: Test-time Augmentation for Active Learning in Medical Image
Segmentation
- URL: http://arxiv.org/abs/2301.06624v1
- Date: Mon, 16 Jan 2023 22:19:41 GMT
- Authors: Mélanie Gaillochet, Christian Desrosiers, and Hervé Lombaert
- Abstract summary: This paper proposes Test-time Augmentation for Active Learning (TAAL), a novel semi-supervised active learning approach for segmentation.
Our results on a publicly-available dataset of cardiac images show that TAAL outperforms existing baseline methods in both fully-supervised and semi-supervised settings.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning methods typically depend on the availability of labeled data,
which is expensive and time-consuming to obtain. Active learning addresses such
effort by prioritizing which samples are best to annotate in order to maximize
the performance of the task model. While frameworks for active learning have
been widely explored in the context of classification of natural images, they
have been only sparsely used in medical image segmentation. The challenge
resides in obtaining an uncertainty measure that reveals the best candidate
data for annotation. This paper proposes Test-time Augmentation for Active
Learning (TAAL), a novel semi-supervised active learning approach for
segmentation that exploits the uncertainty information offered by data
transformations. Our method applies cross-augmentation consistency during
training and inference to both improve model learning in a semi-supervised
fashion and identify the most relevant unlabeled samples to annotate next. In
addition, our consistency loss uses a modified version of the Jensen-Shannon divergence (JSD) to further
improve model performance. By relying on data transformations rather than on
external modules or simple heuristics typically used in uncertainty-based
strategies, TAAL emerges as a simple, yet powerful task-agnostic
semi-supervised active learning approach applicable to the medical domain. Our
results on a publicly-available dataset of cardiac images show that TAAL
outperforms existing baseline methods in both fully-supervised and
semi-supervised settings. Our implementation is publicly available on
https://github.com/melinphd/TAAL.
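The abstract's core idea, scoring an unlabeled sample by how much its predictions disagree across test-time augmentations, can be illustrated with a short sketch. This is a minimal illustration using the standard Jensen-Shannon divergence, not the authors' modified JSD or their exact pipeline; `model`, `augmentations`, and `invert` are hypothetical placeholders standing in for a trained segmentation network, a set of transforms, and the inverse transforms that map predictions back to the original image space.

```python
import numpy as np

def jensen_shannon_divergence(probs):
    """Standard JSD across a set of predictive distributions.

    probs: array of shape (n_views, n_classes), each row a softmax
    output for one augmented view of the same input. For segmentation,
    the same computation would be applied per pixel and aggregated.
    """
    eps = 1e-12
    mean = probs.mean(axis=0)  # mixture distribution M
    entropy_of_mean = -np.sum(mean * np.log(mean + eps))
    mean_of_entropies = -np.sum(probs * np.log(probs + eps), axis=1).mean()
    # JSD = H(M) - (1/n) * sum_i H(P_i); zero iff all views agree
    return entropy_of_mean - mean_of_entropies

def tta_uncertainty(model, image, augmentations, invert):
    """Score one unlabeled image by cross-augmentation disagreement.

    Hypothetical interface: invert[i] undoes augmentations[i] in
    prediction space so the views are comparable.
    """
    preds = [invert[i](model(aug(image))) for i, aug in enumerate(augmentations)]
    return jensen_shannon_divergence(np.stack(preds))
```

In an active-learning loop, the samples with the highest `tta_uncertainty` scores would be the ones selected for annotation next.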
Related papers
- SS-ADA: A Semi-Supervised Active Domain Adaptation Framework for Semantic Segmentation [25.929173344653158]
We propose a novel semi-supervised active domain adaptation (SS-ADA) framework for semantic segmentation.
SS-ADA integrates active learning into semi-supervised semantic segmentation to achieve the accuracy of supervised learning.
We conducted extensive experiments on synthetic-to-real and real-to-real domain adaptation settings.
arXiv Detail & Related papers (2024-06-17T13:40:42Z)
- Combating Missing Modalities in Egocentric Videos at Test Time [92.38662956154256]
Real-world applications often face challenges with incomplete modalities due to privacy concerns, efficiency needs, or hardware issues.
We propose a novel approach to address this issue at test time without requiring retraining.
MiDl represents the first self-supervised, online solution for handling missing modalities exclusively at test time.
arXiv Detail & Related papers (2024-04-23T16:01:33Z)
- Test-Time Training for Semantic Segmentation with Output Contrastive Loss [12.535720010867538]
Deep learning-based segmentation models have achieved impressive performance on public benchmarks, but generalizing well to unseen environments remains a major challenge.
This paper introduces Output Contrastive Loss (OCL), known for its capability to learn robust and generalized representations, to stabilize the adaptation process.
Our method excels even when applied to models initially pre-trained using domain adaptation methods on test domain data, showcasing its resilience and adaptability.
arXiv Detail & Related papers (2023-11-14T03:13:47Z)
- Active learning for medical image segmentation with stochastic batches [13.171801108109198]
To reduce manual labelling, active learning (AL) targets the most informative samples from the unlabelled set to annotate and add to the labelled training set.
This work aims to take advantage of the diversity and speed offered by random sampling to improve the selection of uncertainty-based AL methods for segmenting medical images.
arXiv Detail & Related papers (2023-01-18T17:25:55Z)
- Temporal Output Discrepancy for Loss Estimation-based Active Learning [65.93767110342502]
We present a novel deep active learning approach that queries the oracle for data annotation when the unlabeled sample is believed to incorporate high loss.
Our approach achieves superior performances than the state-of-the-art active learning methods on image classification and semantic segmentation tasks.
arXiv Detail & Related papers (2022-12-20T19:29:37Z)
- BERT WEAVER: Using WEight AVERaging to enable lifelong learning for transformer-based models in biomedical semantic search engines [49.75878234192369]
We present WEAVER, a simple, yet efficient post-processing method that infuses old knowledge into the new model.
We show that applying WEAVER in a sequential manner results in similar word embedding distributions as doing a combined training on all data at once.
arXiv Detail & Related papers (2022-02-21T10:34:41Z)
- Low-Regret Active Learning [64.36270166907788]
We develop an online learning algorithm for identifying unlabeled data points that are most informative for training.
At the core of our work is an efficient algorithm for sleeping experts that is tailored to achieve low regret on predictable (easy) instances.
arXiv Detail & Related papers (2021-04-06T22:53:45Z)
- Ask-n-Learn: Active Learning via Reliable Gradient Representations for Image Classification [29.43017692274488]
Deep predictive models rely on human supervision in the form of labeled training data.
We propose Ask-n-Learn, an active learning approach based on gradient embeddings obtained using the pseudo-labels estimated in each iteration of the algorithm.
arXiv Detail & Related papers (2020-09-30T05:19:56Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To tackle this, we propose to apply a dataset distillation strategy to compress the created dataset into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
- Confident Coreset for Active Learning in Medical Image Analysis [57.436224561482966]
We propose a novel active learning method, confident coreset, which considers both uncertainty and distribution for effectively selecting informative samples.
By comparative experiments on two medical image analysis tasks, we show that our method outperforms other active learning methods.
arXiv Detail & Related papers (2020-04-05T13:46:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.