On uncertainty estimation in active learning for image segmentation
- URL: http://arxiv.org/abs/2007.06364v1
- Date: Mon, 13 Jul 2020 13:20:32 GMT
- Title: On uncertainty estimation in active learning for image segmentation
- Authors: Bo Li, Tommy Sonne Alstrøm
- Abstract summary: Uncertainty estimation is important for interpreting the trustworthiness of machine learning models in many applications.
In this paper, we explore uncertainty calibration within an active learning framework for medical image segmentation.
We observe that selecting regions to annotate instead of full images leads to more well-calibrated models.
- Score: 7.05949591248206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainty estimation is important for interpreting the trustworthiness of
machine learning models in many applications. This is especially critical in
the data-driven active learning setting where the goal is to achieve a certain
accuracy with minimum labeling effort. In such settings, the model learns to
select the most informative unlabeled samples for annotation based on its
estimated uncertainty. The highly uncertain predictions are assumed to be more
informative for improving model performance. In this paper, we explore
uncertainty calibration within an active learning framework for medical image
segmentation, an area where labels often are scarce. Various uncertainty
estimation methods and acquisition strategies (regions and full images) are
investigated. We observe that selecting regions to annotate instead of full
images leads to more well-calibrated models. Additionally, we experimentally
show that annotating regions can cut 50% of pixels that need to be labeled by
humans compared to annotating full images.
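The acquisition loop the abstract describes (estimate predictive uncertainty, then query regions rather than full images) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Monte Carlo dropout estimator, the 32×32 non-overlapping region grid, and top-k selection by mean entropy are all assumptions made here for concreteness.

```python
import numpy as np

def mc_dropout_probs(model, image, n_samples=10):
    """Illustrative uncertainty estimator: average the softmax maps from
    n stochastic forward passes with dropout kept active at test time
    (Monte Carlo dropout). `model(image, dropout=True)` is a hypothetical
    callable returning an (H, W, C) probability map."""
    return np.mean([model(image, dropout=True) for _ in range(n_samples)], axis=0)

def select_regions(prob_map, region=32, k=8):
    """Score non-overlapping square regions of an (H, W, C) probability
    map by mean per-pixel predictive entropy, and return the k most
    uncertain regions as (row, col) offsets."""
    eps = 1e-12
    entropy = -np.sum(prob_map * np.log(prob_map + eps), axis=-1)  # (H, W)
    h, w = entropy.shape
    scores = []
    for r in range(0, h - region + 1, region):
        for c in range(0, w - region + 1, region):
            scores.append(((r, c), entropy[r:r + region, c:c + region].mean()))
    scores.sort(key=lambda x: x[1], reverse=True)
    return [pos for pos, _ in scores[:k]]
```

Only the selected regions would then be sent to human annotators, which is the mechanism behind the reported reduction in labeled pixels.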
Related papers
- Uncertainty evaluation of segmentation models for Earth observation [4.350621291554061]
This paper investigates methods for estimating uncertainty in semantic segmentation predictions derived from satellite imagery.
Our evaluation focuses on the practical utility of uncertainty measures, testing their ability to identify prediction errors and noise-corrupted input image regions.
arXiv Detail & Related papers (2025-10-22T13:39:28Z)
- Probably Approximately Precision and Recall Learning [60.00180898830079]
A key challenge in machine learning is the prevalence of one-sided feedback.
We introduce a Probably Approximately Correct (PAC) framework in which hypotheses are set functions that map each input to a set of labels.
We develop new algorithms that learn from positive data alone, achieving optimal sample complexity in the realizable case.
arXiv Detail & Related papers (2024-11-20T04:21:07Z)
- Virtual Category Learning: A Semi-Supervised Learning Method for Dense Prediction with Extremely Limited Labels [63.16824565919966]
This paper proposes to use confusing samples proactively without label correction.
A Virtual Category (VC) is assigned to each confusing sample in such a way that it can safely contribute to the model optimisation.
Our intriguing findings highlight the usage of VC learning in dense vision tasks.
arXiv Detail & Related papers (2023-12-02T16:23:52Z)
- Anatomically-aware Uncertainty for Semi-supervised Image Segmentation [12.175556059523863]
Semi-supervised learning relaxes the need of large pixel-wise labeled datasets for image segmentation by leveraging unlabeled data.
Uncertainty estimation methods rely on multiple model inferences that must be computed at each training step.
This work proposes a novel method to estimate segmentation uncertainty by leveraging global information from the segmentation masks.
arXiv Detail & Related papers (2023-10-24T18:03:07Z)
- Introspective Deep Metric Learning [91.47907685364036]
We propose an introspective deep metric learning framework for uncertainty-aware comparisons of images.
The proposed introspective deep metric learning (IDML) framework improves the performance of deep metric learning through uncertainty modeling.
arXiv Detail & Related papers (2023-09-11T16:21:13Z)
- Hierarchical Uncertainty Estimation for Medical Image Segmentation Networks [1.9564356751775307]
Uncertainty exists in both images (noise) and manual annotations (human errors and bias) used for model training.
We propose a simple yet effective method for estimating uncertainties at multiple levels.
We demonstrate that a deep learning segmentation network such as U-net can achieve high segmentation performance.
arXiv Detail & Related papers (2023-08-16T16:09:23Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Calibrating Ensembles for Scalable Uncertainty Quantification in Deep Learning-based Medical Segmentation [0.42008820076301906]
Uncertainty quantification in automated image analysis is highly desired in many applications.
Current uncertainty quantification approaches do not scale well in high-dimensional real-world problems.
We propose a scalable and intuitive framework to calibrate ensembles of deep learning models to produce uncertainty quantification measurements.
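As one illustration of post-hoc calibration in this spirit, a standard recipe is temperature scaling fit on held-out validation logits (for an ensemble, e.g. the mean logits across members); whether this entry's method resembles it is an assumption, and `fit_temperature` below is a hypothetical helper, not the paper's framework.

```python
import numpy as np

def softmax(z, axis=-1):
    """Numerically stable softmax."""
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def fit_temperature(logits, labels, grid=np.linspace(0.25, 16.0, 64)):
    """Grid-search the temperature T that minimizes validation negative
    log-likelihood of softmax(logits / T). Dividing overconfident logits
    by T > 1 softens the predicted probabilities toward calibration."""
    n = len(labels)
    best_t, best_nll = 1.0, np.inf
    for t in grid:
        p = softmax(logits / t)
        nll = -np.mean(np.log(p[np.arange(n), labels] + 1e-12))
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t
```

The single fitted scalar T is then applied to every prediction at test time, which is what makes this style of calibration cheap enough for high-dimensional segmentation outputs.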
arXiv Detail & Related papers (2022-09-20T09:09:48Z)
- BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks [50.15201777970128]
We propose BayesCap that learns a Bayesian identity mapping for the frozen model, allowing uncertainty estimation.
BayesCap is a memory-efficient method that can be trained on a small fraction of the original dataset.
We show the efficacy of our method on a wide variety of tasks with a diverse set of architectures.
arXiv Detail & Related papers (2022-07-14T12:50:09Z)
- Semi-supervised Deep Learning for Image Classification with Distribution Mismatch: A Survey [1.5469452301122175]
Deep learning models rely on an abundance of labelled observations for training.
Gathering labelled observations is expensive, which limits the practicality of deep learning models.
In many situations different unlabelled data sources might be available.
This raises the risk of a significant distribution mismatch between the labelled and unlabelled datasets.
arXiv Detail & Related papers (2022-03-01T02:46:00Z)
- Data-Uncertainty Guided Multi-Phase Learning for Semi-Supervised Object Detection [66.10057490293981]
We propose a data-uncertainty guided multi-phase learning method for semi-supervised object detection.
Our method substantially outperforms baseline approaches by a large margin.
arXiv Detail & Related papers (2021-03-29T09:27:23Z)
- Minimax Active Learning [61.729667575374606]
Active learning aims to develop label-efficient algorithms by querying the most representative samples to be labeled by a human annotator.
Current active learning techniques either rely on model uncertainty to select the most uncertain samples or use clustering or reconstruction to choose the most diverse set of unlabeled examples.
We develop a semi-supervised minimax entropy-based active learning algorithm that leverages both uncertainty and diversity in an adversarial manner.
arXiv Detail & Related papers (2020-12-18T19:03:40Z)
- Ask-n-Learn: Active Learning via Reliable Gradient Representations for Image Classification [29.43017692274488]
Deep predictive models rely on human supervision in the form of labeled training data.
We propose Ask-n-Learn, an active learning approach based on gradient embeddings obtained using the pseudo-labels estimated in each iteration of the algorithm.
arXiv Detail & Related papers (2020-09-30T05:19:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.