Double-Uncertainty Weighted Method for Semi-supervised Learning
- URL: http://arxiv.org/abs/2010.09298v1
- Date: Mon, 19 Oct 2020 08:20:18 GMT
- Title: Double-Uncertainty Weighted Method for Semi-supervised Learning
- Authors: Yixin Wang, Yao Zhang, Jiang Tian, Cheng Zhong, Zhongchao Shi, Yang
Zhang, Zhiqiang He
- Abstract summary: We propose a double-uncertainty weighted method for semi-supervised segmentation based on the teacher-student model.
We train the teacher model using Bayesian deep learning to obtain double-uncertainty, i.e. segmentation uncertainty and feature uncertainty.
Our method outperforms the state-of-the-art uncertainty-based semi-supervised methods on two public medical datasets.
- Score: 32.484750353853954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Though deep learning has achieved advanced performance recently, it remains a
challenging task in the field of medical imaging, as obtaining reliable labeled
training data is time-consuming and expensive. In this paper, we propose a
double-uncertainty weighted method for semi-supervised segmentation based on
the teacher-student model. The teacher model provides guidance for the student
model by penalizing their inconsistent predictions on both labeled and unlabeled
data. We train the teacher model using Bayesian deep learning to obtain
double-uncertainty, i.e., segmentation uncertainty and feature uncertainty. To
our knowledge, this is the first work to extend segmentation uncertainty
estimation to feature uncertainty, which reveals the model's capability to
capture information across channels. A learnable uncertainty consistency loss
is designed for the
unsupervised learning process in an interactive manner between prediction and
uncertainty. Even without ground-truth supervision, it still encourages more
accurate teacher predictions and helps the model reduce uncertain estimates.
Furthermore, our proposed double-uncertainty serves as a
weight on each inconsistency penalty to balance and harmonize supervised and
unsupervised training processes. We validate the proposed feature uncertainty
and loss function through qualitative and quantitative analyses. Experimental
results show that our method outperforms the state-of-the-art uncertainty-based
semi-supervised methods on two public medical datasets.
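The abstract's core mechanism can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: shapes, the MC-dropout entropy estimate, and the fixed `exp(-entropy)` weight are illustrative assumptions. Segmentation uncertainty is taken as the predictive entropy over T stochastic teacher passes, and the student-teacher consistency penalty is down-weighted wherever the teacher is uncertain.

```python
import numpy as np

def predictive_entropy(teacher_probs):
    # teacher_probs: (T, C, H, W) softmax outputs from T stochastic
    # (e.g. MC-dropout) forward passes of the teacher.
    mean_p = teacher_probs.mean(axis=0)                    # (C, H, W)
    return -(mean_p * np.log(mean_p + 1e-8)).sum(axis=0)   # (H, W)

def weighted_consistency_loss(student_probs, teacher_probs):
    # Penalize student-teacher disagreement, but down-weight pixels
    # where the teacher is uncertain (high predictive entropy).
    mean_teacher = teacher_probs.mean(axis=0)              # (C, H, W)
    w = np.exp(-predictive_entropy(teacher_probs))         # (H, W), in (0, 1]
    per_pixel_mse = ((student_probs - mean_teacher) ** 2).mean(axis=0)
    return (w * per_pixel_mse).sum() / (w.sum() + 1e-8)
```

In the paper the weighting is learnable and also incorporates a feature-level uncertainty term; here a single fixed entropy-based weight stands in for both, purely for illustration.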
Related papers
- Uncertainty for Active Learning on Graphs [70.44714133412592]
Uncertainty Sampling is an Active Learning strategy that aims to improve the data efficiency of machine learning models.
We benchmark Uncertainty Sampling beyond predictive uncertainty and highlight a significant performance gap to other Active Learning strategies.
We develop ground-truth Bayesian uncertainty estimates in terms of the data generating process and prove their effectiveness in guiding Uncertainty Sampling toward optimal queries.
arXiv Detail & Related papers (2024-05-02T16:50:47Z)
- Error-Driven Uncertainty Aware Training [7.702016079410588]
Error-Driven Uncertainty Aware Training aims to enhance the ability of neural classifiers to estimate their uncertainty correctly.
The EUAT approach operates during the model's training phase by selectively employing two loss functions depending on whether the training examples are correctly or incorrectly predicted.
We evaluate EUAT using diverse neural models and datasets in the image recognition domains considering both non-adversarial and adversarial settings.
arXiv Detail & Related papers (2024-05-02T11:48:14Z)
- Modeling the Uncertainty with Maximum Discrepant Students for Semi-supervised 2D Pose Estimation [57.17120203327993]
We propose a framework to estimate the quality of pseudo-labels in semi-supervised pose estimation tasks.
Our method improves the performance of semi-supervised pose estimation on three datasets.
arXiv Detail & Related papers (2023-11-03T08:11:06Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in its prediction for the sample.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Uncertainty-Aware Deep Co-training for Semi-supervised Medical Image Segmentation [4.935055133266873]
We propose a novel uncertainty-aware scheme to make models learn regions purposefully.
Specifically, we employ Monte Carlo Sampling as an estimation method to attain an uncertainty map.
In the backward process, we jointly optimize the unsupervised and supervised losses to accelerate the convergence of the network.
arXiv Detail & Related papers (2021-11-23T03:26:24Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety-critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- Exploring Uncertainty in Deep Learning for Construction of Prediction Intervals [27.569681578957645]
We explore the uncertainty in deep learning to construct prediction intervals.
We design a special loss function, which enables us to learn uncertainty without uncertainty label.
Our method correlates the construction of prediction intervals with the uncertainty estimation.
arXiv Detail & Related papers (2021-04-27T02:58:20Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions [121.10450359856242]
We develop a frequentist procedure that utilizes influence functions of a model's loss functional to construct a jackknife (or leave-one-out) estimator of predictive confidence intervals.
The DJ satisfies (1) and (2), is applicable to a wide range of deep learning models, is easy to implement, and can be applied in a post-hoc fashion without interfering with model training or compromising its accuracy.
arXiv Detail & Related papers (2020-06-29T13:36:52Z)
- Getting a CLUE: A Method for Explaining Uncertainty Estimates [30.367995696223726]
We propose a novel method for interpreting uncertainty estimates from differentiable probabilistic models.
Our method, Counterfactual Latent Uncertainty Explanations (CLUE), indicates how to change an input, while keeping it on the data manifold.
arXiv Detail & Related papers (2020-06-11T21:53:15Z)
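The CLUE idea above can be made concrete with a toy sketch: starting from an input's latent code, search for a nearby latent point whose decoded input the model is more certain about. The linear "decoder", "classifier", and finite-difference optimizer below are hypothetical stand-ins chosen for brevity; CLUE itself uses a deep generative model and a differentiable probabilistic model.

```python
import numpy as np

rng = np.random.default_rng(0)
W_dec = rng.normal(size=(5, 2))   # toy linear "decoder": z (2,) -> x (5,)
W_clf = rng.normal(size=(2, 5))   # toy linear classifier: x (5,) -> 2 logits

def entropy_at(z):
    # Predictive entropy of the toy classifier on the decoded input.
    logits = W_clf @ (W_dec @ z)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -(p * np.log(p + 1e-12)).sum()

def clue_sketch(z0, lam=0.05, lr=0.05, steps=300, eps=1e-4):
    # Minimize entropy(decode(z)) + lam * ||z - z0||^2 by finite-difference
    # gradient descent, keeping the best iterate seen so far. The distance
    # penalty keeps the counterfactual close to the original latent code.
    obj = lambda z: entropy_at(z) + lam * np.sum((z - z0) ** 2)
    z, best, best_obj = z0.copy(), z0.copy(), obj(z0)
    for _ in range(steps):
        grad = np.array([(obj(z + eps * e) - obj(z - eps * e)) / (2 * eps)
                         for e in np.eye(z.size)])
        z = z - lr * grad
        if obj(z) < best_obj:
            best, best_obj = z.copy(), obj(z)
    return best

z0 = np.array([0.1, -0.2])
z_cf = clue_sketch(z0)   # a nearby latent point with lower predictive entropy
```

The returned counterfactual trades off two terms: reducing the model's uncertainty while staying close to the original code, which keeps the decoded counterfactual on the (toy) data manifold.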
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.