Semi-Supervised Deep Regression with Uncertainty Consistency and
Variational Model Ensembling via Bayesian Neural Networks
- URL: http://arxiv.org/abs/2302.07579v1
- Date: Wed, 15 Feb 2023 10:40:51 GMT
- Title: Semi-Supervised Deep Regression with Uncertainty Consistency and
Variational Model Ensembling via Bayesian Neural Networks
- Authors: Weihang Dai, Xiaomeng Li, Kwang-Ting Cheng
- Abstract summary: We propose a novel approach to semi-supervised regression, namely Uncertainty-Consistent Variational Model Ensembling (UCVME).
Our consistency loss significantly improves uncertainty estimates and allows higher quality pseudo-labels to be assigned greater importance under heteroscedastic regression.
Experiments show that our method outperforms state-of-the-art alternatives on different tasks and can be competitive with supervised methods that use full labels.
- Score: 31.67508478764597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep regression is an important problem with numerous applications. These
range from computer vision tasks such as age estimation from photographs, to
medical tasks such as ejection fraction estimation from echocardiograms for
disease tracking. However, semi-supervised approaches for deep regression are
notably under-explored compared to classification and segmentation tasks.
Unlike classification tasks, which rely on thresholding functions for
generating class pseudo-labels, regression tasks use real number target
predictions directly as pseudo-labels, making them more sensitive to prediction
quality. In this work, we propose a novel approach to semi-supervised
regression, namely Uncertainty-Consistent Variational Model Ensembling (UCVME),
which improves training by generating high-quality pseudo-labels and
uncertainty estimates for heteroscedastic regression. Given that aleatoric
uncertainty is only dependent on input data by definition and should be equal
for the same inputs, we present a novel uncertainty consistency loss for
co-trained models. Our consistency loss significantly improves uncertainty
estimates and allows higher quality pseudo-labels to be assigned greater
importance under heteroscedastic regression. Furthermore, we introduce a novel
variational model ensembling approach to reduce prediction noise and generate
more robust pseudo-labels. We analytically show our method generates higher
quality targets for unlabeled data and further improves training. Experiments
show that our method outperforms state-of-the-art alternatives on different
tasks and can be competitive with supervised methods that use full labels. Our
code is available at https://github.com/xmed-lab/UCVME.
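The following is a minimal PyTorch sketch of the three ingredients described above: a heteroscedastic regression loss, an aleatoric-uncertainty consistency term for two co-trained models, and pseudo-label generation by averaging stochastic forward passes. The function names and the two-head model interface are illustrative assumptions for exposition, not the authors' API; the linked repository contains the actual implementation.

```python
import torch

def heteroscedastic_nll(mu, log_var, y):
    # Negative log-likelihood for heteroscedastic regression: residuals
    # are down-weighted where the predicted aleatoric variance is high.
    return (0.5 * torch.exp(-log_var) * (y - mu) ** 2 + 0.5 * log_var).mean()

def uncertainty_consistency(log_var_a, log_var_b):
    # Aleatoric uncertainty depends only on the input, so two co-trained
    # models should predict the same variance; penalise their disagreement.
    return ((log_var_a - log_var_b) ** 2).mean()

@torch.no_grad()
def variational_ensemble(model_a, model_b, x, n_passes=5):
    # Average several stochastic forward passes (e.g. with dropout kept
    # active) of both models to obtain lower-variance pseudo-labels.
    mus, log_vars = [], []
    for model in (model_a, model_b):
        model.train()  # keep dropout active for Monte Carlo sampling
        for _ in range(n_passes):
            mu, log_var = model(x)  # assumed two-head (mean, log-variance) output
            mus.append(mu)
            log_vars.append(log_var)
    return torch.stack(mus).mean(0), torch.stack(log_vars).mean(0)
```

On unlabeled data, the ensembled mean and log-variance can be reused as targets inside heteroscedastic_nll, so pseudo-labels with low predicted uncertainty automatically receive greater weight.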
Related papers
- Beyond the Norms: Detecting Prediction Errors in Regression Models [26.178065248948773]
This paper tackles the challenge of detecting unreliable behavior in regression algorithms.
We introduce the notion of unreliability in regression, i.e., when the output of the regressor exceeds a specified discrepancy (or error); a simple detector along these lines is sketched below.
We show empirical improvements in error detection for multiple regression tasks, consistently outperforming popular baseline approaches.
arXiv Detail & Related papers (2024-06-11T05:51:44Z)
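One simple way to operationalise this notion of unreliability: under a Gaussian predictive distribution with mean mu and scale sigma, flag an input when the probability that the true target deviates from the prediction by more than a discrepancy eps exceeds a risk level. This is a hedged illustration of the definition only, not the detector proposed in the paper.

```python
import torch

def unreliable(mu, sigma, eps, risk=0.5):
    # Probability that the target falls outside [mu - eps, mu + eps]
    # under a Normal(mu, sigma) predictive distribution.
    normal = torch.distributions.Normal(mu, sigma)
    p_exceed = 1.0 - (normal.cdf(mu + eps) - normal.cdf(mu - eps))
    return p_exceed > risk  # True = prediction flagged as unreliable
```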
- Variational Classification [51.2541371924591]
Treating inputs to the softmax layer as samples of a latent variable, our abstracted perspective reveals a potential inconsistency in standard training.
We induce a chosen latent distribution, instead of the implicit assumption found in a standard softmax layer.
We derive a variational objective to train the model, analogous to the evidence lower bound (ELBO) used to train variational auto-encoders.
arXiv Detail & Related papers (2023-05-17T17:47:19Z)
- How Reliable is Your Regression Model's Uncertainty Under Real-World Distribution Shifts? [46.05502630457458]
We propose a benchmark of 8 image-based regression datasets with different types of challenging distribution shifts.
We find that while methods are well calibrated when there is no distribution shift, they all become highly overconfident on many of the benchmark datasets.
arXiv Detail & Related papers (2023-02-07T18:54:39Z)
- Benchmarking common uncertainty estimation methods with histopathological images under domain shift and label noise [62.997667081978825]
In high-risk environments, deep learning models need to be able to judge their uncertainty and reject inputs when there is a significant chance of misclassification.
We conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole Slide Images.
We observe that ensembles of methods generally lead to better uncertainty estimates as well as increased robustness towards domain shifts and label noise (a minimal ensemble-based rejection scheme is sketched below).
arXiv Detail & Related papers (2023-01-03T11:34:36Z)
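A minimal sketch of the kind of ensemble-based rejection this finding supports: average the softmax outputs of several independently trained classifiers and abstain when the resulting confidence is low. The threshold and model interface are hypothetical, not the benchmark's exact protocol.

```python
import torch

@torch.no_grad()
def ensemble_reject(models, x, threshold=0.8):
    # Average class probabilities across ensemble members, then flag
    # inputs whose maximum class probability falls below the threshold.
    probs = torch.stack([m(x).softmax(dim=-1) for m in models]).mean(0)
    confidence, prediction = probs.max(dim=-1)
    return prediction, confidence < threshold  # True = reject the input
```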
- Semi-supervised Contrastive Outlier removal for Pseudo Expectation Maximization (SCOPE) [2.33877878310217]
We present a new approach to suppress confounding errors through a method we describe as Semi-supervised Contrastive Outlier removal for Pseudo Expectation Maximization (SCOPE).
Our results show that SCOPE greatly improves semi-supervised classification accuracy over a baseline and, when combined with consistency regularization, achieves the highest reported accuracy for the semi-supervised CIFAR-10 classification task using 250 and 4000 labeled samples.
arXiv Detail & Related papers (2022-06-28T19:32:50Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold (a minimal sketch follows below).
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
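The mechanism behind ATC fits in a few lines: choose the confidence threshold on held-out labeled source data so that the fraction of examples above it matches the observed source accuracy, then report the fraction of unlabeled target examples above that threshold. This sketch uses maximum softmax confidence for concreteness; variable names are illustrative.

```python
import numpy as np

def atc_predict_accuracy(src_conf, src_correct, tgt_conf):
    # src_conf: confidence scores on labeled source validation data
    # src_correct: boolean array marking correct source predictions
    # tgt_conf: confidence scores on unlabeled target data
    src_acc = src_correct.mean()
    # Pick t so that the fraction of source examples with confidence
    # above t equals the observed source accuracy.
    t = np.quantile(src_conf, 1.0 - src_acc)
    # Predicted target accuracy: fraction of target examples above t.
    return float((tgt_conf > t).mean())
```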
- X-model: Improving Data Efficiency in Deep Learning with A Minimax Model [78.55482897452417]
We aim at improving data efficiency for both classification and regression setups in deep learning.
To combine the best of both worlds, we propose a novel X-model.
X-model plays a minimax game between the feature extractor and task-specific heads.
arXiv Detail & Related papers (2021-10-09T13:56:48Z)
- Exploiting Sample Uncertainty for Domain Adaptive Person Re-Identification [137.9939571408506]
We estimate and exploit the credibility of the assigned pseudo-label of each sample to alleviate the influence of noisy labels (a generic weighting scheme is sketched below).
Our uncertainty-guided optimization brings significant improvement and achieves the state-of-the-art performance on benchmark datasets.
arXiv Detail & Related papers (2020-12-16T04:09:04Z)
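The summary does not spell out the weighting scheme, but a generic form of credibility-weighted pseudo-labeling looks like the following: convert each sample's uncertainty into a weight in [0, 1] and scale its loss accordingly. The exponential weighting here is a common heuristic, not the paper's exact formulation.

```python
import torch

def credibility_weighted_loss(per_sample_loss, uncertainty):
    # Down-weight samples whose pseudo-labels look unreliable;
    # exp(-u) maps an uncertainty estimate u >= 0 into a (0, 1] weight.
    weights = torch.exp(-uncertainty)
    return (weights * per_sample_loss).mean()
```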
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Mitigating Class Boundary Label Uncertainty to Reduce Both Model Bias and Variance [4.563176550691304]
We investigate a new approach to handle inaccuracy and uncertainty in the training data labels.
Our method can reduce both bias and variance by estimating the pointwise label uncertainty of the training set.
arXiv Detail & Related papers (2020-02-23T18:24:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.