Closer Look at the Uncertainty Estimation in Semantic Segmentation under
Distributional Shift
- URL: http://arxiv.org/abs/2106.00076v1
- Date: Mon, 31 May 2021 19:50:43 GMT
- Title: Closer Look at the Uncertainty Estimation in Semantic Segmentation under
Distributional Shift
- Authors: Sebastian Cygert, Bartłomiej Wróblewski, Karol Woźniak, Radosław Słowiński, Andrzej Czyżewski
- Abstract summary: Uncertainty estimation for the task of semantic segmentation is evaluated under a varying level of domain shift.
It was shown that simple color transformations already provide a strong baseline.
An ensemble of models was utilized in the self-training setting to improve pseudo-label generation.
- Score: 2.05617385614792
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While recent computer vision algorithms achieve impressive performance on
many benchmarks, they lack robustness - presented with an image from a
different distribution (e.g., weather or lighting conditions not considered
during training), they may produce an erroneous prediction. Therefore, it is
desirable that such a model report a reliable confidence measure. In this
work, uncertainty estimation for the task of semantic
segmentation is evaluated under a varying level of domain shift: in a
cross-dataset setting and when adapting a model trained on data from
simulation. It was shown that simple color transformations already provide a
strong baseline, comparable to using more sophisticated style-transfer data
augmentation. Further, by constructing an ensemble consisting of models using
different backbones and/or augmentation methods, it was possible to
significantly improve model performance in terms of overall accuracy and uncertainty
estimation under the domain shift setting. The Expected Calibration Error (ECE)
on challenging GTA to Cityscapes adaptation was reduced from 4.05 to the
competitive value of 1.1. Further, an ensemble of models was utilized in the
self-training setting to improve pseudo-label generation, which resulted
in a significant gain in the final model accuracy compared to standard
fine-tuning (without an ensemble).
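
The abstract's recipe combines two ingredients: averaging the per-pixel softmax outputs of an ensemble (models with different backbones and/or augmentations) and measuring calibration with the Expected Calibration Error (ECE). Below is a minimal NumPy sketch of both computations; the array shapes, the bin count, and the function names are illustrative assumptions, not the authors' evaluation code.

```python
import numpy as np

def ensemble_probs(prob_maps):
    """Average per-pixel class probabilities over ensemble members.

    prob_maps: list of arrays of shape (C, H, W), each one model's softmax output.
    """
    return np.mean(np.stack(prob_maps, axis=0), axis=0)

def expected_calibration_error(probs, labels, n_bins=15):
    """ECE over all pixels of an image (or a batch flattened beforehand).

    probs: (C, H, W) class probabilities; labels: (H, W) ground-truth class ids.
    """
    confidence = probs.max(axis=0).ravel()       # per-pixel max probability
    prediction = probs.argmax(axis=0).ravel()    # per-pixel predicted class
    correct = (prediction == labels.ravel()).astype(float)

    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidence > lo) & (confidence <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidence[in_bin].mean())
            ece += in_bin.mean() * gap           # weight gap by fraction of pixels in bin
    return ece
```

In practice, pixels carrying an ignore label (e.g., 255 in Cityscapes) would be masked out before accumulating the per-bin statistics.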
Related papers
- On conditional diffusion models for PDE simulations [53.01911265639582]
We study score-based diffusion models for forecasting and assimilation of sparse observations.
We propose an autoregressive sampling approach that significantly improves performance in forecasting.
We also propose a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths.
arXiv Detail & Related papers (2024-10-21T18:31:04Z)
- Calibration of Time-Series Forecasting: Detecting and Adapting Context-Driven Distribution Shift [28.73747033245012]
We introduce a universal calibration methodology for the detection and adaptation of context-driven distribution shifts.
A novel CDS detector, termed the "residual-based CDS detector" or "Reconditionor", quantifies the model's vulnerability to CDS.
A high Reconditionor score indicates a severe susceptibility, thereby necessitating model adaptation.
arXiv Detail & Related papers (2023-10-23T11:58:01Z)
- Multiclass Alignment of Confidence and Certainty for Network Calibration [10.15706847741555]
Recent studies reveal that deep neural networks (DNNs) are prone to making overconfident predictions.
We propose a new train-time calibration method, which features a simple, plug-and-play auxiliary loss known as multi-class alignment of predictive mean confidence and predictive certainty (MACC).
Our method achieves state-of-the-art calibration performance for both in-domain and out-domain predictions.
arXiv Detail & Related papers (2023-09-06T00:56:24Z)
- Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z)
- Estimating Model Performance under Domain Shifts with Class-Specific Confidence Scores [25.162667593654206]
We introduce class-wise calibration within the framework of performance estimation for imbalanced datasets.
We conduct experiments on four tasks and find the proposed modifications consistently improve the estimation accuracy for imbalanced datasets.
arXiv Detail & Related papers (2022-07-20T15:04:32Z)
- Performance Prediction Under Dataset Shift [1.1602089225841632]
We study the generalization capabilities of various performance prediction models to new domains by learning on generated synthetic perturbations.
We propose a natural and effortless uncertainty estimation of the predicted accuracy that ensures reliable use of performance predictors.
arXiv Detail & Related papers (2022-06-21T19:40:58Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that despite its simplicity DoC consistently outperforms other quantifications of distributional difference.
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- Learning Prediction Intervals for Model Performance [1.433758865948252]
We propose a method to compute prediction intervals for model performance.
We evaluate our approach across a wide range of drift conditions and show substantial improvement over competitive baselines.
arXiv Detail & Related papers (2020-12-15T21:32:03Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Evaluating Prediction-Time Batch Normalization for Robustness under Covariate Shift [81.74795324629712]
We evaluate prediction-time batch normalization, which significantly improves model accuracy and calibration under covariate shift (a minimal sketch of the idea appears after this list).
We show that prediction-time batch normalization provides complementary benefits to existing state-of-the-art approaches for improving robustness.
The method has mixed results when used alongside pre-training, and does not seem to perform as well under more natural types of dataset shift.
arXiv Detail & Related papers (2020-06-19T05:08:43Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated using a layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
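
As noted in the prediction-time batch normalization entry above, the technique is simple: at test time, normalize activations with statistics computed from the incoming batch of shifted-domain images rather than the running averages collected during training. The PyTorch sketch below illustrates that general idea; it assumes a standard BatchNorm-based segmentation network and is not the referenced paper's implementation.

```python
import torch
import torch.nn as nn

def enable_prediction_time_bn(model: nn.Module) -> nn.Module:
    """Switch only the BatchNorm layers to train mode so they normalize with
    the current batch's statistics (dropout etc. stay in eval mode)."""
    model.eval()
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            module.train()  # also updates running statistics as a side effect
    return model

@torch.no_grad()
def predict_with_batch_stats(model: nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Per-pixel class probabilities for a batch of target-domain images,
    computed with test-batch BatchNorm statistics."""
    enable_prediction_time_bn(model)
    logits = model(images)              # expected shape (N, num_classes, H, W)
    return torch.softmax(logits, dim=1)
```

Note that a reasonably sized test batch is needed for this to help; with very small batches the estimated statistics become unreliable.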