Uncertainty Estimation in Instance Segmentation with Star-convex Shapes
- URL: http://arxiv.org/abs/2309.10513v1
- Date: Tue, 19 Sep 2023 10:49:33 GMT
- Title: Uncertainty Estimation in Instance Segmentation with Star-convex Shapes
- Authors: Qasim M. K. Siddiqui, Sebastian Starke and Peter Steinbach
- Abstract summary: Deep neural network-based algorithms often exhibit incorrect predictions with unwarranted confidence levels.
Our study addresses the challenge of estimating spatial certainty associated with the location of instances with star-convex shapes.
Our study demonstrates that combining spatial and fractional certainty scores yields better-calibrated estimates than either score alone.
- Score: 4.197316670989004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Instance segmentation has witnessed promising advancements through deep
neural network-based algorithms. However, these models often exhibit incorrect
predictions with unwarranted confidence levels. Consequently, evaluating
prediction uncertainty becomes critical for informed decision-making. Existing
methods primarily focus on quantifying uncertainty in classification or
regression tasks, lacking emphasis on instance segmentation. Our research
addresses the challenge of estimating spatial certainty associated with the
location of instances with star-convex shapes. Two distinct clustering
approaches are evaluated which compute spatial and fractional certainty per
instance employing samples by the Monte-Carlo Dropout or Deep Ensemble
technique. Our study demonstrates that combining spatial and fractional
certainty scores yields improved calibrated estimation over individual
certainty scores. Notably, our experimental results show that the Deep Ensemble
technique alongside our novel radial clustering approach proves to be an
effective strategy. Our findings emphasize the significance of evaluating the
calibration of estimated certainties for model reliability and decision-making.
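As a rough illustration of the sampling idea in the abstract, the sketch below assumes a hypothetical stochastic `predict` callable (e.g. a network with dropout kept active at inference) and shows one plausible way to turn repeated mask samples into per-instance fractional and spatial certainty scores; it is a simplified stand-in, not the paper's exact clustering procedure.

```python
import numpy as np

def mc_dropout_masks(predict, image, n_samples=20):
    """Collect binary instance masks from repeated stochastic forward
    passes (Monte-Carlo Dropout). `predict` is a hypothetical callable
    returning a boolean mask for one instance."""
    return np.stack([predict(image) for _ in range(n_samples)])

def fractional_certainty(masks):
    """Fraction of samples in which each pixel is assigned to the
    instance, averaged over the union region: 1.0 means all samples
    agree perfectly on the instance's extent."""
    freq = masks.mean(axis=0)          # per-pixel agreement in [0, 1]
    union = masks.any(axis=0)
    return freq[union].mean()

def spatial_certainty(masks):
    """Spread of the instance centroid across samples: lower spread
    means higher certainty about the instance's location."""
    centroids = np.array([np.argwhere(m).mean(axis=0) for m in masks])
    spread = centroids.std(axis=0).mean()
    return 1.0 / (1.0 + spread)        # map spread to a (0, 1] score
```

A combined score could then be any calibrated aggregation of the two values, which is the direction the abstract's finding points in.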
Related papers
- Uncertainty Quantification in Seismic Inversion Through Integrated Importance Sampling and Ensemble Methods [2.2530496464901106]
In deep learning-based seismic inversion, uncertainty arises from various sources, including data noise, neural network design and training, and inherent data limitations.
This study introduces a novel approach to uncertainty quantification in seismic inversion by integrating ensemble methods with importance sampling.
arXiv Detail & Related papers (2024-09-10T19:53:12Z) - Towards Calibrated Deep Clustering Network [60.71776081164377]
In deep clustering, the estimated confidence for a sample belonging to a particular cluster greatly exceeds its actual prediction accuracy.
We propose a novel dual-head (calibration head and clustering head) deep clustering model that can effectively calibrate the estimated confidence and the actual accuracy.
Extensive experiments demonstrate that the proposed calibrated deep clustering model not only surpasses state-of-the-art deep clustering methods by a factor of ten in expected calibration error but also significantly outperforms them in clustering accuracy.
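Since calibration quality in these papers is typically measured by expected calibration error, a minimal reference implementation of the standard binning formulation of ECE (a generic sketch, not this paper's code) may help make the metric concrete:

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Expected calibration error: bin predictions by confidence and
    average the |accuracy - confidence| gap, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap   # weight by fraction of samples
    return ece
```

A model that predicts 90% confidence but is right only half the time incurs an ECE of 0.4; a perfectly calibrated model scores 0.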
arXiv Detail & Related papers (2024-03-04T11:23:40Z) - The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z) - Uncertainty Quantification for Traffic Forecasting: A Unified Approach [21.556559649467328]
Uncertainty is an essential consideration for time series forecasting tasks.
In this work, we focus on quantifying the uncertainty of traffic forecasting.
We develop Deep Spatio-Temporal Uncertainty Quantification (DeepSTUQ), which can estimate both aleatoric and epistemic uncertainty.
arXiv Detail & Related papers (2022-08-11T15:21:53Z) - Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
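For the ensemble-based family mentioned above, the epistemic signal is commonly read off the disagreement between independently trained members. The sketch below is a generic illustration of that mean/variance recipe with hypothetical `model` callables, not the specific framework proposed in the paper:

```python
import numpy as np

def ensemble_predictions(models, x):
    """Stack per-model probability vectors from independently trained
    networks (the deep-ensemble recipe). Each `model` is a hypothetical
    callable returning class probabilities for input x."""
    return np.stack([m(x) for m in models])

def ensemble_uncertainty(probs):
    """Mean prediction plus a simple epistemic proxy: the variance of
    member predictions around the ensemble mean."""
    mean = probs.mean(axis=0)
    var = probs.var(axis=0)
    return mean, var
```

When all members agree the variance collapses to zero, so high variance flags inputs on which the ensemble is epistemically uncertain.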
arXiv Detail & Related papers (2021-10-13T01:23:48Z) - Gradient-Based Quantification of Epistemic Uncertainty for Deep Object Detectors [8.029049649310213]
We introduce novel gradient-based uncertainty metrics and investigate them for different object detection architectures.
Experiments show significant improvements in true positive / false positive discrimination and prediction of intersection over union.
We also find improvement over Monte-Carlo dropout uncertainty metrics and further significant boosts by aggregating different sources of uncertainty metrics.
arXiv Detail & Related papers (2021-07-09T16:04:11Z) - Uncertainty quantification for distributed regression [2.28438857884398]
We propose a fully data-driven approach to quantify uncertainty of the averaged estimator.
Namely, we construct simultaneous element-wise confidence bands for the predictions yielded by the averaged estimator on a given deterministic prediction set.
As a by-product of our analysis we also obtain a sup-norm consistency result for the divide-and-conquer Kernel Ridge Regression.
arXiv Detail & Related papers (2021-05-24T17:33:19Z) - The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for the estimation of a signal and of its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
arXiv Detail & Related papers (2020-11-03T12:11:27Z) - Probabilistic Deep Learning for Instance Segmentation [9.62543698736491]
We propose a generic method to obtain model-inherent uncertainty estimates within proposal-free instance segmentation models.
We evaluate our method on the BBBC010 C. elegans dataset, where it yields competitive performance.
arXiv Detail & Related papers (2020-08-24T19:51:48Z) - Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z) - Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning [70.72363097550483]
In this study, we focus on in-domain uncertainty for image classification.
To provide more insight, we introduce the deep ensemble equivalent score (DEE).
arXiv Detail & Related papers (2020-02-15T23:28:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.