Uncertainty estimation in Deep Learning for Panoptic segmentation
- URL: http://arxiv.org/abs/2304.02098v2
- Date: Sun, 8 Sep 2024 18:54:24 GMT
- Title: Uncertainty estimation in Deep Learning for Panoptic segmentation
- Authors: Michael Smith, Frank Ferrie
- Abstract summary: We show how ensemble-based uncertainty estimation approaches can be used in the panoptic segmentation domain.
Results are demonstrated on the COCO, KITTI-STEP and VIPER datasets.
- Score: 0.46040036610482665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As deep learning-based computer vision algorithms continue to advance the state of the art, their robustness to real-world data continues to be an issue, making it difficult to bring an algorithm from the lab to the real world. Ensemble-based uncertainty estimation approaches such as Monte Carlo Dropout have been successfully used in many applications in an attempt to address this robustness issue. Unfortunately, it is not always clear if such ensemble-based approaches can be applied to a new problem domain. This is the case with panoptic segmentation, where the structure of the problem and architectures designed to solve it means that unlike image classification or even semantic segmentation, the typical solution of using a mean across samples cannot be directly applied. In this paper, we demonstrate how ensemble-based uncertainty estimation approaches such as Monte Carlo Dropout can be used in the panoptic segmentation domain with no changes to an existing network, providing both improved performance and more importantly a better measure of uncertainty for predictions made by the network. Results are demonstrated quantitatively and qualitatively on the COCO, KITTI-STEP and VIPER datasets.
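The ensemble-mean procedure the abstract contrasts with can be sketched as follows: a minimal NumPy illustration of Monte Carlo Dropout aggregation for a dense prediction task, not the paper's panoptic-specific method, with all names illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mc_dropout_uncertainty(prob_samples):
    """Aggregate T stochastic forward passes (dropout kept active at
    test time) into a mean prediction and a per-pixel entropy map.

    prob_samples: (T, H, W, C) softmax outputs, one per forward pass.
    """
    mean_probs = prob_samples.mean(axis=0)                              # (H, W, C)
    entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)   # (H, W)
    prediction = mean_probs.argmax(axis=-1)                             # (H, W)
    return prediction, entropy

# Toy stand-in for 8 stochastic passes over a 4x4 image with 3 classes.
logits = rng.normal(size=(8, 4, 4, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
pred, unc = mc_dropout_uncertainty(probs)
```

It is exactly this direct mean across samples that does not carry over to panoptic outputs, which is the gap the paper addresses.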
Related papers
- Malicious Internet Entity Detection Using Local Graph Inference [0.4893345190925178]
Detection of malicious behavior in a large network is a challenging problem for machine learning in computer security.
Current cybersecurity-tailored approaches are still limited in expressivity, and methods successful in other domains do not scale well to large volumes of data.
This work proposes a new perspective for learning from graph data that is modeling network entity interactions as a large heterogeneous graph.
arXiv Detail & Related papers (2024-08-06T16:35:25Z)
- Digging Into Uncertainty-based Pseudo-label for Robust Stereo Matching [39.959000340261625]
We propose to dig into uncertainty estimation for robust stereo matching.
An uncertainty-based pseudo-label is proposed to adapt the pre-trained model to the new domain.
Our method shows strong cross-domain, adaptation, and joint generalization and obtains 1st place on the stereo task of the Robust Vision Challenge 2020.
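A generic form of uncertainty-gated pseudo-labeling can be sketched as follows; this is not necessarily the exact rule used in the paper, and `tau` and the ignore value are illustrative.

```python
import numpy as np

def select_pseudo_labels(mean_probs, entropy, tau):
    """Keep a pseudo-label only where predictive uncertainty is below
    tau; uncertain pixels get the ignore label -1 and are skipped when
    adapting the pre-trained model to the new domain.
    """
    labels = mean_probs.argmax(axis=-1)
    labels[entropy >= tau] = -1
    return labels

# One confident pixel and one near-uniform (uncertain) pixel.
mean_probs = np.array([[[0.90, 0.05, 0.05],
                        [0.34, 0.33, 0.33]]])          # (1, 2, 3)
entropy = -(mean_probs * np.log(mean_probs)).sum(axis=-1)
labels = select_pseudo_labels(mean_probs, entropy, tau=0.5)
```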
arXiv Detail & Related papers (2023-07-31T09:11:31Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce Fisher Information Matrix (FIM) to measure the informativeness of evidence carried by each sample, according to which we can dynamically reweight the objective loss terms to make the network more focused on the representation learning of uncertain classes.
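The evidential side can be illustrated with the standard subjective-logic quantities that the proposed I-EDL method builds on; the Fisher-information reweighting itself is not shown here, only the base Dirichlet/evidence bookkeeping.

```python
import numpy as np

def evidential_uncertainty(evidence):
    """Standard evidential deep learning outputs: with per-class
    evidence e >= 0, Dirichlet parameters are alpha = e + 1,
    expected probabilities alpha / S, and uncertainty u = K / S,
    where S = sum(alpha) and K is the number of classes.
    """
    alpha = evidence + 1.0
    S = alpha.sum(axis=-1, keepdims=True)
    probs = alpha / S
    u = evidence.shape[-1] / S.squeeze(-1)
    return probs, u

# Strong evidence for class 0 vs. no evidence at all (K = 3).
evidence = np.array([[10.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0]])
probs, u = evidential_uncertainty(evidence)
```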
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Modeling Multimodal Aleatoric Uncertainty in Segmentation with Mixture of Stochastic Experts [24.216869988183092]
We focus on capturing the data-inherent uncertainty (aka aleatoric uncertainty) in segmentation, typically when ambiguities exist in input images.
We propose a novel mixture of experts (MoSE) model, where each expert network estimates a distinct mode of aleatoric uncertainty.
We develop a Wasserstein-like loss that directly minimizes the distribution distance between the MoSE and ground truth annotations.
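In one dimension, a Wasserstein distance between discrete distributions reduces to the area between their CDFs, which gives a compact picture of the kind of distribution distance being minimized; this is a simplified 1-D analogue, not the set-valued loss used for MoSE.

```python
import numpy as np

def wasserstein_1d(p, q, support):
    """W1 between two discrete distributions on a shared, sorted 1-D
    support: integrate |CDF_p - CDF_q| over the support."""
    cdf_gap = np.abs(np.cumsum(p) - np.cumsum(q))[:-1]
    return float(np.sum(cdf_gap * np.diff(support)))

# Point mass at 0 vs. point mass at 1: distance is exactly 1.
support = np.array([0.0, 1.0])
w = wasserstein_1d(np.array([1.0, 0.0]), np.array([0.0, 1.0]), support)
```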
arXiv Detail & Related papers (2022-12-14T16:48:21Z)
- On Leave-One-Out Conditional Mutual Information For Generalization [122.2734338600665]
We derive information theoretic generalization bounds for supervised learning algorithms based on a new measure of leave-one-out conditional mutual information (loo-CMI)
Contrary to other CMI bounds, our loo-CMI bounds can be computed easily and can be interpreted in connection to other notions such as classical leave-one-out cross-validation.
We empirically validate the quality of the bound by evaluating its predicted generalization gap in scenarios for deep learning.
arXiv Detail & Related papers (2022-07-01T17:58:29Z)
- Acquisition-invariant brain MRI segmentation with informative uncertainties [3.46329153611365]
Post-hoc multi-site correction methods exist but have strong assumptions that often do not hold in real-world scenarios.
This body of work showcases such an algorithm, that can become robust to the physics of acquisition in the context of segmentation tasks.
We demonstrate that our method not only generalises to complete holdout datasets, preserving segmentation quality, but does so while also accounting for site-specific sequence choices.
arXiv Detail & Related papers (2021-11-07T13:58:04Z)
- On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z)
- Probabilistic Deep Learning for Instance Segmentation [9.62543698736491]
We propose a generic method to obtain model-inherent uncertainty estimates within proposal-free instance segmentation models.
We evaluate our method on the BBBC010 C. elegans dataset, where it yields competitive performance.
arXiv Detail & Related papers (2020-08-24T19:51:48Z)
- Differentiable Causal Discovery from Interventional Data [141.41931444927184]
We propose a theoretically-grounded method based on neural networks that can leverage interventional data.
We show that our approach compares favorably to the state of the art in a variety of settings.
arXiv Detail & Related papers (2020-07-03T15:19:17Z)
- Uncertainty-Aware Consistency Regularization for Cross-Domain Semantic Segmentation [63.75774438196315]
Unsupervised domain adaptation (UDA) aims to adapt existing models of the source domain to a new target domain with only unlabeled data.
Most existing methods suffer from noticeable negative transfer resulting from either the error-prone discriminator network or the unreasonable teacher model.
We propose an uncertainty-aware consistency regularization method for cross-domain semantic segmentation.
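A generic uncertainty-weighted consistency term looks like the following sketch, with an illustrative exp(-entropy) weighting; the paper's exact formulation may differ.

```python
import numpy as np

def uncertainty_weighted_consistency(student_probs, teacher_probs, uncertainty):
    """Per-pixel student/teacher consistency loss, down-weighted where
    the teacher is uncertain so noisy supervision contributes less.

    *_probs:     (H, W, C) softmax outputs
    uncertainty: (H, W), e.g. the teacher's predictive entropy
    """
    sq_err = ((student_probs - teacher_probs) ** 2).sum(axis=-1)  # (H, W)
    weights = np.exp(-uncertainty)                                # in (0, 1]
    return float((weights * sq_err).mean())

rng = np.random.default_rng(1)
teacher = rng.random((2, 2, 3))
teacher /= teacher.sum(axis=-1, keepdims=True)
entropy = -(teacher * np.log(teacher)).sum(axis=-1)
loss_same = uncertainty_weighted_consistency(teacher, teacher, entropy)
loss_diff = uncertainty_weighted_consistency(np.roll(teacher, 1, axis=-1), teacher, entropy)
```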
arXiv Detail & Related papers (2020-04-19T15:30:26Z)
- Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning [70.01650994156797]
Off-policy evaluation of sequential decision policies from observational data is necessary in batch reinforcement learning settings such as education and healthcare.
We develop an approach that estimates the bounds of a given policy.
We prove convergence to the sharp bounds as we collect more confounded data.
arXiv Detail & Related papers (2020-02-11T16:18:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.