Latent Discriminant deterministic Uncertainty
        - URL: http://arxiv.org/abs/2207.10130v1
 - Date: Wed, 20 Jul 2022 18:18:40 GMT
 - Title: Latent Discriminant deterministic Uncertainty
 - Authors: Gianni Franchi, Xuanlong Yu, Andrei Bursuc, Emanuel Aldea, Severine
  Dubuisson, David Filliat
 - Abstract summary: We propose a scalable and effective Deterministic Uncertainty Method (DUM) for high-resolution semantic segmentation.
Our approach achieves competitive results over Deep Ensembles, the state-of-the-art for uncertainty prediction, on image classification, semantic segmentation and monocular depth estimation tasks.
 - Score: 11.257956169255193
 - License: http://creativecommons.org/licenses/by/4.0/
 - Abstract:   Predictive uncertainty estimation is essential for deploying Deep Neural
Networks in real-world autonomous systems. However, most successful approaches
are computationally intensive. In this work, we attempt to address these
challenges in the context of autonomous driving perception tasks. Recently
proposed Deterministic Uncertainty Methods (DUM) can only partially meet such
requirements as their scalability to complex computer vision tasks is not
obvious. In this work we advance a scalable and effective DUM for
high-resolution semantic segmentation, that relaxes the Lipschitz constraint
typically hindering practicality of such architectures. We learn a discriminant
latent space by leveraging a distinction maximization layer over an
arbitrarily-sized set of trainable prototypes. Our approach achieves
competitive results over Deep Ensembles, the state-of-the-art for uncertainty
prediction, on image classification, segmentation and monocular depth
estimation tasks. Our code is available at https://github.com/ENSTA-U2IS/LDU
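The abstract describes learning a discriminant latent space via a distinction maximization layer over trainable prototypes, but does not give the layer's exact form. A minimal NumPy sketch of one plausible formulation, where logits are negative distances to prototypes and uncertainty is the entropy over prototype similarities (the function names, prototype count, and scale are assumptions, not the paper's implementation):

```python
import numpy as np

def dm_layer(z, prototypes, scale=1.0):
    """Distinction-maximization-style layer: logits are negative
    Euclidean distances between features z (B, D) and K trainable
    prototypes (K, D), scaled by a temperature."""
    # pairwise distances via broadcasting: (B, K)
    d = np.linalg.norm(z[:, None, :] - prototypes[None, :, :], axis=-1)
    return -scale * d  # higher logit = closer to a prototype

def prototype_entropy(logits):
    """Per-sample uncertainty as the entropy of the softmax over
    prototype similarities; high when no prototype is clearly closest."""
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    return -(p * np.log(p + 1e-12)).sum(axis=1)

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 16))       # batch of latent features
protos = rng.normal(size=(8, 16))  # 8 hypothetical trainable prototypes
logits = dm_layer(z, protos)
u = prototype_entropy(logits)
```

In a real deterministic-uncertainty setup the prototypes would be trained jointly with the backbone; this sketch only shows the inference-time computation.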
 
       
      
        Related papers
        - A Comparative Study on Multi-task Uncertainty Quantification in Semantic Segmentation and Monocular Depth Estimation [9.52671061354338]
We evaluate Monte Carlo Dropout, Deep Sub-Ensembles, and Deep Ensembles for joint semantic segmentation and monocular depth estimation.
Deep Ensembles stand out as the preferred choice, particularly in out-of-domain scenarios.
We highlight the impact of employing different uncertainty thresholds to classify pixels as certain or uncertain.
arXiv  Detail & Related papers  (2024-05-27T12:12:26Z)
- Hierarchical Invariance for Robust and Interpretable Vision Tasks at Larger Scales [54.78115855552886]
We show how to construct over-complete invariants with a Convolutional Neural Network (CNN)-like hierarchical architecture.
With the over-completeness, discriminative features w.r.t. the task can be adaptively formed in a Neural Architecture Search (NAS)-like manner.
For robust and interpretable vision tasks at larger scales, hierarchical invariant representation can be considered as an effective alternative to traditional CNN and invariants.
arXiv  Detail & Related papers  (2024-02-23T16:50:07Z)
- The Boundaries of Verifiable Accuracy, Robustness, and Generalisation in Deep Learning [71.14237199051276]
We consider the classical distribution-agnostic framework and algorithms that minimise empirical risks.
We show that there is a large family of tasks for which computing and verifying ideal stable and accurate neural networks is extremely challenging.
arXiv  Detail & Related papers  (2023-09-13T16:33:27Z)
- Single Image Depth Prediction Made Better: A Multivariate Gaussian Take [163.14849753700682]
We introduce an approach that performs continuous modeling of per-pixel depth.
Our method (named MG) ranks among the top entries on the KITTI depth-prediction benchmark leaderboard.
arXiv  Detail & Related papers  (2023-03-31T16:01:03Z)
- DUDES: Deep Uncertainty Distillation using Ensembles for Semantic Segmentation [11.099838952805325]
Quantifying predictive uncertainty is a promising way to open up the use of deep neural networks for safety-critical applications.
We present a novel approach for efficient and reliable uncertainty estimation, which we call Deep Uncertainty Distillation using Ensembles (DUDES).
DUDES applies student-teacher distillation with a Deep Ensemble to accurately approximate predictive uncertainties with a single forward pass.
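The summary above says the student approximates the ensemble's predictive uncertainty in a single pass. A minimal NumPy sketch of one plausible distillation target and loss, assuming the teacher uncertainty is the entropy of the averaged ensemble softmax and the student regresses it with an MSE (these specific choices are assumptions, not DUDES's exact formulation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def teacher_uncertainty(member_logits):
    """member_logits: (M, H, W, C) logits from M ensemble members.
    Target is the per-pixel entropy of the averaged softmax."""
    p = softmax(member_logits).mean(axis=0)        # (H, W, C)
    return -(p * np.log(p + 1e-12)).sum(axis=-1)   # (H, W)

def distillation_loss(student_u, teacher_u):
    """Student's uncertainty head regresses the teacher map (MSE)."""
    return float(np.mean((student_u - teacher_u) ** 2))

rng = np.random.default_rng(1)
ens = rng.normal(size=(5, 8, 8, 3))  # 5 members, 8x8 image, 3 classes
t = teacher_uncertainty(ens)
loss = distillation_loss(np.zeros_like(t), t)
```

At deployment only the student runs, so the ensemble's cost is paid once at training time.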
arXiv  Detail & Related papers  (2023-03-17T08:56:27Z)
- Pixel-wise Gradient Uncertainty for Convolutional Neural Networks applied to Out-of-Distribution Segmentation [0.43512163406552007]
We present a method for obtaining uncertainty scores from pixel-wise loss gradients which can be computed efficiently during inference.
Our experiments show the ability of our method to identify wrong pixel classifications and to estimate prediction quality at negligible computational overhead.
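Pixel-wise loss gradients can indeed be obtained cheaply when the predicted class serves as pseudo-label: for cross-entropy, the gradient with respect to the logits has a closed form. A NumPy sketch of one plausible instantiation of such a score (the function name and the choice of gradient norm are assumptions, not the paper's exact method):

```python
import numpy as np

def pixel_gradient_uncertainty(logits):
    """Per-pixel score from the cross-entropy loss gradient w.r.t.
    the logits (H, W, C), using argmax as pseudo-label: the gradient
    is softmax(logits) - onehot(argmax), available in closed form."""
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    p = e / e.sum(axis=-1, keepdims=True)          # (H, W, C)
    g = p.copy()
    idx = p.argmax(axis=-1)                        # pseudo-labels (H, W)
    H, W = idx.shape
    g[np.arange(H)[:, None], np.arange(W)[None, :], idx] -= 1.0
    return np.linalg.norm(g, axis=-1)              # (H, W)

# Confident pixels give near-zero gradients; ambiguous ones do not.
confident = np.zeros((2, 2, 4)); confident[..., 0] = 50.0
ambiguous = np.zeros((2, 2, 4))  # uniform logits
u_conf = pixel_gradient_uncertainty(confident)
u_amb = pixel_gradient_uncertainty(ambiguous)
```

Because no backward pass through the network is needed, the overhead over a plain forward pass is negligible, matching the claim above.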
arXiv  Detail & Related papers  (2023-03-13T08:37:59Z)
- Modeling Multimodal Aleatoric Uncertainty in Segmentation with Mixture of Stochastic Experts [24.216869988183092]
We focus on capturing the data-inherent uncertainty (aka aleatoric uncertainty) in segmentation, typically when ambiguities exist in input images.
We propose a novel mixture of experts (MoSE) model, where each expert network estimates a distinct mode of aleatoric uncertainty.
We develop a Wasserstein-like loss that directly minimizes the distribution distance between the MoSE and ground truth annotations.
arXiv  Detail & Related papers  (2022-12-14T16:48:21Z)
- Uncertainty-aware LiDAR Panoptic Segmentation [21.89063036529791]
We introduce a novel approach for solving the task of uncertainty-aware panoptic segmentation using LiDAR point clouds.
Our proposed EvLPSNet network is the first to solve this task efficiently in a sampling-free manner.
We provide several strong baselines combining state-of-the-art panoptic segmentation networks with sampling-free uncertainty estimation techniques.
arXiv  Detail & Related papers  (2022-10-10T07:54:57Z)
- BayesCap: Bayesian Identity Cap for Calibrated Uncertainty in Frozen Neural Networks [50.15201777970128]
We propose BayesCap that learns a Bayesian identity mapping for the frozen model, allowing uncertainty estimation.
BayesCap is a memory-efficient method that can be trained on a small fraction of the original dataset.
We show the efficacy of our method on a wide variety of tasks with a diverse set of architectures.
arXiv  Detail & Related papers  (2022-07-14T12:50:09Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
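The dissimilarity function behind this disagreement-based measure is not specified in the summary. A minimal NumPy sketch using total variation distance between two models' per-pixel class distributions (an assumed choice of dissimilarity, for illustration only):

```python
import numpy as np

def disagreement(p1, p2):
    """Per-pixel uncertainty as total variation distance between two
    predicted class distributions (H, W, C): 0 when the predictions
    agree exactly, 1 when they put all mass on different classes."""
    return 0.5 * np.abs(p1 - p2).sum(axis=-1)

agree = np.zeros((2, 2, 3)); agree[..., 0] = 1.0   # both predict class 0
other = np.zeros((2, 2, 3)); other[..., 1] = 1.0   # second predicts class 1
d_same = disagreement(agree, agree)
d_diff = disagreement(agree, other)
```

Only two forward passes (or two lightweight heads) are needed at inference, consistent with the low computational cost claimed above.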
arXiv  Detail & Related papers  (2021-05-28T09:23:05Z)
- Diversity inducing Information Bottleneck in Model Ensembles [73.80615604822435]
In this paper, we target the problem of generating effective ensembles of neural networks by encouraging diversity in prediction.
We explicitly optimize a diversity inducing adversarial loss for learning latent variables and thereby obtain diversity in the output predictions necessary for modeling multi-modal data.
Compared to the most competitive baselines, we show significant improvements in classification accuracy under a shift in the data distribution.
arXiv  Detail & Related papers  (2020-03-10T03:10:41Z) 
        This list is automatically generated from the titles and abstracts of the papers in this site.
       
     
           This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.