Introspective Robot Perception using Smoothed Predictions from Bayesian
Neural Networks
- URL: http://arxiv.org/abs/2109.12869v1
- Date: Mon, 27 Sep 2021 08:40:19 GMT
- Title: Introspective Robot Perception using Smoothed Predictions from Bayesian
Neural Networks
- Authors: Jianxiang Feng, Maximilian Durner, Zoltan-Csaba Marton, Ferenc
Balint-Benczedi, and Rudolph Triebel
- Abstract summary: This work focuses on improving uncertainty estimation in the field of object classification from RGB images.
We employ a Bayesian Neural Network (BNN) and evaluate two practical inference techniques to obtain better uncertainty estimates.
We show a performance increase using more reliable uncertainty estimates as unary potentials within a Conditional Random Field.
- Score: 17.162534445528827
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work focuses on improving uncertainty estimation in the field of object
classification from RGB images and demonstrates its benefits in two robotic
applications. We employ a Bayesian Neural Network (BNN) and evaluate two practical inference
techniques to obtain better uncertainty estimates, namely Concrete Dropout
(CDP) and Kronecker-factored Laplace Approximation (LAP). We show a performance
increase using more reliable uncertainty estimates as unary potentials within a
Conditional Random Field (CRF), which is able to incorporate contextual
information as well. Furthermore, the obtained uncertainties are exploited to
achieve domain adaptation in a semi-supervised manner, which requires less
manual effort in annotating data. We evaluate our approach on two public
benchmark datasets that are relevant for robot perception tasks.
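The listing includes no code, but the inference step the paper builds on can be sketched. Below is a minimal, hypothetical PyTorch illustration: plain Monte Carlo dropout stands in for the paper's Concrete Dropout and Kronecker-factored Laplace approximation (both yield softmax samples that are averaged the same way), and all function names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def mc_dropout_predict(model, x, n_samples=30):
    """Approximate BNN inference by sampling a dropout network at test time.

    Plain MC dropout is used here as a stand-in for the paper's Concrete
    Dropout / Kronecker-factored Laplace approximation.
    """
    model.train()  # keep dropout layers active during inference
    with torch.no_grad():
        probs = torch.stack(
            [F.softmax(model(x), dim=-1) for _ in range(n_samples)]
        )
    mean_probs = probs.mean(dim=0)  # predictive distribution
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy

def unary_potential(mean_probs):
    """Negative log-probabilities as CRF unary potentials, so that
    better-calibrated class probabilities directly give better unaries."""
    return -mean_probs.clamp_min(1e-12).log()
```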
Related papers
- A Review of Bayesian Uncertainty Quantification in Deep Probabilistic Image Segmentation [0.0]
Advancements in image segmentation play an integral role within the greater scope of Deep Learning-based computer vision.
Uncertainty quantification has been extensively studied within this context, enabling expression of model ignorance (epistemic uncertainty) or data ambiguity (aleatoric uncertainty) to prevent uninformed decision making.
This work provides a comprehensive overview of probabilistic segmentation, discussing the fundamental concepts of uncertainty that govern advancements in the field and their application to various tasks.
arXiv Detail & Related papers (2024-11-25T13:26:09Z)
- Error-Driven Uncertainty Aware Training [7.702016079410588]
Error-Driven Uncertainty Aware Training (EUAT) aims to enhance the ability of neural classifiers to estimate their uncertainty correctly.
The EUAT approach operates during the model's training phase by selectively employing two loss functions depending on whether the training examples are correctly or incorrectly predicted.
We evaluate EUAT using diverse neural models and datasets in the image recognition domains considering both non-adversarial and adversarial settings.
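The summary only states that two losses are switched on a per-example basis. One plausible reading, sketched below with hypothetical loss choices (cross-entropy plus an entropy term whose sign depends on correctness), is:

```python
import torch
import torch.nn.functional as F

def euat_style_loss(logits, targets):
    """Hypothetical per-example loss switch in the spirit of EUAT:
    correctly predicted examples are pushed toward low predictive entropy,
    mispredicted ones toward high entropy. Only the correct/incorrect
    split is taken from the abstract; the paper's losses may differ."""
    ce = F.cross_entropy(logits, targets, reduction="none")
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
    correct = logits.argmax(dim=-1).eq(targets)
    # minimize entropy where correct, maximize it where wrong
    return (ce + torch.where(correct, entropy, -entropy)).mean()
```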
arXiv Detail & Related papers (2024-05-02T11:48:14Z)
- Enabling Uncertainty Estimation in Iterative Neural Networks [49.56171792062104]
We develop an approach to uncertainty estimation that provides state-of-the-art estimates at a much lower computational cost than techniques like ensembles.
We demonstrate its practical value by embedding it in two application domains: road detection in aerial images and the estimation of aerodynamic properties of 2D and 3D shapes.
arXiv Detail & Related papers (2024-03-25T13:06:31Z)
- Lightweight, Uncertainty-Aware Conformalized Visual Odometry [2.429910016019183]
Data-driven visual odometry (VO) is a critical subroutine for autonomous edge robotics.
Emerging edge robotics devices like insect-scale drones and surgical robots lack a computationally efficient framework to estimate VO's predictive uncertainties.
This paper presents a novel, lightweight, and statistically robust framework that leverages conformal inference (CI) to extract VO's uncertainty bands.
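The paper's VO-specific machinery is not reproduced in the abstract, but the basic split-conformal recipe it builds on is standard and can be sketched (variable names are illustrative):

```python
import numpy as np

def split_conformal_band(cal_pred, cal_true, test_pred, alpha=0.1):
    """Generic split conformal interval: calibrate nonconformity scores
    on held-out data, then form a band with 1 - alpha coverage."""
    scores = np.abs(cal_pred - cal_true)  # nonconformity scores
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level)
    return test_pred - q, test_pred + q
```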
arXiv Detail & Related papers (2023-03-03T20:37:55Z)
- Uncertainty-Aware Lidar Place Recognition in Novel Environments [11.30020653282995]
We investigate the task of uncertainty-aware lidar place recognition.
Each predicted place must have an associated uncertainty that can be used to identify and reject incorrect predictions.
We introduce a novel evaluation protocol and present the first comprehensive benchmark for this task.
arXiv Detail & Related papers (2022-10-04T04:06:44Z)
- Leveraging Unlabeled Data to Predict Out-of-Distribution Performance [63.740181251997306]
Real-world machine learning deployments are characterized by mismatches between the source (training) and target (test) distributions.
In this work, we investigate methods for predicting the target domain accuracy using only labeled source data and unlabeled target data.
We propose Average Thresholded Confidence (ATC), a practical method that learns a threshold on the model's confidence and predicts target accuracy as the fraction of unlabeled examples whose confidence exceeds that threshold.
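The ATC procedure as described is simple enough to sketch directly; the confidence function (e.g. max softmax probability or negative entropy) is the main free choice:

```python
import numpy as np

def atc_predict_accuracy(val_conf, val_correct, target_conf):
    """Average Thresholded Confidence (ATC): pick the threshold t on
    labeled source validation data so that the fraction of points with
    confidence above t matches the validation accuracy, then predict
    target accuracy as the fraction of unlabeled target points above t."""
    acc = np.mean(val_correct)
    t = np.quantile(val_conf, 1.0 - acc)  # fraction above t equals acc
    return np.mean(target_conf > t)
```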
arXiv Detail & Related papers (2022-01-11T23:01:12Z)
- Delving into Probabilistic Uncertainty for Unsupervised Domain Adaptive Person Re-Identification [54.174146346387204]
We propose an approach named probabilistic uncertainty guided progressive label refinery (P^2LR) for domain adaptive person re-identification.
A quantitative criterion is established to measure the uncertainty of pseudo labels and facilitate the network training.
Our method outperforms the baseline by 6.5% mAP on the Duke2Market task, while surpassing the state-of-the-art method by 2.5% mAP on the Market2MSMT task.
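The abstract does not spell out the uncertainty criterion, but the selection step it guides can be sketched generically (the keep ratio and sorting rule below are assumptions, not the paper's):

```python
import numpy as np

def select_pseudo_labels(features, labels, uncertainty, keep_ratio=0.8):
    """Uncertainty-guided pseudo-label selection in the spirit of P^2LR:
    keep the most certain fraction of pseudo-labeled target samples for
    the next training round, growing the ratio progressively."""
    order = np.argsort(uncertainty)  # most certain first
    keep = order[: int(len(order) * keep_ratio)]
    return features[keep], labels[keep]
```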
arXiv Detail & Related papers (2021-12-28T07:40:12Z)
- On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
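The abstract names only the ingredients: a set of predictions and a dissimilarity function. A minimal sketch of disagreement-based uncertainty, with the dissimilarity left as a user-supplied callable since the paper's choice is not given here:

```python
def disagreement_uncertainty(preds, dissimilarity):
    """Uncertainty from disagreeing predictions: average pairwise
    dissimilarity over a list of predictions (e.g. from a few stochastic
    forward passes over the same image)."""
    n = len(preds)
    total = sum(dissimilarity(preds[i], preds[j])
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)
```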
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- The Benefit of the Doubt: Uncertainty Aware Sensing for Edge Computing Platforms [10.86298377998459]
We propose an efficient framework for predictive uncertainty estimation in neural networks (NNs) deployed on embedded edge systems.
The framework is built from the ground up to provide predictive uncertainty based only on one forward pass.
Our approach not only obtains robust and accurate uncertainty estimations but also outperforms state-of-the-art methods in terms of systems performance.
arXiv Detail & Related papers (2021-02-11T11:44:32Z) - Learning Calibrated Uncertainties for Domain Shift: A Distributionally
Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, the density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
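The paper's specific estimator and how the ratio enters calibration are not reproduced in the abstract; the standard domain-classifier trick for estimating the density ratio it refers to looks like this (a sketch, not the authors' method):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def density_ratio(source_x, target_x):
    """Estimate p_target(x) / p_source(x) with a binary domain
    classifier: the classifier's odds ratio, rescaled for unequal
    sample sizes, recovers the density ratio."""
    X = np.vstack([source_x, target_x])
    d = np.r_[np.zeros(len(source_x)), np.ones(len(target_x))]
    clf = LogisticRegression(max_iter=1000).fit(X, d)
    p = clf.predict_proba(source_x)[:, 1]  # P(domain = target | x)
    return (p / (1 - p)) * (len(source_x) / len(target_x))
```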
arXiv Detail & Related papers (2020-10-08T02:10:54Z) - Getting a CLUE: A Method for Explaining Uncertainty Estimates [30.367995696223726]
We propose a novel method for interpreting uncertainty estimates from differentiable probabilistic models.
Our method, Counterfactual Latent Uncertainty Explanations (CLUE), indicates how to change an input, while keeping it on the data manifold, such that the model becomes more certain about its prediction.
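The core search that CLUE describes — finding a nearby, on-manifold input with low uncertainty by optimizing in a generative model's latent space — can be sketched as follows; the loss weights, optimizer settings, and distance term are illustrative assumptions:

```python
import torch

def clue(x0, encoder, decoder, uncertainty,
         dist_weight=1.0, steps=200, lr=0.05):
    """Sketch of the CLUE idea: descend in the latent space of a
    generative model to minimize predictive uncertainty plus distance
    to the original input; x* - x0 then explains the uncertainty."""
    z = encoder(x0).detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        x = decoder(z)  # stays on the data manifold by construction
        loss = uncertainty(x) + dist_weight * (x - x0).abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z).detach()
```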
arXiv Detail & Related papers (2020-06-11T21:53:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.