Predicting Safety Misbehaviours in Autonomous Driving Systems using Uncertainty Quantification
- URL: http://arxiv.org/abs/2404.18573v1
- Date: Mon, 29 Apr 2024 10:28:28 GMT
- Title: Predicting Safety Misbehaviours in Autonomous Driving Systems using Uncertainty Quantification
- Authors: Ruben Grewal, Paolo Tonella, Andrea Stocco
- Abstract summary: This paper evaluates different uncertainty quantification methods from the deep learning domain for the anticipatory testing of safety-critical misbehaviours.
We compute uncertainty scores as the vehicle executes, following the intuition that high uncertainty scores are indicative of unsupported runtime conditions.
In our study, we evaluated the effectiveness and computational overhead of two uncertainty quantification methods, namely MC-Dropout and Deep Ensembles, for misbehaviour avoidance.
- Score: 8.213390074932132
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The automated real-time recognition of unexpected situations plays a crucial role in the safety of autonomous vehicles, especially in unsupported and unpredictable scenarios. This paper evaluates different Bayesian uncertainty quantification methods from the deep learning domain for the anticipatory testing of safety-critical misbehaviours during system-level simulation-based testing. Specifically, we compute uncertainty scores as the vehicle executes, following the intuition that high uncertainty scores are indicative of unsupported runtime conditions and can be used to distinguish safe from failure-inducing driving behaviours. In our study, we evaluated the effectiveness and computational overhead of two Bayesian uncertainty quantification methods, namely MC-Dropout and Deep Ensembles, for misbehaviour avoidance. Overall, on three benchmarks from the Udacity simulator comprising both out-of-distribution and unsafe conditions introduced via mutation testing, both methods successfully detected a high number of out-of-bounds episodes, providing early warnings several seconds in advance and outperforming two state-of-the-art misbehaviour prediction methods based on autoencoders and attention maps in terms of effectiveness and efficiency. Notably, Deep Ensembles detected most misbehaviours without any false alarms, even when employing a relatively small number of models, making them computationally feasible for real-time detection. Our findings suggest that incorporating uncertainty quantification methods is a viable approach for building fail-safe mechanisms in deep neural network-based autonomous vehicles.
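As an illustration of the idea, here is a minimal sketch of how such runtime uncertainty scores could be computed with PyTorch. The model, input, and threshold names are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of runtime uncertainty scoring; all names are illustrative.
import torch

def mc_dropout_uncertainty(model, frame, n_samples=32):
    """MC-Dropout: keep dropout active at inference and use the variance
    of the sampled steering predictions as the uncertainty score."""
    model.train()  # keeps dropout stochastic (in practice, enable train
                   # mode on dropout layers only, not batch norm)
    with torch.no_grad():
        preds = torch.stack([model(frame) for _ in range(n_samples)])
    return preds.var(dim=0).mean().item()

def ensemble_uncertainty(models, frame):
    """Deep Ensembles: disagreement across independently trained models."""
    with torch.no_grad():
        preds = torch.stack([m(frame) for m in models])
    return preds.var(dim=0).mean().item()

def warn_if_unsupported(score, threshold):
    """High uncertainty is read as an unsupported runtime condition,
    triggering an early warning before an out-of-bounds episode."""
    return score > threshold
```

Under this reading, the threshold would be calibrated on nominal in-distribution driving so that alarms fire only when the vehicle enters unsupported conditions.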
Related papers
- DADEE: Well-calibrated uncertainty quantification in neural networks for barriers-based robot safety [1.680461336282617]
Control Barrier Function (CBF)-based approaches are popular because they are fast yet safe.
Gaussian Processes (GPs) and MC-Dropout, two common choices for learning and uncertainty estimation, each come with drawbacks:
GPs are non-parametric methods that are slow, while MC-Dropout does not capture aleatoric uncertainty.
We combine the two approaches to obtain more accurate uncertainty estimates both in- and out-of-domain.
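A hedged sketch of what combining the two estimates could look like; the additive fusion rule and all names below are our assumptions, not DADEE's actual formulation.

```python
# Illustrative fusion of GP and MC-Dropout uncertainties; the additive
# combination below is an assumption, not the method from the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def fused_uncertainty(gp: GaussianProcessRegressor,
                      dropout_samples: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Combine GP predictive variance with MC-Dropout sample variance
    (epistemic spread over stochastic passes) into one per-input score."""
    _, gp_std = gp.predict(x, return_std=True)   # GP predictive std per input
    dropout_var = dropout_samples.var(axis=0)    # variance over dropout passes
    return gp_std ** 2 + dropout_var
```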
arXiv Detail & Related papers (2024-06-30T07:55:32Z)
- Cost-Sensitive Uncertainty-Based Failure Recognition for Object Detection [1.8990839669542954]
We propose a cost-sensitive framework for object detection tailored to user-defined budgets.
We derive minimum thresholding requirements to prevent performance degradation.
We automate and optimize the thresholding process to maximize the failure recognition rate.
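As a sketch of what such a budgeted threshold search might look like (the budget formulation and names below are our assumptions, not the paper's exact procedure):

```python
# Illustrative sweep for a cost-sensitive confidence threshold; assumes a
# labeled validation set containing both failures and correct detections.
import numpy as np

def pick_threshold(scores: np.ndarray, is_failure: np.ndarray,
                   fp_budget: float) -> float:
    """Return the uncertainty threshold that maximizes the failure
    recognition rate while keeping false alarms within a user budget."""
    best_t, best_recall = np.inf, 0.0
    for t in np.unique(scores):
        flagged = scores >= t
        fpr = flagged[~is_failure].mean()     # false alarms on good detections
        recall = flagged[is_failure].mean()   # recognized failures
        if fpr <= fp_budget and recall > best_recall:
            best_t, best_recall = t, recall
    return best_t
```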
arXiv Detail & Related papers (2024-04-26T14:03:55Z)
- Uncertainty-Aware AB3DMOT by Variational 3D Object Detection [74.8441634948334]
Uncertainty estimation is an effective tool to provide statistically accurate predictions.
In this paper, we propose a Variational Neural Network-based TANet 3D object detector to generate 3D object detections with uncertainty.
arXiv Detail & Related papers (2023-02-12T14:30:03Z)
- Bayesian autoencoders with uncertainty quantification: Towards trustworthy anomaly detection [78.24964622317634]
In this work, the formulation of Bayesian autoencoders (BAEs) is adopted to quantify the total anomaly uncertainty.
To evaluate the quality of uncertainty, we consider the task of classifying anomalies with the additional option of rejecting predictions of high uncertainty.
Our experiments demonstrate the effectiveness of the BAE and total anomaly uncertainty on a set of benchmark datasets and two real datasets for manufacturing.
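A minimal sketch of the rejection option described above, assuming per-sample anomaly scores and uncertainty estimates are already available; the thresholds and names are illustrative.

```python
# Classify anomalies but abstain on high-uncertainty predictions; names
# and thresholds are illustrative, not the BAE paper's interface.
import numpy as np

def classify_with_rejection(anomaly_scores, uncertainties,
                            score_thr, reject_thr):
    """Label samples 'anomaly' or 'normal' by score, but overwrite with
    'reject' wherever the total anomaly uncertainty is too high."""
    labels = np.where(np.asarray(anomaly_scores) > score_thr,
                      "anomaly", "normal")
    labels[np.asarray(uncertainties) > reject_thr] = "reject"
    return labels
```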
arXiv Detail & Related papers (2022-02-25T12:20:04Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
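One plausible reading of such a disagreement measure, sketched below; the pairwise formulation and the default pixel-wise dissimilarity are assumptions, not the paper's exact definition.

```python
# Illustrative disagreement-based uncertainty: mean pairwise dissimilarity
# between predictions (e.g. segmentation label maps from a few model variants).
import itertools
import numpy as np

def disagreement(preds, dissim=lambda a, b: float(np.mean(a != b))):
    """Average the dissimilarity over all pairs of predictions; higher
    disagreement is read as higher uncertainty."""
    pairs = itertools.combinations(preds, 2)
    return float(np.mean([dissim(a, b) for a, b in pairs]))
```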
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- Deep Learning based Uncertainty Decomposition for Real-time Control [9.067368638784355]
We propose a novel method for detecting the absence of training data using deep learning.
We show its advantages over existing approaches on synthetic and real-world datasets.
We further demonstrate the practicality of this uncertainty estimate in deploying online data-efficient control on a simulated quadcopter.
arXiv Detail & Related papers (2020-10-06T10:46:27Z)
- Identifying Causal-Effect Inference Failure with Uncertainty-Aware Models [41.53326337725239]
We introduce a practical approach for integrating uncertainty estimation into a class of state-of-the-art neural network methods.
We show that our methods enable us to deal gracefully with situations of "no-overlap", common in high-dimensional data.
We show that correctly modeling uncertainty can keep us from giving overconfident and potentially harmful recommendations.
arXiv Detail & Related papers (2020-07-01T00:37:41Z)
- Discriminative Jackknife: Quantifying Uncertainty in Deep Learning via Higher-Order Influence Functions [121.10450359856242]
We develop a frequentist procedure that utilizes influence functions of a model's loss functional to construct a jackknife (or leave-one-out) estimator of predictive confidence intervals.
The resulting Discriminative Jackknife (DJ) satisfies (1) and (2), is applicable to a wide range of deep learning models, is easy to implement, and can be applied in a post-hoc fashion without interfering with model training or compromising its accuracy.
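For intuition, a standard leave-one-out (jackknife) interval looks like the sketch below; the DJ's contribution is approximating the leave-one-out residuals post hoc with influence functions, which this sketch does not reproduce.

```python
# Textbook jackknife predictive interval; the DJ approximates the
# leave-one-out residuals via influence functions instead of retraining.
import numpy as np

def jackknife_interval(pred: float, loo_residuals: np.ndarray,
                       alpha: float = 0.1) -> tuple:
    """Symmetric (1 - alpha) interval from the quantile of absolute
    leave-one-out residuals around the point prediction."""
    q = np.quantile(np.abs(loo_residuals), 1 - alpha)
    return pred - q, pred + q
```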
arXiv Detail & Related papers (2020-06-29T13:36:52Z)
- A Comparison of Uncertainty Estimation Approaches in Deep Learning Components for Autonomous Vehicle Applications [0.0]
A key factor for ensuring safety in Autonomous Vehicles (AVs) is avoiding abnormal behaviors under undesirable and unpredicted circumstances.
Different methods for uncertainty quantification have recently been proposed to measure the inevitable source of errors in data and models.
These methods require a higher computational load, a higher memory footprint, and introduce extra latency, which can be prohibitive in safety-critical applications.
arXiv Detail & Related papers (2020-06-26T18:55:10Z)
- Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? [104.04999499189402]
Out-of-training-distribution (OOD) scenarios are a common challenge for learning agents at deployment.
We propose an uncertainty-aware planning method called robust imitative planning (RIP).
Our method can detect and recover from some distribution shifts, reducing overconfident and catastrophic extrapolations in OOD scenes.
We introduce an autonomous car novel-scene benchmark, CARNOVEL, to evaluate the robustness of driving agents to a suite of tasks with distribution shifts.
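A hedged sketch of the worst-case aggregation idea behind RIP; the model protocol and log_prob interface below are our assumptions, not the paper's API.

```python
# Rank candidate plans by their lowest log-likelihood across an ensemble
# of imitation models and execute the most robust one; interface assumed.
from typing import Protocol, Sequence

class ImitationModel(Protocol):
    def log_prob(self, plan) -> float: ...

def robust_plan(candidates: Sequence, models: Sequence[ImitationModel]):
    """Pick the plan whose worst-case (minimum) log-likelihood over the
    ensemble is highest, discouraging overconfident OOD extrapolation."""
    return max(candidates,
               key=lambda plan: min(m.log_prob(plan) for m in models))
```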
arXiv Detail & Related papers (2020-06-26T11:07:32Z)