Architectural patterns for handling runtime uncertainty of data-driven
models in safety-critical perception
- URL: http://arxiv.org/abs/2206.06838v1
- Date: Tue, 14 Jun 2022 13:31:36 GMT
- Title: Architectural patterns for handling runtime uncertainty of data-driven
models in safety-critical perception
- Authors: Janek Groß, Rasmus Adler, Michael Kläs, Jan Reich, Lisa Jöckel, Roman Gansch
- Abstract summary: We present additional architectural patterns for handling uncertainty.
We evaluate the four patterns qualitatively and quantitatively with respect to safety and performance gains.
We conclude that considering context information about the driving situation makes it possible to accept more or less uncertainty depending on the inherent risk of the situation.
- Score: 1.7616042687330642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven models (DDM) based on machine learning and other AI techniques
play an important role in the perception of increasingly autonomous systems.
Due to the merely implicit definition of their behavior mainly based on the
data used for training, DDM outputs are subject to uncertainty. This poses a
challenge with respect to the realization of safety-critical perception tasks
by means of DDMs. A promising approach to tackling this challenge is to
estimate the uncertainty in the current situation during operation and adapt
the system behavior accordingly. In previous work, we focused on runtime
estimation of uncertainty and discussed approaches for handling uncertainty
estimations. In this paper, we present additional architectural patterns for
handling uncertainty. Furthermore, we evaluate the four patterns qualitatively
and quantitatively with respect to safety and performance gains. For the
quantitative evaluation, we consider a distance controller for vehicle
platooning where performance gains are measured by considering how much the
distance can be reduced in different operational situations. We conclude that
the consideration of context information of the driving situation makes it
possible to accept more or less uncertainty depending on the inherent risk of
the situation, which results in performance gains.
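The quantitative evaluation can be pictured as a controller that maps a runtime uncertainty estimate and the current driving context to a following distance. The sketch below is purely illustrative: the thresholds, context classes, and gap values are assumptions made for this example, not the controller used in the paper.

```python
# Illustrative sketch (not the paper's implementation): a platooning
# distance controller that accepts more or less DDM uncertainty
# depending on the risk of the current operational situation.
# All thresholds, context classes, and distances below are assumed.

NOMINAL_GAP_M = 50.0   # conservative following distance (assumed value)
MINIMUM_GAP_M = 10.0   # smallest gap ever allowed (assumed value)

# Per-context uncertainty budgets: low-risk situations tolerate more
# perception uncertainty before the controller falls back to the
# conservative gap (values are placeholders).
UNCERTAINTY_BUDGET = {
    "highway_clear": 0.30,
    "highway_dense": 0.15,
    "urban": 0.05,
}

def target_gap(uncertainty: float, context: str) -> float:
    """Map a runtime uncertainty estimate in [0, 1] and a driving
    context to a following distance in metres."""
    budget = UNCERTAINTY_BUDGET.get(context, 0.0)  # unknown context -> no budget
    if uncertainty > budget:
        # Uncertainty exceeds what this situation can absorb:
        # degrade gracefully to the conservative distance.
        return NOMINAL_GAP_M
    # Within budget: shrink the gap proportionally to the remaining
    # headroom, which is where the performance gain comes from.
    headroom = 1.0 - uncertainty / budget if budget > 0 else 0.0
    return NOMINAL_GAP_M - headroom * (NOMINAL_GAP_M - MINIMUM_GAP_M)

if __name__ == "__main__":
    for ctx in ("highway_clear", "urban"):
        print(ctx, target_gap(uncertainty=0.04, context=ctx))
```

With these assumed numbers, the same uncertainty estimate yields a much smaller gap in a low-risk context than in a high-risk one, which is the intuition behind the reported performance gains.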
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667] (2024-11-03)
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
- Entropy-Based Uncertainty Modeling for Trajectory Prediction in Autonomous Driving [9.365269316773219] (2024-10-02)
We adopt a holistic approach that focuses on uncertainty quantification, decomposition, and the influence of model composition.
Our method is based on a theoretically grounded information-theoretic approach to measure uncertainty.
We conduct extensive experiments on the nuScenes dataset to assess how different model architectures and configurations affect uncertainty quantification and model robustness.
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139] (2024-08-24)
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
- Improving Explainable Object-induced Model through Uncertainty for Automated Vehicles [13.514721609660521] (2024-02-23)
Recent explainable automated vehicles (AVs) neglect crucial information related to inherent uncertainties while providing explanations for actions.
This study builds upon the "object-induced" model approach that prioritizes the role of objects in scenes for decision-making.
We also explore several advanced training strategies guided by uncertainty, including uncertainty-guided data reweighting and augmentation.
- Scope Compliance Uncertainty Estimate [0.4262974002462632] (2023-12-17)
SafeML is a model-agnostic approach for performing such monitoring.
This work addresses these limitations by changing the binary decision to a continuous metric.
- Measuring the Confidence of Traffic Forecasting Models: Techniques, Experimental Comparison and Guidelines towards Their Actionability [7.489793155793319] (2022-10-28)
Uncertainty estimation provides the user with augmented information about the model's confidence in its predicted outcome.
There is a thin consensus around the different types of uncertainty that one can gauge in machine learning models.
This work aims to cover this lack of research by reviewing different techniques and metrics of uncertainty available in the literature.
- Injecting Planning-Awareness into Prediction and Detection Evaluation [42.228191984697006] (2021-10-07)
We take a step back and critically assess current evaluation metrics, proposing task-aware metrics as a better measure of performance in systems where they are deployed.
Experiments on an illustrative simulation as well as real-world autonomous driving data validate that our proposed task-aware metrics are able to account for outcome asymmetry and provide a better estimate of a model's closed-loop performance.
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871] (2021-05-28)
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
- Evaluating the Safety of Deep Reinforcement Learning Models using Semi-Formal Verification [81.32981236437395] (2020-10-19)
We present a semi-formal verification approach for decision-making tasks based on interval analysis.
Our method obtains comparable results over standard benchmarks with respect to formal verifiers.
Our approach allows us to efficiently evaluate safety properties for decision-making models in practical applications.
- An Uncertainty-based Human-in-the-loop System for Industrial Tool Wear Analysis [68.8204255655161] (2020-07-14)
We show that uncertainty measures based on Monte-Carlo dropout in the context of a human-in-the-loop system increase the system's transparency and performance (a minimal sketch of Monte-Carlo dropout uncertainty estimation follows this list).
A simulation study demonstrates that the uncertainty-based human-in-the-loop system increases performance for different levels of human involvement.
- On the uncertainty of self-supervised monocular depth estimation [52.13311094743952] (2020-05-13)
Self-supervised paradigms for monocular depth estimation are very appealing since they do not require ground truth annotations at all.
We explore for the first time how to estimate the uncertainty for this task and how this affects depth accuracy.
We propose a novel peculiar technique specifically designed for self-supervised approaches.
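As referenced in the human-in-the-loop entry above, Monte-Carlo dropout is a common way to obtain such runtime uncertainty estimates. The sketch below is a generic PyTorch illustration; the network, dropout rate, and number of forward passes are assumptions, not the setup of that paper.

```python
# Generic Monte-Carlo dropout sketch (network, dropout rate, and number
# of forward passes are assumed for illustration, not taken from any of
# the papers listed above).
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Dropout(p=0.25),   # kept stochastic at inference to sample predictions
    nn.Linear(64, 3),
)

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Run repeated stochastic forward passes with dropout enabled and
    return the mean prediction and its per-output standard deviation,
    which serves as the uncertainty measure."""
    model.train()  # keeps dropout layers active during inference
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)

mean, uncertainty = mc_dropout_predict(model, torch.randn(1, 16))
print(mean, uncertainty)
```

Keeping the dropout layers stochastic at inference time turns a single deterministic prediction into a distribution of samples whose spread serves as the uncertainty signal.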