Symbolic Perception Risk in Autonomous Driving
- URL: http://arxiv.org/abs/2303.09416v1
- Date: Thu, 16 Mar 2023 15:49:24 GMT
- Title: Symbolic Perception Risk in Autonomous Driving
- Authors: Guangyi Liu, Disha Kamale, Cristian-Ioan Vasile, and Nader Motee
- Abstract summary: We develop a novel framework to assess the risk of misperception in a traffic sign classification task.
We consider the problem in an autonomous driving setting, where visual input quality gradually improves.
We derive a closed-form representation of the conditional value-at-risk (CVaR) of misperception.
- Score: 4.371383574272895
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop a novel framework to assess the risk of misperception in a traffic
sign classification task in the presence of exogenous noise. We consider the
problem in an autonomous driving setting, where the quality of the visual input
gradually improves, with higher resolution and less noise, as the distance to
the traffic signs decreases. Using perception statistics estimated with
standard classification algorithms, we aim to quantify the risk of
misperception in order to mitigate the effects of imperfect visual observation. By
examining the perception outputs, their expected high-level actions, and the
potential costs, we derive a closed-form representation of the conditional
value-at-risk (CVaR) of misperception. Several case studies support the
effectiveness of our proposed methodology.
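The CVaR quantity in the abstract can be illustrated with a small empirical computation. The Python sketch below is not the paper's closed form; it assumes a hypothetical classifier posterior over sign classes and a hypothetical cost vector for one high-level action, and evaluates the discrete VaR/CVaR of the resulting misperception cost.

```python
import numpy as np

def var_cvar(costs, probs, alpha=0.9):
    """Discrete value-at-risk and conditional value-at-risk of a cost
    distribution, given outcome costs and their probabilities."""
    order = np.argsort(costs)
    c, p = np.asarray(costs)[order], np.asarray(probs)[order]
    cdf = np.cumsum(p)
    k = np.searchsorted(cdf, alpha)      # first outcome where the CDF reaches alpha
    var = c[k]
    # CVaR averages the worst (1 - alpha) probability mass of the cost.
    tail_mass_at_var = cdf[k] - alpha
    cvar = (tail_mass_at_var * var + np.dot(p[k + 1:], c[k + 1:])) / (1.0 - alpha)
    return var, cvar

# Hypothetical example: posterior over {stop, yield, speed-limit} signs and
# the cost of the action "proceed without stopping" under each true class.
posterior = np.array([0.05, 0.15, 0.80])   # illustrative classifier output
action_costs = np.array([100.0, 10.0, 0.0])
print(var_cvar(action_costs, posterior, alpha=0.9))  # VaR = 10.0, CVaR = 55.0
```

Here the expected cost is only 6.5, while the CVaR at level 0.9 is 55: the tail statistic surfaces the rare but expensive confusion with a stop sign that the mean hides.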
Related papers
- Potential Field as Scene Affordance for Behavior Change-Based Visual Risk Object Identification [4.896236083290351]
We study behavior change-based visual risk object identification (Visual-ROI).
Existing methods often show significant limitations in spatial accuracy and temporal consistency.
We propose a new framework with a Bird's Eye View representation to overcome these challenges.
arXiv Detail & Related papers (2024-09-24T08:17:50Z)
- Heteroscedastic Uncertainty Estimation Framework for Unsupervised Registration [32.081258147692395]
We propose a framework for heteroscedastic image uncertainty estimation.
It can adaptively reduce the influence of regions with high uncertainty during unsupervised registration.
Our method consistently outperforms baselines and produces sensible uncertainty estimates.
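The summary does not spell out the objective; a common way to make high-uncertainty regions contribute less, in the spirit of heteroscedastic modeling, is a Gaussian negative log-likelihood in which the network also predicts a per-pixel log-variance. A minimal sketch, assuming a hypothetical variance head:

```python
import numpy as np

def heteroscedastic_loss(residual, log_var):
    """Gaussian negative log-likelihood over a registration error map.

    residual : per-pixel registration error (e.g., the moving image warped
               onto the fixed image, then subtracted) -- hypothetical input
    log_var  : per-pixel log-variance from an extra network head
               (hypothetical; the paper's parameterization may differ)

    High predicted variance shrinks a pixel's data term, which is one way
    to adaptively reduce the influence of uncertain regions.
    """
    precision = np.exp(-log_var)
    return float((0.5 * precision * residual ** 2 + 0.5 * log_var).mean())
```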
arXiv Detail & Related papers (2023-12-01T01:03:06Z)
- Defensive Perception: Estimation and Monitoring of Neural Network Performance under Deployment [0.6982738885923204]
We propose a method for addressing the issue of unnoticed catastrophic deployment and domain shift in neural networks for semantic segmentation in autonomous driving.
Our approach is based on the idea that deep learning-based perception for autonomous driving is uncertain and best represented as a probability distribution.
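Treating the perception output as a probability distribution suggests simple deployment-time monitors. The sketch below is an illustration in that spirit, not necessarily the paper's mechanism: it tracks mean predictive entropy and raises an alarm when it drifts from training-time statistics.

```python
import numpy as np

def mean_predictive_entropy(probs, eps=1e-12):
    """Average per-pixel entropy of softmax outputs with shape (H, W, C)."""
    return float(-(probs * np.log(probs + eps)).sum(axis=-1).mean())

def domain_shift_alarm(frame_entropy, train_mean, train_std, k=3.0):
    """Flag a frame whose entropy deviates more than k sigmas from the
    entropy statistics collected on in-domain training data."""
    return abs(frame_entropy - train_mean) > k * train_std
```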
arXiv Detail & Related papers (2023-08-11T07:45:36Z)
- A Counterfactual Safety Margin Perspective on the Scoring of Autonomous Vehicles' Riskiness [52.27309191283943]
This paper presents a data-driven framework for assessing the risk of different AVs' behaviors.
We propose the notion of counterfactual safety margin, which represents the minimum deviation from nominal behavior that could cause a collision.
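As a toy rendering of that definition, the sketch below searches for the smallest deviation from nominal behavior (here, an extra braking delay in a hypothetical 1-D stopped-obstacle scenario) that results in a collision; the paper's data-driven framework is considerably richer.

```python
import numpy as np

def collides(brake_delay, gap=30.0, speed=20.0, decel=8.0):
    """Hypothetical 1-D model: does the ego vehicle reach a stopped obstacle
    `gap` meters ahead if braking starts `brake_delay` seconds late?"""
    stopping_distance = speed * brake_delay + speed ** 2 / (2.0 * decel)
    return stopping_distance >= gap

def counterfactual_safety_margin(deviations):
    """Smallest tested deviation from nominal behavior that causes a collision."""
    for d in sorted(deviations):
        if collides(brake_delay=d):
            return d
    return None  # no collision within the tested range

print(counterfactual_safety_margin(np.linspace(0.0, 2.0, 41)))  # -> 0.25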
arXiv Detail & Related papers (2023-08-02T09:48:08Z)
- Intersection Warning System for Occlusion Risks using Relational Local Dynamic Maps [0.0]
This work addresses the task of risk evaluation in traffic scenarios with limited observability due to restricted sensorial coverage.
To identify the area of sight, we employ ray casting on a local dynamic map that provides geometric information and road infrastructure.
Resulting risk indicators are utilized to evaluate the driver's current behavior, to warn the driver in critical situations, to give suggestions on how to act safely or to plan safe trajectories.
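A minimal version of the ray-casting step, here on a plain 2-D occupancy grid standing in for the relational local dynamic map (assumed interface, not the paper's implementation):

```python
import numpy as np

def visible_cells(grid, origin, num_rays=360, max_range=50):
    """Cast rays from `origin` (x, y) across a boolean occupancy grid
    (True = occluder) and collect every cell with a clear line of sight."""
    h, w = grid.shape
    seen = set()
    for theta in np.linspace(0.0, 2.0 * np.pi, num_rays, endpoint=False):
        dx, dy = np.cos(theta), np.sin(theta)
        for r in range(1, max_range + 1):
            x = int(round(origin[0] + r * dx))
            y = int(round(origin[1] + r * dy))
            if not (0 <= x < w and 0 <= y < h):
                break               # ray left the map
            seen.add((x, y))
            if grid[y, x]:
                break               # ray is blocked; cells behind stay occluded
    return seen
```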
arXiv Detail & Related papers (2023-03-13T16:01:55Z)
- Architectural patterns for handling runtime uncertainty of data-driven models in safety-critical perception [1.7616042687330642]
We present additional architectural patterns for handling uncertainty estimation.
We evaluate the four patterns qualitatively and quantitatively with respect to safety and performance gains.
We conclude that the consideration of context information of the driving situation makes it possible to accept more or less uncertainty depending on the inherent risk of the situation.
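That conclusion can be phrased as a risk-adaptive acceptance test; the sketch below uses hypothetical inputs and threshold values purely for illustration:

```python
def accept_perception_output(uncertainty, situation_risk, base_threshold=0.3):
    """Accept a perception output only while its uncertainty stays below a
    threshold that tightens as the inherent risk of the situation grows.

    situation_risk : value in [0, 1] derived from context such as speed,
                     visibility, or nearby vulnerable road users (hypothetical)
    """
    threshold = base_threshold * (1.0 - situation_risk)
    return uncertainty <= threshold
```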
arXiv Detail & Related papers (2022-06-14T13:31:36Z)
- Risk-Driven Design of Perception Systems [47.787943101699966]
It is important that we design perception systems to minimize errors that reduce the overall safety of the system.
We develop a risk-driven approach to designing perception systems that accounts for the effect of perceptual errors on the performance of the fully-integrated, closed-loop system.
We evaluate our techniques on a realistic vision-based aircraft detect and avoid application and show that risk-driven design reduces collision risk by 37% over a baseline system.
arXiv Detail & Related papers (2022-05-21T21:14:56Z)
- On the Robustness of Quality Measures for GANs [136.18799984346248]
This work evaluates the robustness of quality measures of generative models such as the Inception Score (IS) and the Fréchet Inception Distance (FID).
We show that such metrics can also be manipulated by additive pixel perturbations.
arXiv Detail & Related papers (2022-01-31T06:43:09Z)
- Residual Error: a New Performance Measure for Adversarial Robustness [85.0371352689919]
A major challenge limiting the widespread adoption of deep neural networks has been their fragility to adversarial attacks.
This study presents the concept of residual error, a new performance measure for assessing the adversarial robustness of a deep neural network.
Experimental results using the case of image classification demonstrate the effectiveness and efficacy of the proposed residual error metric.
arXiv Detail & Related papers (2021-06-18T16:34:23Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
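The summary leaves the dissimilarity function unspecified; with a plain 0/1 disagreement as a stand-in, a per-pixel version of the idea might look like:

```python
import numpy as np
from itertools import combinations

def disagreement_uncertainty(pred_maps):
    """Per-pixel uncertainty as the fraction of predictor pairs that disagree.

    pred_maps : list of (H, W) integer class maps from several predictors
                (ensemble members, augmentations, or dropout passes);
                the 0/1 dissimilarity is a stand-in for the paper's function.
    """
    pairs = list(combinations(pred_maps, 2))
    disagree = sum((a != b).astype(np.float32) for a, b in pairs)
    return disagree / len(pairs)   # 0 = full agreement, 1 = total disagreement
```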
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? [104.04999499189402]
Out-of-training-distribution (OOD) scenarios are a common challenge for learning agents at deployment.
We propose an uncertainty-aware planning method called Robust Imitative Planning (RIP).
Our method can detect and recover from some distribution shifts, reducing the overconfident and catastrophic extrapolations in OOD scenes.
We introduce an autonomous-car novel-scene benchmark, CARNOVEL, to evaluate the robustness of driving agents to a suite of tasks with distribution shifts.
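One reading of RIP's robustness is that candidate plans are scored by a pessimistic aggregation over an ensemble of learned models; a bare-bones sketch with hypothetical interfaces:

```python
import numpy as np

def rip_select(plans, ensemble_log_likelihoods):
    """Choose the plan with the best worst-case score across an ensemble.

    ensemble_log_likelihoods : (num_plans, num_models) array of how plausible
    each learned model finds each candidate plan (hypothetical interface)
    """
    worst_case = ensemble_log_likelihoods.min(axis=1)  # pessimistic aggregation
    return plans[int(np.argmax(worst_case))]
```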
arXiv Detail & Related papers (2020-06-26T11:07:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.