Defensive Perception: Estimation and Monitoring of Neural Network
Performance under Deployment
- URL: http://arxiv.org/abs/2308.06299v1
- Date: Fri, 11 Aug 2023 07:45:36 GMT
- Title: Defensive Perception: Estimation and Monitoring of Neural Network
Performance under Deployment
- Authors: Hendrik Vogt, Stefan Buehler, Mark Schutera
- Abstract summary: We propose a method for addressing the issue of unnoticed catastrophic deployment and domain shift in neural networks for semantic segmentation in autonomous driving.
Our approach is based on the idea that deep learning-based perception for autonomous driving is uncertain and best represented as a probability distribution.
- Score: 0.6982738885923204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a method for addressing the issue of unnoticed
catastrophic deployment and domain shift in neural networks for semantic
segmentation in autonomous driving. Our approach is based on the idea that deep
learning-based perception for autonomous driving is uncertain and best
represented as a probability distribution. As autonomous vehicles' safety is
paramount, it is crucial for perception systems to recognize when the vehicle
is leaving its operational design domain, anticipate hazardous uncertainty, and
reduce the performance of the perception system. To address this, we propose to
encapsulate the neural network under deployment within an uncertainty
estimation envelope that is based on the epistemic uncertainty estimation
through the Monte Carlo Dropout approach. This approach does not require
modification of the deployed neural network and guarantees expected model
performance. Our defensive perception envelope has the capability to estimate a
neural network's performance, enabling monitoring and notification of entering
domains of reduced neural network performance under deployment. Furthermore,
our envelope is extended by novel methods to improve the application in
deployment settings, including reducing compute expenses and confining
estimation noise. Finally, we demonstrate the applicability of our method for
multiple different potential deployment shifts relevant to autonomous driving,
such as transitions into the night, rainy, or snowy domain. Overall, our
approach shows great potential for application in deployment settings and
enables operational design domain recognition via uncertainty, which allows for
defensive perception, safe state triggers, warning notifications, and feedback
for testing or development and adaptation of the perception stack.
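
The envelope idea lends itself to a compact sketch. Below is a minimal, hypothetical PyTorch rendering of the two ingredients the abstract names: Monte Carlo Dropout epistemic uncertainty estimation wrapped around an unmodified segmentation network, and a sliding-window monitor that smooths the per-frame score (confining estimation noise) and flags frames above a threshold. This is not the authors' code: the names (enable_dropout, DefensivePerceptionEnvelope), the entropy-based score, and the threshold/window values are illustrative assumptions.

```python
from collections import deque

import torch
import torch.nn.functional as F


def enable_dropout(model: torch.nn.Module) -> None:
    # Keep the network in eval mode but re-activate its dropout layers,
    # so every forward pass samples a different dropout mask.
    for module in model.modules():
        if module.__class__.__name__.startswith("Dropout"):
            module.train()


@torch.no_grad()
def mc_dropout_uncertainty(model: torch.nn.Module, image: torch.Tensor,
                           n_samples: int = 10) -> float:
    # Monte Carlo Dropout: average the softmax outputs of several
    # stochastic forward passes and score the frame by the mean
    # per-pixel predictive entropy (a proxy for epistemic uncertainty).
    model.eval()
    enable_dropout(model)
    probs = torch.stack(
        [F.softmax(model(image), dim=1) for _ in range(n_samples)]
    )                                    # (n_samples, B, C, H, W)
    mean_probs = probs.mean(dim=0)       # MC estimate of p(y | x)
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=1)
    return entropy.mean().item()         # one scalar score per frame


class DefensivePerceptionEnvelope:
    # Hypothetical monitor: smooth the per-frame uncertainty over a
    # sliding window and flag when the smoothed score exceeds a
    # threshold calibrated on in-domain data.
    def __init__(self, model: torch.nn.Module, threshold: float,
                 window: int = 30):
        self.model = model
        self.threshold = threshold
        self.scores = deque(maxlen=window)

    def step(self, image: torch.Tensor) -> bool:
        self.scores.append(mc_dropout_uncertainty(self.model, image))
        smoothed = sum(self.scores) / len(self.scores)
        return smoothed > self.threshold  # True -> warn / trigger safe state
```

Note that the wrapper treats the deployed network as a black box that merely needs dropout layers, consistent with the abstract's claim that no modification of the deployed model is required; n_samples trades compute cost against the variance of the uncertainty estimate.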
Related papers
- Tractable Function-Space Variational Inference in Bayesian Neural
Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving [68.95178518732965]
A self-driving vehicle (SDV) must be able to perceive its surroundings and predict the future behavior of other traffic participants.
Existing works either perform object detection followed by trajectory forecasting of the detected objects, or predict dense occupancy and flow grids for the whole scene.
This motivates our unified approach to perception and future prediction that implicitly represents occupancy and flow over time with a single neural network.
arXiv Detail & Related papers (2023-08-02T23:39:24Z)
- Boosting Adversarial Robustness using Feature Level Stochastic Smoothing [46.86097477465267]
Adversarial defenses have led to a significant improvement in the robustness of Deep Neural Networks.
We propose a generic method for introducing stochasticity in the network predictions.
We also utilize this for smoothing decision boundaries and rejecting low-confidence predictions.
arXiv Detail & Related papers (2023-06-10T15:11:24Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety-critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- End-to-end Uncertainty-based Mitigation of Adversarial Attacks to Automated Lane Centering [12.11406399284803]
We propose an end-to-end approach that addresses the impact of adversarial attacks throughout perception, planning, and control modules.
Our approach can effectively mitigate the impact of adversarial attacks and can achieve 55% to 90% improvement over the original OpenPilot.
arXiv Detail & Related papers (2021-02-27T22:36:32Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Ramifications of Approximate Posterior Inference for Bayesian Deep Learning in Adversarial and Out-of-Distribution Settings [7.476901945542385]
We show that Bayesian deep learning models on certain occasions marginally outperform conventional neural networks.
Preliminary investigations indicate the potential inherent role of bias due to choices of initialisation, architecture or activation functions.
arXiv Detail & Related papers (2020-09-03T16:58:15Z)
- Can Autonomous Vehicles Identify, Recover From, and Adapt to Distribution Shifts? [104.04999499189402]
Out-of-training-distribution (OOD) scenarios are a common challenge of learning agents at deployment.
We propose an uncertainty-aware planning method called robust imitative planning (RIP).
Our method can detect and recover from some distribution shifts, reducing the overconfident and catastrophic extrapolations in OOD scenes.
We introduce an autonomous car novel-scene benchmark, CARNOVEL, to evaluate the robustness of driving agents to a suite of tasks with distribution shifts.
arXiv Detail & Related papers (2020-06-26T11:07:32Z)