DeepGuard: A Framework for Safeguarding Autonomous Driving Systems from
Inconsistent Behavior
- URL: http://arxiv.org/abs/2111.09533v1
- Date: Thu, 18 Nov 2021 06:00:54 GMT
- Title: DeepGuard: A Framework for Safeguarding Autonomous Driving Systems from
Inconsistent Behavior
- Authors: Manzoor Hussain, Nazakat Ali, and Jang-Eui Hong
- Abstract summary: Deep neural network (DNN)-based autonomous driving systems (ADSs) are expected to reduce road accidents and improve safety in the transportation domain.
A DNN-based ADS may nonetheless exhibit erroneous or unexpected behavior under unexpected driving conditions, which may cause accidents.
This study proposes an autoencoder and time-series based anomaly detection system to prevent safety-critical inconsistent behavior of autonomous vehicles at runtime.
- Score: 0.1529342790344802
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural network (DNN)-based autonomous driving systems (ADSs) are
expected to reduce road accidents and improve safety in the transportation
domain, as they remove the factor of human error from driving tasks. A
DNN-based ADS may nonetheless exhibit erroneous or unexpected behavior under
unexpected driving conditions, which may cause accidents. It is not possible to
generalize DNN model performance across all driving conditions; driving
conditions that were not considered during training of the ADS may therefore
lead to unpredictable consequences for the safety of autonomous vehicles. This
study proposes an autoencoder and time-series based anomaly detection system to
prevent safety-critical inconsistent behavior of autonomous vehicles at
runtime. Our approach, called DeepGuard, consists of two components. The first
component, the inconsistent behavior predictor, uses an autoencoder and time
series analysis to reconstruct driving scenarios; based on the reconstruction
error and a threshold, it distinguishes normal from unexpected driving
scenarios and predicts potential inconsistent behavior. The second component
provides on-the-fly safety guards: it automatically activates healing
strategies to prevent inconsistencies in behavior. We evaluated the performance
of DeepGuard in predicting injected anomalous driving scenarios using publicly
available open-source DNN-based ADSs in the Udacity simulator. Our simulation
results show that the best variant of DeepGuard predicts up to 93% of
inconsistent behaviors on the CHAUFFEUR ADS, 83% on the DAVE2 ADS, and 80% on
the EPOCH ADS, outperforming SELFORACLE and DeepRoad. Overall, DeepGuard can
prevent up to 89% of all predicted inconsistent ADS behaviors by executing
predefined safety guards.
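The abstract's two components map naturally onto a small runtime monitor. The sketch below is illustrative only, assuming a convolutional autoencoder over camera frames, an exponential moving average as a stand-in for the time-series analysis, and a hypothetical `vehicle.activate_safety_guard()` hook for the healing strategy; none of these are DeepGuard's actual implementation details.

```python
# Illustrative sketch only -- the paper's actual architecture, threshold
# fitting, and healing strategies are not specified here; all names and
# hyperparameters are hypothetical.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    """Tiny convolutional autoencoder over camera frames (3x64x64 assumed)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def reconstruction_error(model, frame):
    """Mean squared reconstruction error of a single frame batch."""
    with torch.no_grad():
        return torch.mean((model(frame) - frame) ** 2).item()

class InconsistencyMonitor:
    """Component 1 smooths errors over time; component 2 fires a guard."""
    def __init__(self, model, threshold, alpha=0.2):
        self.model, self.threshold, self.alpha = model, threshold, alpha
        self.ema = 0.0  # moving average stands in for the time-series analysis

    def step(self, frame, vehicle):
        err = reconstruction_error(self.model, frame)
        self.ema = self.alpha * err + (1 - self.alpha) * self.ema
        if self.ema > self.threshold:        # unexpected driving scenario
            vehicle.activate_safety_guard()  # hypothetical healing hook,
            return True                      # e.g. reduce speed / hand over
        return False
```

In such a setup the threshold would typically be fit on nominal driving data, for instance as a high percentile of the reconstruction errors observed during training.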
Related papers
- Automatic AI controller that can drive with confidence: steering vehicle with uncertainty knowledge [3.131134048419781]
This research focuses on the development of a vehicle's lateral control system using a machine learning framework.
We employ a Bayesian Neural Network (BNN), a probabilistic learning model, to address uncertainty quantification.
By establishing a confidence threshold, we can trigger manual intervention, ensuring that the algorithm relinquishes control when it operates outside of safe parameters.
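A minimal sketch of this confidence-gated handover follows, using Monte Carlo dropout as a cheap stand-in for the paper's Bayesian Neural Network; the model, sample count, and threshold value are all assumptions.

```python
# Hypothetical sketch: MC dropout approximates the BNN's predictive uncertainty.
import torch
import torch.nn as nn

class SteeringNet(nn.Module):
    def __init__(self, n_features=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(), nn.Dropout(0.1),
            nn.Linear(128, 1),  # steering angle
        )

    def forward(self, x):
        return self.net(x)

def steer_or_hand_over(model, obs, n_samples=30, max_std=0.05):
    """Sample with dropout active; hand over control if the spread is too large."""
    model.train()  # keep dropout on at inference time for MC sampling
    with torch.no_grad():
        preds = torch.stack([model(obs) for _ in range(n_samples)])
    mean, std = preds.mean(), preds.std()
    if std > max_std:    # operating outside safe confidence parameters
        return None      # relinquish control to the human driver
    return mean.item()
```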
arXiv Detail & Related papers (2024-04-24T23:22:37Z) - Physical Backdoor Attack can Jeopardize Driving with Vision-Large-Language Models [53.701148276912406]
Vision-Large-Language-Models (VLMs) have great application prospects in autonomous driving.
BadVLMDriver is the first backdoor attack against VLMs for autonomous driving that can be launched in practice using physical objects.
BadVLMDriver achieves a 92% attack success rate in inducing a sudden acceleration when coming across a pedestrian holding a red balloon.
arXiv Detail & Related papers (2024-04-19T14:40:38Z) - REDriver: Runtime Enforcement for Autonomous Vehicles [6.97499033700151]
We propose REDriver, a general and modular approach to runtime enforcement of autonomous driving systems.
REDriver monitors the planned trajectory of the ADS based on a quantitative semantics of Signal Temporal Logic (STL).
It uses a gradient-driven algorithm to repair the trajectory when a violation of the specification is likely.
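As a rough illustration of the idea, the sketch below monitors a single fixed property, G(dist > d_min), using the standard min-based quantitative semantics, and repairs the trajectory by gradient ascent on the robustness value. REDriver's actual specification language and repair algorithm are more general; the property, tensor shapes, and hyperparameters here are assumptions.

```python
# Simplified sketch of STL-style runtime enforcement; the real REDriver
# handles arbitrary specifications, not just this one safety property.
import torch

def robustness_always_min_gap(traj, obstacle, d_min=2.0):
    """Quantitative semantics of G(dist > d_min): min over time of dist - d_min.
    Positive => satisfied, negative => violated; differentiable w.r.t. traj."""
    dists = torch.linalg.norm(traj - obstacle, dim=-1)  # (T,) distances
    return torch.min(dists - d_min)

def repair_trajectory(traj, obstacle, steps=50, lr=0.05):
    """Gradient-driven repair: nudge the plan until robustness is non-negative."""
    traj = traj.clone().requires_grad_(True)
    opt = torch.optim.Adam([traj], lr=lr)
    for _ in range(steps):
        rob = robustness_always_min_gap(traj, obstacle)
        if rob.item() >= 0:    # specification satisfied, stop repairing
            break
        (-rob).backward()      # ascend the robustness value
        opt.step()
        opt.zero_grad()
    return traj.detach()
```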
arXiv Detail & Related papers (2024-01-04T13:08:38Z) - Infrastructure-based End-to-End Learning and Prevention of Driver
Failure [68.0478623315416]
FailureNet is a recurrent neural network trained end-to-end on trajectories of both nominal and reckless drivers in a scaled miniature city.
It can accurately identify control failures, upstream perception errors, and speeding drivers, distinguishing them from nominal driving.
Compared to speed or frequency-based predictors, FailureNet's recurrent neural network structure provides improved predictive power, yielding upwards of 84% accuracy when deployed on hardware.
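A minimal stand-in for this kind of recurrent failure detector might look as follows; the feature set, window length, architecture, and class labels are all assumptions, not FailureNet's.

```python
# Illustrative stand-in: a small GRU classifies short trajectory windows
# as nominal driving vs. assumed failure modes.
import torch
import torch.nn as nn

class TrajectoryClassifier(nn.Module):
    # assumed classes: 0=nominal, 1=control failure, 2=perception error, 3=speeding
    def __init__(self, n_features=4, hidden=64, n_classes=4):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, traj):           # traj: (batch, time, n_features)
        _, h = self.rnn(traj)          # h: (1, batch, hidden)
        return self.head(h[-1])        # logits per failure class

model = TrajectoryClassifier()
window = torch.randn(8, 20, 4)         # 8 windows of 20 steps (x, y, v, heading)
print(model(window).argmax(dim=-1))    # predicted class per window
```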
arXiv Detail & Related papers (2023-03-21T22:55:51Z) - AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off road or collide into other vehicles in simulation.
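Stripped of the dynamics-feasibility constraints that make AdvDO's trajectories realistic, the core optimization loop resembles this sketch; the predictor interface, the L-infinity budget, and the attacker objective are assumptions.

```python
# Generic sketch of an optimization-based attack on trajectory prediction.
# The dynamic-feasibility constraints that AdvDO enforces are omitted here.
import torch

def attack_history(predictor, history, target_future, eps=0.2, steps=20, lr=0.05):
    """Perturb the observed history (within an L-inf ball of radius eps)
    so the predictor's output drifts toward an attacker-chosen future."""
    delta = torch.zeros_like(history, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        pred = predictor(history + delta)
        loss = torch.mean((pred - target_future) ** 2)  # pull prediction off course
        loss.backward()
        opt.step()
        opt.zero_grad()
        with torch.no_grad():
            delta.clamp_(-eps, eps)    # keep the perturbation small
    return (history + delta).detach()
```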
arXiv Detail & Related papers (2022-09-19T03:34:59Z) - Control-Aware Prediction Objectives for Autonomous Driving [78.19515972466063]
We present control-aware prediction objectives (CAPOs) to evaluate the downstream effect of predictions on control without requiring the planner to be differentiable.
We propose two types of importance weights that weight the predictive likelihood: one using an attention model between agents, and another based on control variation when exchanging predicted trajectories for ground truth trajectories.
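The weighting idea can be sketched as a weighted negative log-likelihood; the placeholder weights below stand in for the paper's attention-based or control-variation-based importance weights.

```python
# Sketch of a control-aware objective: per-agent likelihood terms are weighted
# by how much each agent matters to the ego plan; weights here are placeholders.
import torch

def capo_style_loss(log_likelihoods, importance_weights):
    """log_likelihoods: (n_agents,) predictive log-likelihoods of ground truth.
    importance_weights: (n_agents,) from e.g. attention or control variation."""
    w = importance_weights / importance_weights.sum()   # normalize
    return -(w * log_likelihoods).sum()                 # weighted NLL

ll = torch.tensor([-1.2, -0.3, -2.5])   # three surrounding agents
w = torch.tensor([0.7, 0.1, 0.2])       # agent 0 strongly affects the plan
print(capo_style_loss(ll, w))
```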
arXiv Detail & Related papers (2022-04-28T07:37:21Z) - Driving Anomaly Detection Using Conditional Generative Adversarial
Network [26.45460503638333]
This study proposes an unsupervised method to quantify driving anomalies using a conditional generative adversarial network (GAN).
The approach predicts upcoming driving scenarios by conditioning the models on the previously observed signals.
The results are validated with perceptual evaluations, where annotators are asked to assess the risk and familiarity of the videos detected with high anomaly scores.
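Leaving out the adversarial training itself, the scoring side of such a system could be sketched as follows: a model conditioned on past signals predicts the next window, and the deviation of what actually happens serves as the anomaly score. All names and shapes are assumptions.

```python
# Sketch of the scoring idea only; the conditional GAN training is omitted.
import torch
import torch.nn as nn

class ConditionalPredictor(nn.Module):
    def __init__(self, n_signals=6, hidden=32, horizon=10):
        super().__init__()
        self.rnn = nn.GRU(n_signals, hidden, batch_first=True)
        self.head = nn.Linear(hidden, horizon * n_signals)
        self.horizon, self.n_signals = horizon, n_signals

    def forward(self, past):            # past: (batch, T, n_signals)
        _, h = self.rnn(past)           # condition on observed signals
        out = self.head(h[-1])
        return out.view(-1, self.horizon, self.n_signals)

def anomaly_score(model, past, actual_future):
    """Large deviation from the predicted scenario => high anomaly score."""
    with torch.no_grad():
        return torch.mean((model(past) - actual_future) ** 2).item()
```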
arXiv Detail & Related papers (2022-03-15T22:10:01Z) - IntentNet: Learning to Predict Intention from Raw Sensor Data [86.74403297781039]
In this paper, we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor and dynamic maps of the environment.
Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reducing reaction time in self-driving applications.
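The shared-backbone, multi-head structure that saves computation can be sketched schematically as below; the rasterized input, channel counts, and heads are assumptions, not IntentNet's actual architecture.

```python
# Schematic of the shared-backbone, multi-head idea: one network consumes a
# fused LiDAR + map raster and emits detection and intent outputs together.
import torch
import torch.nn as nn

class MultiTaskPerception(nn.Module):
    def __init__(self, in_channels=8, n_intents=5):
        super().__init__()
        self.backbone = nn.Sequential(  # shared computation reduces reaction time
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.detect_head = nn.Conv2d(64, 1, 1)           # objectness per BEV cell
        self.intent_head = nn.Conv2d(64, n_intents, 1)   # intent logits per cell

    def forward(self, bev):  # bev: rasterized LiDAR sweep stacked with map layers
        feats = self.backbone(bev)
        return self.detect_head(feats), self.intent_head(feats)
```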
arXiv Detail & Related papers (2021-01-20T00:31:52Z) - Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z) - Towards Safer Self-Driving Through Great PAIN (Physically Adversarial
Intelligent Networks) [3.136861161060885]
We introduce a "Physically Adversarial Intelligent Network" (PAIN) wherein self-driving vehicles interact aggressively.
We train two agents, a protagonist and an adversary, using dueling double deep Q networks (DDDQNs) with prioritized experience replay.
The trained protagonist becomes more resilient to environmental uncertainty and less prone to corner case failures.
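For reference, the dueling architecture and the double-DQN target used to train such agents look roughly as follows; the prioritized replay buffer and the protagonist/adversary interaction loop are omitted, and all sizes are assumptions.

```python
# Sketch of the dueling architecture plus the double-DQN target; the
# prioritized experience replay and adversarial training loop are omitted.
import torch
import torch.nn as nn

class DuelingQNet(nn.Module):
    def __init__(self, n_obs=10, n_actions=5, hidden=128):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_obs, hidden), nn.ReLU())
        self.value = nn.Linear(hidden, 1)         # state value V(s)
        self.adv = nn.Linear(hidden, n_actions)   # advantages A(s, a)

    def forward(self, obs):
        h = self.body(obs)
        a = self.adv(h)
        # Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return self.value(h) + a - a.mean(dim=-1, keepdim=True)

def double_dqn_target(online, target, next_obs, reward, gamma=0.99):
    """Online net picks the action, target net evaluates it (double DQN)."""
    with torch.no_grad():
        best = online(next_obs).argmax(dim=-1, keepdim=True)
        return reward + gamma * target(next_obs).gather(-1, best).squeeze(-1)
```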
arXiv Detail & Related papers (2020-03-24T05:04:13Z) - Stop-and-Go: Exploring Backdoor Attacks on Deep Reinforcement
Learning-based Traffic Congestion Control Systems [16.01681914880077]
We explore the backdooring/trojanning of DRL-based AV controllers.
Malicious actions include vehicle deceleration and acceleration to cause stop-and-go traffic waves to emerge.
Experiments show that the backdoored model does not compromise normal operation performance.
arXiv Detail & Related papers (2020-03-17T08:20:43Z)