Autonomous Driving With Perception Uncertainties: Deep-Ensemble Based Adaptive Cruise Control
- URL: http://arxiv.org/abs/2403.15577v1
- Date: Fri, 22 Mar 2024 19:04:58 GMT
- Title: Autonomous Driving With Perception Uncertainties: Deep-Ensemble Based Adaptive Cruise Control
- Authors: Xiao Li, H. Eric Tseng, Anouck Girard, Ilya Kolmanovsky
- Abstract summary: Advanced perception systems utilizing black-box Deep Neural Networks (DNNs) demonstrate human-like comprehension.
However, their unpredictable behavior and lack of interpretability may hinder deployment in safety-critical scenarios.
- Score: 6.492311803411367
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous driving depends on perception systems to understand the environment and to inform downstream decision-making. While advanced perception systems utilizing black-box Deep Neural Networks (DNNs) demonstrate human-like comprehension, their unpredictable behavior and lack of interpretability may hinder their deployment in safety-critical scenarios. In this paper, we develop an Ensemble of DNN regressors (Deep Ensemble) that generates predictions with quantification of prediction uncertainties. In the scenario of Adaptive Cruise Control (ACC), we employ the Deep Ensemble to estimate distance headway to the lead vehicle from RGB images and enable the downstream controller to account for the estimation uncertainty. We develop an adaptive cruise controller that utilizes Stochastic Model Predictive Control (MPC) with chance constraints to provide a probabilistic safety guarantee. We evaluate our ACC algorithm using a high-fidelity traffic simulator and a real-world traffic dataset and demonstrate that the proposed approach achieves effective speed tracking and car following while maintaining a safe distance headway. Out-of-distribution scenarios are also examined.
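To make the two main ingredients concrete, here is a minimal sketch (not the authors' implementation) of a Deep Ensemble that regresses distance headway from an RGB image and reports the ensemble mean and spread as the estimate and its uncertainty; the network architecture, ensemble size, and input resolution are illustrative assumptions.

```python
# Minimal sketch, not the paper's code: a Deep Ensemble of CNN regressors
# mapping an RGB image to a distance-headway estimate plus an uncertainty
# measure. Architecture, ensemble size, and input size are assumptions.
import torch
import torch.nn as nn


class HeadwayRegressor(nn.Module):
    """Small CNN that regresses distance headway (meters) from an RGB image."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1)).squeeze(-1)


class DeepEnsemble(nn.Module):
    """Independently initialized (and, in practice, independently trained)
    regressors; the spread of member predictions quantifies uncertainty."""

    def __init__(self, n_members: int = 5):
        super().__init__()
        self.members = nn.ModuleList(HeadwayRegressor() for _ in range(n_members))

    @torch.no_grad()
    def forward(self, image: torch.Tensor):
        preds = torch.stack([m(image) for m in self.members])  # (n_members, batch)
        return preds.mean(dim=0), preds.std(dim=0)              # estimate, uncertainty


# Usage: the mean/std pair feeds the downstream stochastic MPC.
ensemble = DeepEnsemble(n_members=5)
mu, sigma = ensemble(torch.rand(1, 3, 224, 224))  # dummy RGB frame
```

The downstream controller can then treat the estimate probabilistically. For example, a chance constraint on the headway at step k with violation probability at most epsilon can, under a Gaussian model of the estimation error (an illustrative modeling choice, not necessarily the paper's exact formulation), be tightened into a deterministic constraint:

```latex
\Pr\!\left(d_k \ge d_{\min}\right) \ge 1-\epsilon
\quad\Longrightarrow\quad
\hat{d}_k - \Phi^{-1}(1-\epsilon)\,\sigma_k \ge d_{\min}
```

Here d_hat_k and sigma_k are the ensemble mean and standard deviation of the headway estimate, d_min is the minimum safe headway, epsilon is the allowed violation probability, and Phi^{-1} is the standard normal quantile function.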
Related papers
- INSIGHT: Enhancing Autonomous Driving Safety through Vision-Language Models on Context-Aware Hazard Detection and Edge Case Evaluation [7.362380225654904]
INSIGHT is a hierarchical vision-language model (VLM) framework designed to enhance hazard detection and edge-case evaluation.
By using multimodal data fusion, our approach integrates semantic and visual representations, enabling precise interpretation of driving scenarios.
Experimental results on the BDD100K dataset demonstrate a substantial improvement in hazard prediction straightforwardness and accuracy over existing models.
arXiv Detail & Related papers (2025-02-01T01:43:53Z) - Safe Adaptive Cruise Control Under Perception Uncertainty: A Deep Ensemble and Conformal Tube Model Predictive Control Approach [5.740554452832947]
This paper considers a Deep Ensemble of Deep Neural Network regressors integrated with Conformal Prediction to predict and quantify uncertainties.
An adaptive cruise controller using Conformal Tube Model Predictive Control is designed to ensure probabilistic safety.
Evaluations with a high-fidelity simulator demonstrate the algorithm's effectiveness in speed tracking and in maintaining a safe distance (a minimal conformal-prediction sketch appears after this list).
arXiv Detail & Related papers (2024-12-05T01:01:53Z) - Automatic AI controller that can drive with confidence: steering vehicle with uncertainty knowledge [3.131134048419781]
This research focuses on the development of a vehicle's lateral control system using a machine learning framework.
We employ a Bayesian Neural Network (BNN), a probabilistic learning model, to address uncertainty quantification.
By establishing a confidence threshold, we can trigger manual intervention, ensuring that control is relinquished from the algorithm when it operates outside of safe parameters.
arXiv Detail & Related papers (2024-04-24T23:22:37Z) - QuAD: Query-based Interpretable Neural Motion Planning for Autonomous Driving [33.609780917199394]
Self-driving vehicles must understand their environment to determine appropriate actions.
Traditional systems rely on object detection to find agents in the scene.
We present a unified, interpretable, and efficient autonomy framework that moves away from cascading perception-prediction-planning modules, instead querying occupancy at the spatio-temporal points relevant to planning.
arXiv Detail & Related papers (2024-04-01T21:11:43Z) - Runtime Stealthy Perception Attacks against DNN-based Adaptive Cruise Control Systems [8.561553195784017]
This paper evaluates the security of deep neural network (DNN)-based ACC systems under runtime perception attacks.
We present a context-aware strategy for the selection of the most critical times for triggering the attacks.
We evaluate the effectiveness of the proposed attack using an actual vehicle, a publicly available driving dataset, and a realistic simulation platform.
arXiv Detail & Related papers (2023-07-18T03:12:03Z) - Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z) - Control-Aware Prediction Objectives for Autonomous Driving [78.19515972466063]
We present control-aware prediction objectives (CAPOs) to evaluate the downstream effect of predictions on control without requiring the planner to be differentiable.
We propose two types of importance weights that weight the predictive likelihood: one using an attention model between agents, and another based on control variation when exchanging predicted trajectories for ground truth trajectories.
arXiv Detail & Related papers (2022-04-28T07:37:21Z) - R4Dyn: Exploring Radar for Self-Supervised Monocular Depth Estimation of Dynamic Scenes [69.6715406227469]
Self-supervised monocular depth estimation in driving scenarios has achieved comparable performance to supervised approaches.
We present R4Dyn, a novel set of techniques to use cost-efficient radar data on top of a self-supervised depth estimation framework.
arXiv Detail & Related papers (2021-08-10T17:57:03Z) - Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet that is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
arXiv Detail & Related papers (2021-05-20T17:52:37Z) - IntentNet: Learning to Predict Intention from Raw Sensor Data [86.74403297781039]
In this paper, we develop a one-stage detector and forecaster that exploits both 3D point clouds produced by a LiDAR sensor as well as dynamic maps of the environment.
Our multi-task model achieves better accuracy than the respective separate modules while saving computation, which is critical to reducing reaction time in self-driving applications.
arXiv Detail & Related papers (2021-01-20T00:31:52Z) - Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
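As promised above for the conformal-prediction related paper (Safe Adaptive Cruise Control Under Perception Uncertainty), here is a minimal split conformal prediction sketch that turns a point estimate of distance headway into an interval with finite-sample coverage 1 - alpha; the residual score and calibration setup are generic assumptions rather than that paper's exact procedure.

```python
# Minimal sketch (an assumption about the related approach, not its code):
# split conformal prediction on top of a regressor's point estimates.
import numpy as np


def conformal_quantile(cal_preds: np.ndarray, cal_labels: np.ndarray, alpha: float) -> float:
    """Calibrate on held-out data: finite-sample-corrected quantile of |residuals|."""
    residuals = np.abs(cal_labels - cal_preds)
    n = len(residuals)
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return float(np.quantile(residuals, q_level, method="higher"))


def conformal_interval(pred: float, q_hat: float) -> tuple[float, float]:
    """Prediction interval [pred - q_hat, pred + q_hat] for a new test estimate."""
    return pred - q_hat, pred + q_hat


# Example with synthetic calibration data (illustrative only):
rng = np.random.default_rng(0)
cal_preds = rng.uniform(10, 60, size=500)              # model estimates (m)
cal_labels = cal_preds + rng.normal(0, 1.5, size=500)  # ground-truth headways (m)
q_hat = conformal_quantile(cal_preds, cal_labels, alpha=0.1)
lower, upper = conformal_interval(35.0, q_hat)         # interval for one new estimate
```

A tube-based MPC can then treat [lower, upper] as a bounded uncertainty set around the headway estimate when enforcing safety constraints.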