Automatic AI controller that can drive with confidence: steering vehicle with uncertainty knowledge
- URL: http://arxiv.org/abs/2404.16893v1
- Date: Wed, 24 Apr 2024 23:22:37 GMT
- Title: Automatic AI controller that can drive with confidence: steering vehicle with uncertainty knowledge
- Authors: Neha Kumari, Sumit Kumar, Sneha Priya, Ayush Kumar, Akash Fogla
- Abstract summary: This research focuses on the development of a vehicle's lateral control system using a machine learning framework.
We employ a Bayesian Neural Network (BNN), a probabilistic learning model, to address uncertainty quantification.
By establishing a confidence threshold, we can trigger manual intervention, ensuring that control is relinquished from the algorithm when it operates outside of safe parameters.
- Score: 3.131134048419781
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In safety-critical systems that interface with the real world, the role of uncertainty in decision-making is pivotal, particularly in the context of machine learning models. For the secure functioning of Cyber-Physical Systems (CPS), it is imperative to manage such uncertainty adeptly. In this research, we focus on the development of a vehicle's lateral control system using a machine learning framework. Specifically, we employ a Bayesian Neural Network (BNN), a probabilistic learning model, to address uncertainty quantification. This capability allows us to gauge the level of confidence or uncertainty in the model's predictions. The BNN based controller is trained using simulated data gathered from the vehicle traversing a single track and subsequently tested on various other tracks. We want to share two significant results: firstly, the trained model demonstrates the ability to adapt and effectively control the vehicle on multiple similar tracks. Secondly, the quantification of prediction confidence integrated into the controller serves as an early-warning system, signaling when the algorithm lacks confidence in its predictions and is therefore susceptible to failure. By establishing a confidence threshold, we can trigger manual intervention, ensuring that control is relinquished from the algorithm when it operates outside of safe parameters.
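The takeover mechanism described in the abstract, repeatedly sampling a probabilistic model and relinquishing control when the spread of its predictions crosses a threshold, can be sketched compactly. The following is a minimal illustration only, using Monte Carlo dropout as a stand-in for the paper's BNN; the class, function, and threshold names (SteeringBNN, control_step, CONF_THRESHOLD) are hypothetical, not taken from the paper.
```python
# Minimal sketch: Monte Carlo dropout as a stand-in for the paper's BNN.
import torch
import torch.nn as nn

class SteeringBNN(nn.Module):
    """Maps features of the track ahead to a steering angle (hypothetical)."""
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(), nn.Dropout(p=0.1),
            nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
            nn.Linear(64, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Maximum tolerated predictive std (assumed units: rad); tuned on held-out tracks.
CONF_THRESHOLD = 0.05

def control_step(model: SteeringBNN, obs: torch.Tensor, n_samples: int = 30):
    """Return (steering_angle, needs_takeover)."""
    model.train()  # keep dropout active so each forward pass is a posterior sample
    with torch.no_grad():
        samples = torch.stack([model(obs) for _ in range(n_samples)])
    mean, std = samples.mean(dim=0), samples.std(dim=0)
    return mean.item(), bool(std.item() > CONF_THRESHOLD)
```
Keeping dropout active at inference time turns each forward pass into an approximate posterior sample, so the standard deviation of the sampled steering angles plays the role of the confidence signal that triggers manual intervention.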
Related papers
- Collision Probability Distribution Estimation via Temporal Difference Learning [0.46085106405479537]
We introduce CollisionPro, a pioneering framework designed to estimate cumulative collision probability distributions.
We formulate our framework within the context of reinforcement learning to pave the way for safety-aware agents.
A comprehensive examination of our framework is conducted using a realistic autonomous driving simulator.
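As a toy illustration of how temporal-difference learning can estimate collision probabilities, the sketch below learns a single scalar P(collision from state s); the actual CollisionPro framework estimates full cumulative distributions over a horizon, and the function name and episode format here are hypothetical.
```python
# Toy tabular TD(0) for a collision-probability value function (hypothetical API).
from collections import defaultdict

def td_collision_prob(episodes, alpha=0.1):
    """episodes: iterable of (states, collided) pairs; states are hashable."""
    V = defaultdict(float)  # V[s] ~ probability of a future collision from s
    for states, collided in episodes:
        V[states[-1]] = 1.0 if collided else 0.0  # terminal outcome is observed
        for t in range(len(states) - 1):
            td_target = V[states[t + 1]]          # bootstrap from the successor state
            V[states[t]] += alpha * (td_target - V[states[t]])
    return V
```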
arXiv Detail & Related papers (2024-07-29T13:32:42Z)
- Safe Navigation in Unstructured Environments by Minimizing Uncertainty in Control and Perception [5.46262127926284]
Uncertainty in control and perception poses challenges for autonomous vehicle navigation in unstructured environments.
This paper introduces a framework that minimizes control and perception uncertainty to ensure safe and reliable navigation.
arXiv Detail & Related papers (2023-06-26T11:24:03Z)
- Self-Aware Trajectory Prediction for Safe Autonomous Driving [9.868681330733764]
Trajectory prediction is one of the key components of the autonomous driving software stack.
In this paper, a self-aware trajectory prediction method is proposed.
The proposed method performed well in terms of self-awareness, memory footprint, and real-time performance.
arXiv Detail & Related papers (2023-05-16T03:53:23Z)
- Vehicle lateral control using Machine Learning for automated vehicle guidance [0.0]
Uncertainty in decision-making is crucial in the machine learning model used for a safety-critical system.
In this work, we design a vehicle's lateral controller using a machine-learning model.
arXiv Detail & Related papers (2023-03-14T19:14:24Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
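For context, a control barrier function certifies safety by keeping h(x) >= 0, and a safety filter minimally corrects a nominal input so that the CBF condition holds. The sketch below is the plain closed-form filter for a single-integrator system, without the model-uncertainty-aware reformulation that is this paper's actual contribution; all names are illustrative.
```python
# Plain CBF safety filter for x' = u (single integrator), closed form.
import numpy as np

def cbf_filter(u_nom: np.ndarray, x: np.ndarray, h, grad_h, alpha: float = 1.0):
    """Project u_nom onto the safe set {u : grad_h(x) @ u >= -alpha * h(x)}."""
    a = grad_h(x)                 # constraint normal
    b = -alpha * h(x)             # right-hand side of a @ u >= b
    slack = b - a @ u_nom
    if slack <= 0.0:              # nominal input already satisfies the CBF condition
        return u_nom
    return u_nom + (slack / (a @ a)) * a  # minimal-norm correction

# Example: stay inside a disc of radius 2, with h(x) = 4 - |x|^2.
h = lambda x: 4.0 - x @ x
grad_h = lambda x: -2.0 * x
u_safe = cbf_filter(np.array([1.0, 0.0]), np.array([1.9, 0.0]), h, grad_h)
```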
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Is my Driver Observation Model Overconfident? Input-guided Calibration Networks for Reliable and Interpretable Confidence Estimates [23.449073032842076]
Driver observation models are rarely deployed under perfect conditions.
We show that raw neural network-based approaches tend to significantly overestimate their prediction quality.
We introduce Calibrated Action Recognition with Input Guidance (CARING), a novel approach that leverages an additional neural network to learn to scale the confidences depending on the video representation.
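A minimal sketch of the input-guided idea, assuming a temperature-scaling formulation in which an auxiliary network predicts a per-sample temperature from the input representation; the class name and layer sizes are illustrative, not CARING's actual architecture.
```python
# Input-conditioned temperature scaling (illustrative, not CARING's architecture).
import torch
import torch.nn as nn

class InputGuidedTemperature(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        self.scaler = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, logits: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
        # softplus keeps the per-sample temperature strictly positive
        T = nn.functional.softplus(self.scaler(features)) + 1e-3
        return logits / T  # calibrated logits; train with NLL on a held-out split
```
Unlike global temperature scaling, which applies one constant to every sample, the auxiliary network can soften confidences more aggressively for inputs that look unfamiliar.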
arXiv Detail & Related papers (2022-04-10T12:43:58Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various manufacturing use cases.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Efficient and Robust LiDAR-Based End-to-End Navigation [132.52661670308606]
We present an efficient and robust LiDAR-based end-to-end navigation framework.
We propose Fast-LiDARNet, which is based on sparse convolution kernel optimization and hardware-aware model design.
We then propose Hybrid Evidential Fusion that directly estimates the uncertainty of the prediction from only a single forward pass.
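As a rough sketch of single-forward-pass uncertainty in the evidential style (following the Normal-Inverse-Gamma parameterisation of deep evidential regression, which may differ from the paper's exact formulation), a regression head can output distribution parameters from which both uncertainty types follow without sampling; the class name is hypothetical.
```python
# Evidential regression head: one pass yields the mean plus both uncertainties.
import torch
import torch.nn as nn

class EvidentialHead(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        self.out = nn.Linear(feat_dim, 4)  # predicts (mu, nu, alpha, beta)

    def forward(self, features: torch.Tensor):
        mu, raw_nu, raw_alpha, raw_beta = self.out(features).unbind(-1)
        nu = nn.functional.softplus(raw_nu)
        alpha = nn.functional.softplus(raw_alpha) + 1.0   # keep alpha > 1
        beta = nn.functional.softplus(raw_beta)
        aleatoric = beta / (alpha - 1.0)                  # expected data noise
        epistemic = beta / (nu * (alpha - 1.0))           # model uncertainty
        return mu, aleatoric, epistemic
```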
arXiv Detail & Related papers (2021-05-20T17:52:37Z)
- Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.