Differentiable Control Barrier Functions for Vision-based End-to-End
Autonomous Driving
- URL: http://arxiv.org/abs/2203.02401v1
- Date: Fri, 4 Mar 2022 16:14:33 GMT
- Title: Differentiable Control Barrier Functions for Vision-based End-to-End
Autonomous Driving
- Authors: Wei Xiao and Tsun-Hsuan Wang and Makram Chahine and Alexander Amini
and Ramin Hasani and Daniela Rus
- Abstract summary: We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
- Score: 100.57791628642624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Guaranteeing safety of perception-based learning systems is challenging due
to the absence of ground-truth state information unlike in state-aware control
scenarios. In this paper, we introduce a safety guaranteed learning framework
for vision-based end-to-end autonomous driving. To this end, we design a
learning system equipped with differentiable control barrier functions (dCBFs)
that is trained end-to-end by gradient descent. Our models are composed of
conventional neural network architectures and dCBFs. They are interpretable at
scale, achieve great test performance under limited training data, and are
safety guaranteed in a series of autonomous driving scenarios such as lane
keeping and obstacle avoidance. We evaluated our framework in a sim-to-real
environment, and tested on a real autonomous car, achieving safe lane following
and obstacle avoidance via Augmented Reality (AR) and real parked vehicles.
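The abstract describes a dCBF layer that filters a nominal control so the safety condition always holds. As a minimal illustrative sketch (not the authors' implementation, which embeds a differentiable QP inside the network), a scalar single-integrator case with barrier h(x) >= 0 admits a closed-form filter:

```python
def cbf_safety_filter(u_nom: float, h: float, alpha: float = 1.0) -> float:
    """Closed-form CBF filter for scalar dynamics x_dot = u.

    With barrier h(x) >= 0 defining the safe set and h_dot = u,
    the CBF condition h_dot + alpha * h >= 0 reduces to u >= -alpha * h.
    The minimally invasive safe control just clips the nominal control.
    """
    return max(u_nom, -alpha * h)
```

For example, with h = 0.5 (clearance to the obstacle) and alpha = 2, a nominal command of -5 is clipped to -1; as clearance shrinks, the permitted approach rate shrinks with it, which is what keeps the closed-loop system inside the safe set.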
Related papers
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs are intelligent decision-makers in behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- Safety-aware Causal Representation for Trustworthy Offline Reinforcement Learning in Autonomous Driving [33.672722472758636]
Offline Reinforcement Learning (RL) approaches exhibit notable efficacy in addressing sequential decision-making problems from offline datasets.
We introduce the saFety-aware strUctured Scenario representatION (FUSION) to facilitate the learning of a generalizable end-to-end driving policy.
Empirical evidence in various driving scenarios attests that Fusion significantly enhances the safety and generalizability of autonomous driving agents.
arXiv Detail & Related papers (2023-10-31T18:21:24Z)
- Evaluation of Safety Constraints in Autonomous Navigation with Deep Reinforcement Learning [62.997667081978825]
We compare two learnable navigation policies: safe and unsafe.
The safe policy takes the constraints into account, while the other does not.
We show that the safe policy generates trajectories with more clearance (distance to the obstacles) and causes fewer collisions during training, without sacrificing overall performance.
arXiv Detail & Related papers (2023-07-27T01:04:57Z)
- Learning Stability Attention in Vision-based End-to-end Driving Policies [100.57791628642624]
We propose to leverage control Lyapunov functions (CLFs) to equip end-to-end vision-based policies with stability properties.
We present an uncertainty propagation technique that is tightly integrated into attention-based CLFs (att-CLFs).
arXiv Detail & Related papers (2023-04-05T20:31:10Z)
- ScaTE: A Scalable Framework for Self-Supervised Traversability Estimation in Unstructured Environments [7.226357394861987]
In this work, we introduce a scalable framework for learning self-supervised traversability.
We train a neural network that predicts the proprioceptive experience that a vehicle would undergo from 3D point clouds.
With driving data of various vehicles gathered from simulation and the real world, we show that our framework is capable of learning the self-supervised traversability of various vehicles.
arXiv Detail & Related papers (2022-09-14T09:52:26Z)
- Tackling Real-World Autonomous Driving using Deep Reinforcement Learning [63.3756530844707]
In this work, we propose a model-free Deep Reinforcement Learning Planner training a neural network that predicts acceleration and steering angle.
In order to deploy the system on board the real self-driving car, we also develop a module represented by a tiny neural network.
arXiv Detail & Related papers (2022-07-05T16:33:20Z)
- BarrierNet: A Safety-Guaranteed Layer for Neural Networks [50.86816322277293]
BarrierNet allows the safety constraints of a neural controller to adapt to changing environments.
We evaluate BarrierNet on a series of control problems such as traffic merging and robot navigation in 2D and 3D space.
arXiv Detail & Related papers (2021-11-22T15:38:11Z)
- Weakly Supervised Reinforcement Learning for Autonomous Highway Driving via Virtual Safety Cages [42.57240271305088]
We present a reinforcement learning based approach to autonomous vehicle longitudinal control, where the rule-based safety cages provide enhanced safety for the vehicle as well as weak supervision to the reinforcement learning agent.
We show that when the model parameters are constrained or sub-optimal, the safety cages can enable a model to learn a safe driving policy even when the model could not be trained to drive through reinforcement learning alone.
arXiv Detail & Related papers (2021-03-17T15:30:36Z)
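Both the main paper's dCBF layer and BarrierNet above rest on projecting a nominal control onto a CBF-induced constraint set via a quadratic program. For a single affine constraint g·u >= c the QP has a closed-form solution, which is what makes the projection easy to differentiate through; the sketch below (setup and names are ours, not taken from any of the listed papers) shows that special case:

```python
import numpy as np

def dcbf_qp_projection(u_nom: np.ndarray, g: np.ndarray, c: float) -> np.ndarray:
    """Solve min ||u - u_nom||^2 s.t. g @ u >= c in closed form.

    This is the single-constraint special case of the CBF-QP that
    safety layers such as BarrierNet solve at every time step.
    """
    slack = float(g @ u_nom) - c
    if slack >= 0.0:
        return u_nom  # nominal control already satisfies the CBF condition
    # Move the minimum distance along g to reach the constraint boundary.
    return u_nom + (-slack / float(g @ g)) * g
```

For instance, with u_nom = [1, 0], g = [0, 1], and c = 0.5, the projection returns [1, 0.5]: the longitudinal command passes through unchanged while the constrained component is lifted just onto the boundary.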
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.