Learning Stability Attention in Vision-based End-to-end Driving Policies
- URL: http://arxiv.org/abs/2304.02733v1
- Date: Wed, 5 Apr 2023 20:31:10 GMT
- Authors: Tsun-Hsuan Wang, Wei Xiao, Makram Chahine, Alexander Amini, Ramin Hasani, Daniela Rus
- Abstract summary: We propose to leverage control Lyapunov functions (CLFs) to equip end-to-end vision-based policies with stability properties.
We present an uncertainty propagation technique that is tightly integrated into att-CLFs.
- Score: 100.57791628642624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern end-to-end learning systems can learn to explicitly infer control from
perception. However, it is difficult to guarantee stability and robustness for
these systems since they are often exposed to unstructured, high-dimensional,
and complex observation spaces (e.g., autonomous driving from a stream of pixel
inputs). We propose to leverage control Lyapunov functions (CLFs) to equip
end-to-end vision-based policies with stability properties and introduce
stability attention in CLFs (att-CLFs) to tackle environmental changes and
improve learning flexibility. We also present an uncertainty propagation
technique that is tightly integrated into att-CLFs. We demonstrate the
effectiveness of att-CLFs via comparison with classical CLFs, model predictive
control, and vanilla end-to-end learning in a photo-realistic simulator and on
a real full-scale autonomous vehicle.
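The abstract leaves the att-CLF construction to the paper itself, but the CLF machinery it builds on can be made concrete with a minimal sketch: a quadratic-program filter that minimally corrects a nominal (e.g., learned) control so a given Lyapunov function decreases along trajectories. The snippet below is a generic single-constraint CLF filter in Python, not the paper's att-CLF; the function name, the gain `gamma`, and the toy system are illustrative assumptions.
```python
import numpy as np

def clf_filter(u_nom, LfV, LgV, V, gamma=1.0):
    """Minimally correct a nominal (e.g. learned) control so the CLF
    decrease condition  LfV + LgV.u <= -gamma*V  holds.  This is the
    closed-form solution of the single-constraint quadratic program
        min_u ||u - u_nom||^2   s.t.   LfV + LgV.u <= -gamma*V.
    """
    u_nom = np.atleast_1d(np.asarray(u_nom, dtype=float))
    LgV = np.atleast_1d(np.asarray(LgV, dtype=float))
    violation = LfV + LgV @ u_nom + gamma * V
    if violation <= 0.0:        # nominal control already decreases V
        return u_nom
    denom = LgV @ LgV
    if denom < 1e-12:           # u has no effect on V at this state; a
        return u_nom            # slack variable would be needed in practice
    # Project u_nom onto the boundary of the CLF constraint half-space.
    return u_nom - (violation / denom) * LgV

# Toy check on the unstable scalar system x' = x + u with V = 0.5*x^2:
x = 2.0
u = clf_filter(u_nom=0.0, LfV=x * x, LgV=x, V=0.5 * x * x, gamma=2.0)
print(u)  # [-4.] -> V_dot = x*(x + u) = -4 = -gamma*V
```
This fixed single-CLF filter corresponds to the classical-CLF baseline the paper compares against; the att-CLF additionally learns attention over the stability constraint to adapt to environmental changes.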
Related papers
- Empowering Autonomous Driving with Large Language Models: A Safety Perspective [82.90376711290808]
This paper explores the integration of Large Language Models (LLMs) into Autonomous Driving systems.
LLMs serve as intelligent decision-makers for behavioral planning, augmented with a safety verifier shield for contextual safety learning.
We present two key studies in a simulated environment: an adaptive LLM-conditioned Model Predictive Control (MPC) and an LLM-enabled interactive behavior planning scheme with a state machine.
arXiv Detail & Related papers (2023-11-28T03:13:09Z)
- Model-Based Reinforcement Learning with Isolated Imaginations [61.67183143982074]
We propose Iso-Dream++, a model-based reinforcement learning approach.
We perform policy optimization based on the decoupled latent imaginations.
This enables long-horizon visuomotor control tasks to benefit from isolating mixed dynamics sources in the wild.
arXiv Detail & Related papers (2023-03-27T02:55:56Z)
- Data-Driven Control with Inherent Lyapunov Stability [3.695480271934742]
We propose Control with Inherent Lyapunov Stability (CoILS) as a method for jointly learning parametric representations of a nonlinear dynamics model and a stabilizing controller from data.
In addition to the stabilizability of the learned dynamics guaranteed by our novel construction, we show that the learned controller stabilizes the true dynamics under certain assumptions on the fidelity of the learned dynamics.
arXiv Detail & Related papers (2023-03-06T14:21:42Z)
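The CoILS entry above describes jointly learning a dynamics model and a stabilizing controller from data. One plausible shape for such a joint objective is a dynamics-fit term plus a penalty on violating a Lyapunov decrease condition under the learned closed loop; the sketch below assumes all names (`f_theta`, `pi_phi`, `V`, the weights) for illustration rather than taking them from the paper.
```python
import torch

def joint_dynamics_controller_loss(f_theta, pi_phi, V, x, u, x_next,
                                   mu=1e-1, alpha=0.1):
    """Joint objective in the spirit of learning a dynamics model and a
    stabilizing controller together: (1) fit f_theta to observed
    transitions, (2) penalize violations of a discrete-time Lyapunov
    decrease condition under the learned model and learned controller.
    Assumed shapes: x, x_next (B, n); u (B, m); V(x) nonnegative, (B,).
    """
    # (1) one-step prediction error on observed transitions (x, u, x_next)
    dyn_loss = torch.mean((f_theta(x, u) - x_next) ** 2)
    # (2) V should shrink along the learned closed loop x+ = f_theta(x, pi_phi(x))
    x_plus = f_theta(x, pi_phi(x))
    decrease_violation = torch.relu(V(x_plus) - (1.0 - alpha) * V(x))
    return dyn_loss + mu * decrease_violation.mean()
```
CoILS itself guarantees stabilizability by construction rather than by a soft penalty; the sketch only conveys the joint-learning idea.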
- Differentiable Control Barrier Functions for Vision-based End-to-End Autonomous Driving [100.57791628642624]
We introduce a safety guaranteed learning framework for vision-based end-to-end autonomous driving.
We design a learning system equipped with differentiable control barrier functions (dCBFs) that is trained end-to-end by gradient descent.
arXiv Detail & Related papers (2022-03-04T16:14:33Z)
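The dCBF entry above trains control barrier functions end-to-end by gradient descent; the paper embeds safety as differentiable layers. A simpler differentiable relaxation, shown here only as a sketch, penalizes violations of the standard CBF condition dh/dt + alpha*h(x) >= 0 so the term can sit inside an ordinary training loss. The function and shape conventions are assumptions, not the paper's API.
```python
import torch

def cbf_violation_loss(h, f, g, x, u, alpha=1.0):
    """Differentiable penalty on violating the CBF condition
        dh/dt + alpha*h(x) = grad_h(x).(f(x) + g(x)*u) + alpha*h(x) >= 0
    for control-affine dynamics, so it can be trained by gradient descent
    alongside the rest of the policy loss.
    Assumed shapes: x (B, n), u (B, 1), h(x) (B, 1), f(x) and g(x) (B, n).
    """
    x = x.detach().requires_grad_(True)
    hx = h(x).squeeze(-1)                           # barrier values, (B,)
    grad_h, = torch.autograd.grad(hx.sum(), x, create_graph=True)
    xdot = f(x) + g(x) * u                          # single-input, broadcast
    hdot = (grad_h * xdot).sum(dim=-1)              # dh/dt along dynamics
    return torch.relu(-(hdot + alpha * hx)).mean()  # zero when safe
```
Unlike this soft penalty, the dCBF layers of the paper enforce the safety constraint as part of the forward pass.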
- When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning? [99.4914671654374]
We propose AdvCL, a novel adversarial contrastive pretraining framework.
We show that AdvCL is able to enhance cross-task robustness transferability without loss of model accuracy and finetuning efficiency.
arXiv Detail & Related papers (2021-11-01T17:59:43Z)
- Robust Stability of Neural-Network Controlled Nonlinear Systems with Parametric Variability [2.0199917525888895]
We develop a theory for stability and stabilizability of a class of neural-network controlled nonlinear systems.
To compute such a robust stabilizing NN controller, a stability-guaranteed training (SGT) algorithm is also proposed.
arXiv Detail & Related papers (2021-09-13T05:09:30Z)
- Recurrent Neural Network Controllers Synthesis with Stability Guarantees for Partially Observed Systems [6.234005265019845]
We consider the important class of recurrent neural networks (RNN) as dynamic controllers for nonlinear uncertain partially-observed systems.
We propose a projected policy gradient method that iteratively enforces the stability conditions in the reparametrized space.
Numerical experiments show that our method learns stabilizing controllers while using fewer samples and achieving higher final performance compared with policy gradient.
arXiv Detail & Related papers (2021-09-08T18:21:56Z)
- Closing the Closed-Loop Distribution Shift in Safe Imitation Learning [80.05727171757454]
We treat safe optimization-based control strategies as experts in an imitation learning problem.
We train a learned policy that can be cheaply evaluated at run-time and that provably satisfies the same safety guarantees as the expert.
arXiv Detail & Related papers (2021-02-18T05:11:41Z)
- Actor-Critic Reinforcement Learning for Control with Stability Guarantee [9.400585561458712]
Reinforcement Learning (RL) and its integration with deep learning have achieved impressive performance in various robotic control tasks.
However, stability is not guaranteed in model-free RL that relies solely on data.
We propose an actor-critic RL framework for control which can guarantee closed-loop stability by employing the classic Lyapunov's method in control theory.
arXiv Detail & Related papers (2020-04-29T16:14:30Z)
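The last entry certifies closed-loop stability in an actor-critic framework via Lyapunov's method. A minimal sketch of the kind of sampled decrease penalty such a framework can add to the actor objective, with the Lyapunov critic `Lc`, the margin `alpha`, and the weight `beta` all assumed for illustration:
```python
import torch

def lyapunov_decrease_penalty(Lc, states, next_states, alpha=0.5, beta=1e-3):
    """Penalty added to the actor loss so that, on sampled transitions,
    a learned Lyapunov candidate satisfies the decrease condition
        Lc(x') - Lc(x) <= -alpha * Lc(x),  i.e.  Lc(x') <= (1-alpha)*Lc(x).
    Holding in expectation over the state distribution, this is the kind
    of sampled certificate a Lyapunov-based actor-critic enforces.
    Lc maps states (B, n) to nonnegative values (B, 1).
    """
    L_now = Lc(states).squeeze(-1)
    L_next = Lc(next_states).squeeze(-1)
    violation = torch.relu(L_next - (1.0 - alpha) * L_now)
    return beta * violation.mean()
```
The cited paper derives its Lyapunov critic from the control cost; the penalty above is a generic stand-in for that construction.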