Model Error Propagation via Learned Contraction Metrics for Safe
Feedback Motion Planning of Unknown Systems
- URL: http://arxiv.org/abs/2104.08695v1
- Date: Sun, 18 Apr 2021 03:34:00 GMT
- Title: Model Error Propagation via Learned Contraction Metrics for Safe
Feedback Motion Planning of Unknown Systems
- Authors: Glen Chou, Necmiye Ozay, and Dmitry Berenson
- Abstract summary: We present a method for contraction-based feedback motion planning of locally incrementally exponentially stabilizable systems with unknown dynamics.
Given a dynamics dataset, our method learns a deep control-affine approximation of the dynamics.
We show results on a 4D car, a 6D quadrotor, and a 22D deformable object manipulation task, showing our method plans safely with learned models of high-dimensional underactuated systems.
- Score: 4.702729080310267
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a method for contraction-based feedback motion planning of locally
incrementally exponentially stabilizable systems with unknown dynamics that
provides probabilistic safety and reachability guarantees. Given a dynamics
dataset, our method learns a deep control-affine approximation of the dynamics.
To find a trusted domain where this model can be used for planning, we obtain
an estimate of the Lipschitz constant of the model error, which is valid with a
given probability, in a region around the training data, providing a local,
spatially-varying model error bound. We derive a trajectory tracking error
bound for a contraction-based controller that is subjected to this model error,
and then learn a controller that optimizes this tracking bound. With a given
probability, we verify the correctness of the controller and tracking error
bound in the trusted domain. We then use the trajectory error bound together
with the trusted domain to guide a sampling-based planner to return
trajectories that can be robustly tracked in execution. We show results on a 4D
car, a 6D quadrotor, and a 22D deformable object manipulation task, showing our
method plans safely with learned models of high-dimensional underactuated
systems, while baselines that plan without considering the tracking error bound
or the trusted domain can fail to stabilize the system and become unsafe.
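The trusted-domain step in the abstract hinges on estimating a Lipschitz constant of the model error in a region around the training data. The sketch below is a minimal illustration of one way to get a crude point estimate from sampled error values, assuming arrays `X` of states and `E` of model-error vectors (observed dynamics minus the learned model's prediction); the paper attaches a probabilistic validity guarantee to its bound, which this naive maximum-slope estimate does not provide, and none of these names come from the authors' code.

```python
import numpy as np

def estimate_error_lipschitz(X, E, n_pairs=20000, seed=0):
    """Crude local Lipschitz estimate for the model error e(x):
    the largest observed pairwise slope ||e(x_i)-e(x_j)|| / ||x_i-x_j||.
    X: (N, d) sample points in the trusted domain, E: (N, m) error values."""
    rng = np.random.default_rng(seed)
    N = X.shape[0]
    i = rng.integers(0, N, size=n_pairs)
    j = rng.integers(0, N, size=n_pairs)
    keep = i != j
    dx = np.linalg.norm(X[i[keep]] - X[j[keep]], axis=1)
    de = np.linalg.norm(E[i[keep]] - E[j[keep]], axis=1)
    slopes = de / np.maximum(dx, 1e-12)
    # A point estimate only; the paper's bound is valid with a stated
    # probability, which a raw sample maximum cannot certify.
    return slopes.max()
```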
Related papers
- Model Checking for Closed-Loop Robot Reactive Planning [0.0]
We show how model checking can be used to create multistep plans for a differential-drive wheeled robot so that it can avoid immediate danger.
Using a small, purpose-built model checking algorithm in situ, we generate plans in real time in a way that reflects the egocentric reactive response of simple biological agents.
arXiv Detail & Related papers (2023-11-16T11:02:29Z)
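As a loose illustration of the entry above (not the paper's algorithm), a multistep plan that avoids immediate danger can be found by breadth-first search over a discrete transition model; `successors`, `is_danger`, and `is_goal` are hypothetical callbacks and states are assumed hashable.

```python
from collections import deque

def plan_to_safety(start, is_danger, is_goal, successors, max_depth=20):
    """Shortest action sequence reaching a goal state without ever
    entering a danger state; returns None if no safe plan exists."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, plan = frontier.popleft()
        if is_goal(state):
            return plan
        if len(plan) >= max_depth:
            continue
        for action, nxt in successors(state):
            if nxt in visited or is_danger(nxt):
                continue
            visited.add(nxt)
            frontier.append((nxt, plan + [action]))
    return None  # no safe plan within the horizon
```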
- Uncertainty-Aware AB3DMOT by Variational 3D Object Detection [74.8441634948334]
Uncertainty estimation is an effective tool to provide statistically accurate predictions.
In this paper, we propose a Variational Neural Network-based TANet 3D object detector to generate 3D object detections with uncertainty.
arXiv Detail & Related papers (2023-02-12T14:30:03Z)
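For a stochastic (variational or dropout-based) detector like the one in the entry above, output uncertainty is commonly obtained by sampling several forward passes; the snippet below is a generic Monte Carlo version of that idea, not the TANet-specific implementation, and assumes `model(points)` returns a tensor of regressed detection parameters.

```python
import torch

def detection_uncertainty(model, points, n_samples=20):
    """Mean and per-parameter standard deviation of a stochastic detector's
    outputs over repeated forward passes."""
    model.train()  # keep stochastic layers (dropout / sampled weights) active
    with torch.no_grad():
        samples = torch.stack([model(points) for _ in range(n_samples)])
    return samples.mean(dim=0), samples.std(dim=0)
```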
- Statistical Safety and Robustness Guarantees for Feedback Motion Planning of Unknown Underactuated Stochastic Systems [1.0323063834827415]
We propose a sampling-based planner that uses the mean dynamics model and simultaneously bounds the closed-loop tracking error via a learned disturbance bound.
We validate that our guarantees translate to empirical safety in simulation on a 10D quadrotor, and in the real world on a physical CrazyFlie quadrotor and Clearpath Jackal robot.
arXiv Detail & Related papers (2022-12-13T19:38:39Z)
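The entry above (like the main paper) feeds a closed-loop tracking error bound into a sampling-based planner. A minimal sketch of how such a bound might be consumed by the planner's collision checker, specialized here to 2D positions and circular obstacles with the bound treated as a fixed tube radius (an assumption for illustration; the papers use state-dependent bounds):

```python
import numpy as np

def tube_is_safe(traj_xy, tube_radius, obstacles):
    """True if every obstacle clears the planned trajectory inflated by the
    tracking error bound.  traj_xy: (T, 2) planned positions;
    obstacles: list of (center_xy, radius) circles."""
    for center, r in obstacles:
        d = np.linalg.norm(traj_xy - np.asarray(center), axis=1)
        if np.any(d < r + tube_radius):
            return False
    return True
```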
- Safe Output Feedback Motion Planning from Images via Learned Perception Modules and Contraction Theory [6.950510860295866]
We present a motion planning algorithm for a class of uncertain control-affine nonlinear systems that guarantees runtime safety and goal reachability.
We train a perception system that seeks to invert a subset of the state from an observation, and estimate an upper bound on the perception error.
Next, we use contraction theory to design a stabilizing state feedback controller and a convergent dynamic state observer.
We derive a bound on the trajectory tracking error when this controller is subjected to errors in the dynamics and incorrect state estimates.
arXiv Detail & Related papers (2022-06-14T02:03:27Z)
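The preceding entry estimates an upper bound on the perception error before propagating it through a contraction-based controller and observer. A simple empirical surrogate for such a bound, assuming a held-out set of true states `x_true` and perception estimates `x_est` (the paper's bound carries a probabilistic guarantee that this quantile alone does not):

```python
import numpy as np

def perception_error_bound(x_true, x_est, quantile=0.99):
    """Empirical upper bound on the state-estimation error over a
    validation set, taken at a chosen quantile of the error norms."""
    err = np.linalg.norm(x_true - x_est, axis=1)
    return np.quantile(err, quantile)
```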
- Trajectory Forecasting from Detection with Uncertainty-Aware Motion Encoding [121.66374635092097]
Trajectories obtained from object detection and tracking are inevitably noisy.
We propose a trajectory predictor directly based on detection results without relying on explicitly formed trajectories.
arXiv Detail & Related papers (2022-02-03T09:09:56Z)
- Monitoring Model Deterioration with Explainable Uncertainty Estimation via Non-parametric Bootstrap [0.0]
Monitoring machine learning models once they are deployed is challenging.
It is even more challenging to decide when to retrain models in real-case scenarios when labeled data is beyond reach.
In this work, we use non-parametric bootstrapped uncertainty estimates and SHAP values to provide explainable uncertainty estimation.
arXiv Detail & Related papers (2022-01-27T17:23:04Z)
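The monitoring entry above relies on non-parametric bootstrapped uncertainty estimates. A compact sketch of that ingredient, assuming any scikit-learn-style estimator `model` and arrays `X_train`, `y_train`, `X_query`; the SHAP-based explanation step is omitted.

```python
import numpy as np
from sklearn.base import clone

def bootstrap_uncertainty(model, X_train, y_train, X_query, n_boot=50, seed=0):
    """Refit the model on bootstrap resamples of the training set and use the
    spread of predictions at X_query as an uncertainty estimate."""
    rng = np.random.default_rng(seed)
    n = len(X_train)
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # sample with replacement
        m = clone(model).fit(X_train[idx], y_train[idx])
        preds.append(m.predict(X_query))
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)  # mean prediction, uncertainty
```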
- Guaranteed Trajectory Tracking under Learned Dynamics with Contraction Metrics and Disturbance Estimation [5.147919654191323]
This paper presents an approach to trajectory-centric learning control based on contraction metrics and disturbance estimation.
The proposed framework is validated on a planar quadrotor example.
arXiv Detail & Related papers (2021-12-15T15:57:33Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
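The GP-based entry above learns unknown dynamics from data before synthesizing a robust controller around the linearized GP. A minimal sketch of the first step only, fitting a GP to one output dimension of the state derivative from stacked state-input data `XU` (the robust LQR synthesis itself is beyond this snippet):

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def fit_dynamics_gp(XU, xdot_dim):
    """Fit a GP to one component of x_dot = f(x, u) from data; the posterior
    standard deviation quantifies model uncertainty for downstream robust design."""
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(XU, xdot_dim)
    return gp

# Example query: mean, std = fit_dynamics_gp(XU, xdot[:, 0]).predict(XU_query, return_std=True)
```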
- Chance-Constrained Trajectory Optimization for Safe Exploration and Learning of Nonlinear Systems [81.7983463275447]
Learning-based control algorithms require data collection with abundant supervision for training.
We present a new approach for optimal motion planning with safe exploration that integrates chance-constrained optimal control with dynamics learning and feedback control.
arXiv Detail & Related papers (2020-05-09T05:57:43Z)
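The chance-constrained entry above integrates chance constraints into optimal control. A standard way to make a single linear chance constraint tractable under Gaussian uncertainty, shown as a stand-alone check rather than the paper's full solver: P(a^T x <= b) >= 1 - eps is enforced via the tightened constraint a^T mu + z_{1-eps} sqrt(a^T Sigma a) <= b.

```python
import numpy as np
from scipy.stats import norm

def tightened_halfspace_ok(a, b, mean, cov, eps=0.05):
    """Deterministic surrogate for P(a^T x <= b) >= 1 - eps with x ~ N(mean, cov)."""
    a = np.asarray(a, dtype=float)
    margin = norm.ppf(1.0 - eps) * np.sqrt(a @ cov @ a)
    return a @ mean + margin <= b
```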
- Learning Control Barrier Functions from Expert Demonstrations [69.23675822701357]
We propose a learning-based approach to safe controller synthesis based on control barrier functions (CBFs).
We analyze an optimization-based approach to learning a CBF that enjoys provable safety guarantees under suitable Lipschitz assumptions on the underlying dynamical system.
To the best of our knowledge, these are the first results that learn provably safe control barrier functions from data.
arXiv Detail & Related papers (2020-04-07T12:29:06Z)
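Both CBF entries above learn a barrier function h(x) from demonstrations; once h is available, safety is typically enforced at runtime by a minimally invasive filter on the nominal control. The sketch below gives the closed-form projection for a single affine CBF constraint on control-affine dynamics x_dot = f(x) + g(x) u, a generic construction rather than either paper's learning procedure; all names are placeholders.

```python
import numpy as np

def cbf_safety_filter(u_nom, grad_h, f_x, g_x, h_x, alpha=1.0):
    """Enforce grad_h . (f + g u) + alpha * h >= 0 by projecting the nominal
    control onto that half-space (closed form for one constraint)."""
    a = grad_h @ g_x                    # constraint in u:  a @ u >= b
    b = -alpha * h_x - grad_h @ f_x
    slack = a @ u_nom - b
    if slack >= 0 or not np.any(a):     # already safe, or h has no control authority
        return u_nom
    return u_nom + (b - a @ u_nom) / (a @ a) * a
```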
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.