Safe Beyond the Horizon: Efficient Sampling-based MPC with Neural Control Barrier Functions
- URL: http://arxiv.org/abs/2502.15006v1
- Date: Thu, 20 Feb 2025 19:59:11 GMT
- Title: Safe Beyond the Horizon: Efficient Sampling-based MPC with Neural Control Barrier Functions
- Authors: Ji Yin, Oswin So, Eric Yang Yu, Chuchu Fan, Panagiotis Tsiotras
- Abstract summary: A common problem when using model predictive control (MPC) in practice is the satisfaction of safety specifications beyond the prediction horizon. We propose a new sampling strategy that greatly reduces the variance of the estimated optimal control. The resulting Neural Shield-VIMPC controller yields substantial safety improvements compared to existing sampling-based MPC controllers.
- Score: 23.693610702522236
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A common problem when using model predictive control (MPC) in practice is the satisfaction of safety specifications beyond the prediction horizon. While theoretical works have shown that safety can be guaranteed by enforcing a suitable terminal set constraint or a sufficiently long prediction horizon, these techniques are difficult to apply and thus are rarely used by practitioners, especially in the case of general nonlinear dynamics. To solve this problem, we impose a tradeoff between exact recursive feasibility, computational tractability, and applicability to "black-box" dynamics by learning an approximate discrete-time control barrier function and incorporating it into a variational inference MPC (VIMPC), a sampling-based MPC paradigm. To handle the resulting state constraints, we further propose a new sampling strategy that greatly reduces the variance of the estimated optimal control, improving sample efficiency and enabling real-time planning on a CPU. The resulting Neural Shield-VIMPC (NS-VIMPC) controller yields substantial safety improvements compared to existing sampling-based MPC controllers, even under badly designed cost functions. We validate our approach in both simulation and real-world hardware experiments.
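The pipeline the abstract describes (sample candidate control sequences, roll them out, reject rollouts that violate a learned discrete-time CBF condition, and combine the survivors with exponential weights) can be illustrated with a minimal sketch. Everything below is an illustrative assumption: toy 2D single-integrator dynamics, a hand-coded distance function standing in for the neural CBF, and generic MPPI-style weighting rather than the paper's actual NS-VIMPC sampler.

```python
import numpy as np

rng = np.random.default_rng(0)

def dynamics(x, u, dt=0.1):
    """Toy 2D single integrator standing in for the real black-box system."""
    return x + dt * u

def h_cbf(x):
    """Stand-in for the learned discrete-time CBF: h(x) >= 0 means safe.
    Here: stay outside a unit disk centered at the origin."""
    return np.linalg.norm(x) - 1.0

def cost(x, u, goal):
    return np.sum((x - goal) ** 2) + 0.1 * np.sum(u ** 2)

def shielded_vimpc_step(x0, u_mean, goal, K=256, H=20, sigma=0.5, lam=1.0, alpha=0.9):
    """One sampling-based MPC update with a CBF shield: sample K control
    sequences, reject rollouts violating the discrete-time CBF condition
    h(x_next) >= alpha * h(x), and exponentially reweight the survivors."""
    U = u_mean[None] + sigma * rng.standard_normal((K, H, 2))
    costs = np.full(K, np.inf)
    for k in range(K):
        x, c = x0.copy(), 0.0
        for t in range(H):
            x_next = dynamics(x, U[k, t])
            if h_cbf(x_next) < alpha * h_cbf(x):   # CBF condition violated
                c = np.inf
                break
            c += cost(x_next, U[k, t], goal)
            x = x_next
        costs[k] = c
    if not np.isfinite(costs.min()):
        return u_mean                              # no safe sample; keep old plan
    w = np.exp(-(costs - costs.min()) / lam)
    w /= w.sum()
    return np.tensordot(w, U, axes=1)              # weighted mean control sequence

x = np.array([-2.0, 0.1])
u_mean = np.zeros((20, 2))
for _ in range(60):
    u_mean = shielded_vimpc_step(x, u_mean, goal=np.array([2.0, 0.0]))
    x = dynamics(x, u_mean[0])                     # apply first input, then shift
    u_mean = np.roll(u_mean, -1, axis=0)
print("final state:", x, "  h(x) =", round(h_cbf(x), 3))
```

Rejecting unsafe samples before weighting is what keeps the planner safe even under a badly designed cost; the paper's lower-variance sampling strategy, which this sketch does not reproduce, is what makes that screening sample-efficient enough for real-time CPU planning.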
Related papers
- Safe Learning-Based Optimization of Model Predictive Control: Application to Battery Fast-Charging [0.0]
We discuss an approach that integrates model predictive control with safe Bayesian optimization to optimize long-term closed-loop performance.
This work extends previous research by emphasizing closed-loop constraint satisfaction.
As a practical application, we apply our approach to fast charging of lithium-ion batteries.
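As a rough picture of how such an integration can work, the sketch below runs safe Bayesian optimization over a single MPC tuning parameter: a GP models the closed-loop safety constraint, and only parameters whose upper confidence bound satisfies the constraint are ever evaluated. The toy closed-loop model, kernel settings, and constants are all illustrative assumptions, not the paper's battery setup.

```python
import numpy as np

# Minimal sketch of safe Bayesian optimization over one MPC tuning parameter.
def closed_loop_rollout(theta):
    """Stand-in for running the MPC-controlled plant with parameter theta.
    Returns (performance, constraint), where constraint <= 0 means safe."""
    performance = -(theta - 0.6) ** 2          # peak performance near theta = 0.6
    constraint = theta - 0.8                   # unsafe (e.g. over-temperature) past 0.8
    return performance, constraint

def gp_posterior(X, y, Xs, ell=0.15, sf=1.0, noise=1e-4):
    """RBF-kernel GP posterior mean/std on test points Xs."""
    def k(a, b):
        return sf**2 * np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = sf**2 - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.sqrt(np.maximum(var, 1e-12))

grid = np.linspace(0.0, 1.0, 200)
X = np.array([0.3])                             # known-safe initial parameter
perf, con = closed_loop_rollout(X[0])
Yp, Yc = np.array([perf]), np.array([con])
beta = 2.0                                      # confidence multiplier

for _ in range(15):
    mu_c, sd_c = gp_posterior(X, Yc, grid)
    safe = mu_c + beta * sd_c <= 0.0            # only query confidently-safe params
    mu_p, sd_p = gp_posterior(X, Yp, grid)
    ucb = np.where(safe, mu_p + beta * sd_p, -np.inf)
    theta = grid[np.argmax(ucb)]
    perf, con = closed_loop_rollout(theta)
    X, Yp, Yc = np.append(X, theta), np.append(Yp, perf), np.append(Yc, con)

print("best safe parameter:", X[np.argmax(Yp)])
```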
arXiv Detail & Related papers (2024-10-07T12:23:40Z)
- Towards safe and tractable Gaussian process-based MPC: Efficient sampling within a sequential quadratic programming framework [35.79393879150088]
We propose a robust GP-MPC formulation that guarantees constraint satisfaction with high probability.
We highlight the improved reachable set approximation compared to existing methods, as well as real-time feasible computation times.
arXiv Detail & Related papers (2024-09-13T08:15:20Z)
- Automatically Adaptive Conformal Risk Control [49.95190019041905]
We propose a methodology for achieving approximate conditional control of statistical risks by adapting to the difficulty of test samples.
Our framework goes beyond traditional conditional risk control based on user-provided conditioning events to the algorithmic, data-driven determination of appropriate function classes for conditioning.
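For context, the baseline marginal conformal risk control rule that such adaptive methods build on fits in a few lines: choose the smallest threshold whose finite-sample-adjusted calibration risk stays below the target level. The model, loss, and constants below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 500
y_true = rng.normal(size=n)
y_pred = y_true + 0.3 * rng.normal(size=n)       # imperfect "model" predictions

def loss(lmbda):
    """Miscoverage of the interval [y_pred - lmbda, y_pred + lmbda];
    monotone non-increasing in lmbda and bounded in [0, 1]."""
    return (np.abs(y_true - y_pred) > lmbda).astype(float)

alpha, B = 0.1, 1.0                              # risk target, loss upper bound
for lmbda in np.linspace(0.0, 3.0, 301):         # smallest lambda controlling risk
    r_hat = loss(lmbda).mean()
    if (n / (n + 1)) * r_hat + B / (n + 1) <= alpha:
        break
print(f"lambda = {lmbda:.2f}, calibration risk = {loss(lmbda).mean():.3f}")
```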
arXiv Detail & Related papers (2024-06-25T08:29:32Z)
- On the Sample Complexity of Imitation Learning for Smoothed Model Predictive Control [27.609098229134]
We show how a smoothed expert can be designed for a general class of systems.
We prove a bound on the optimality gap of the analytic center associated with a convex Lipschitz function.
arXiv Detail & Related papers (2023-06-02T20:43:38Z)
- Robust Control for Dynamical Systems With Non-Gaussian Noise via Formal Abstractions [59.605246463200736]
We present a novel controller synthesis method that does not rely on any explicit representation of the noise distributions.
First, we abstract the continuous control system into a finite-state model that captures noise by probabilistic transitions between discrete states.
We use state-of-the-art verification techniques to provide guarantees on the interval Markov decision process and compute a controller for which these guarantees carry over to the original control system.
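The core robust computation on such an interval MDP can be sketched directly: an adversary picks, within each transition-probability interval, the distribution that minimizes the probability of reaching the goal, and value iteration under that adversary yields a lower bound. The tiny three-state chain below is an illustrative assumption, not an abstraction produced by the paper's method.

```python
import numpy as np

def worst_case_expectation(V, p_lo, p_hi):
    """Distribution in [p_lo, p_hi] (summing to 1) that minimizes E[V]."""
    order = np.argsort(V)                  # pour spare mass into low-value states
    p = p_lo.copy()
    budget = 1.0 - p_lo.sum()
    for i in order:
        extra = min(p_hi[i] - p_lo[i], budget)
        p[i] += extra
        budget -= extra
    return float(p @ V)

# States: 0 = start, 1 = risky, 2 = goal (absorbing); one action per state.
p_lo = np.array([[0.1, 0.2, 0.5],
                 [0.3, 0.1, 0.4],
                 [0.0, 0.0, 1.0]])         # interval lower bounds per row
p_hi = np.array([[0.3, 0.4, 0.7],
                 [0.5, 0.3, 0.6],
                 [0.0, 0.0, 1.0]])         # interval upper bounds per row

V = np.array([0.0, 0.0, 1.0])              # lower bound on P(reach goal)
for _ in range(100):
    V_new = np.array([worst_case_expectation(V, p_lo[s], p_hi[s]) for s in range(3)])
    V_new[2] = 1.0                         # goal stays reached
    if np.max(np.abs(V_new - V)) < 1e-9:
        break
    V = V_new
print("worst-case reach probabilities:", np.round(V, 4))
```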
arXiv Detail & Related papers (2023-01-04T10:40:30Z)
- Learning Sampling Distributions for Model Predictive Control [36.82905770866734]
Sampling-based approaches have become a cornerstone of contemporary Model Predictive Control (MPC).
We propose to carry out all operations in the latent space, allowing us to take full advantage of the learned distribution.
Specifically, we frame the learning problem as bi-level optimization and show how to train the controller with backpropagation-through-time.
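The latent-space idea can be sketched with a cross-entropy-method loop in which candidates are drawn from a Gaussian over a low-dimensional latent space and decoded into full control sequences, so all distribution updates happen in the latent space. Here the decoder is a fixed random linear map purely for illustration; in the paper it is a learned model trained via the bi-level objective with backpropagation-through-time.

```python
import numpy as np

rng = np.random.default_rng(2)

H, d_latent = 20, 4
decoder = rng.standard_normal((H, d_latent)) / np.sqrt(d_latent)

def rollout_cost(u_seq, x0=2.0, dt=0.1):
    """Quadratic regulation cost for a toy 1D integrator."""
    x, c = x0, 0.0
    for u in u_seq:
        x = x + dt * u
        c += x**2 + 0.01 * u**2
    return c

# CEM in latent space: sample latents, decode, score, refit the latent
# Gaussian to the elite samples.
mu, sigma = np.zeros(d_latent), np.ones(d_latent)
for _ in range(30):
    z = mu + sigma * rng.standard_normal((128, d_latent))
    costs = np.array([rollout_cost(decoder @ zk) for zk in z])
    elites = z[np.argsort(costs)[:16]]
    mu, sigma = elites.mean(axis=0), elites.std(axis=0) + 1e-3

print("best cost:", rollout_cost(decoder @ mu))
```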
arXiv Detail & Related papers (2022-12-05T20:35:36Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes [73.0364959221845]
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Reinforcement Learning of the Prediction Horizon in Model Predictive Control [1.536989504296526]
We propose to learn the optimal prediction horizon as a function of the state using reinforcement learning (RL).
We show how the RL learning problem can be formulated and test our method on two control tasks, showing clear improvements over the fixed horizon MPC scheme.
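A minimal version of this formulation is a tabular Q-learning agent whose action is the horizon length and whose reward trades control performance against a computation penalty proportional to the horizon. The 1D plant, least-squares MPC, discretization, and constants below are illustrative assumptions; on such a simple system short horizons already suffice, so the sketch shows the formulation rather than the paper's improvements.

```python
import numpy as np

rng = np.random.default_rng(3)

HORIZONS = [2, 5, 10, 20]
dt, r_u, c_compute = 0.1, 0.1, 0.002

def mpc_action(x0, T):
    """Unconstrained finite-horizon MPC for x_{t+1} = x_t + dt*u_t,
    solved as a least-squares problem."""
    L = np.tril(np.ones((T, T)))
    M = np.vstack([dt * L, np.sqrt(r_u) * np.eye(T)])
    y = np.concatenate([-x0 * np.ones(T), np.zeros(T)])
    u = np.linalg.lstsq(M, y, rcond=None)[0]
    return u[0]                                    # apply first input only

def bucket(x):
    """Discretize the state for the tabular Q-function."""
    return int(np.digitize(x, [-2, -0.5, 0.5, 2]))

Q = np.zeros((5, len(HORIZONS)))
eps, lr, gamma = 0.2, 0.1, 0.95
for episode in range(300):
    x = rng.uniform(-3, 3)
    for _ in range(40):
        s = bucket(x)
        a = rng.integers(len(HORIZONS)) if rng.random() < eps else int(np.argmax(Q[s]))
        u = mpc_action(x, HORIZONS[a])
        x = x + dt * u + 0.01 * rng.standard_normal()
        reward = -(x**2 + r_u * u**2 + c_compute * HORIZONS[a])
        Q[s, a] += lr * (reward + gamma * Q[bucket(x)].max() - Q[s, a])

print("learned horizon per state bucket:", [HORIZONS[a] for a in Q.argmax(axis=1)])
```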
arXiv Detail & Related papers (2021-02-22T15:52:32Z)
- Closing the Closed-Loop Distribution Shift in Safe Imitation Learning [80.05727171757454]
We treat safe optimization-based control strategies as experts in an imitation learning problem.
We train a learned policy that can be cheaply evaluated at run-time and that provably satisfies the same safety guarantees as the expert.
arXiv Detail & Related papers (2021-02-18T05:11:41Z)
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
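The resulting program type can be sketched with cvxpy: minimize the control norm subject to a probabilistic CLF decrease condition, where GP uncertainty enters as a second-order cone term. The posterior means, covariance, and confidence multiplier below are made-up stand-ins, and this generic min-norm SOCP is not the paper's exact compound-kernel formulation.

```python
import cvxpy as cp
import numpy as np

LfV_mu = -0.3                       # assumed posterior mean of L_f V(x)
LgV_mu = np.array([0.8, -0.2])      # assumed posterior mean of L_g V(x)
V, gamma = 1.5, 2.0                 # Lyapunov value and desired decay rate
Sigma_sqrt = 0.1 * np.eye(3)        # sqrt covariance of [L_f V, L_g V]
k_conf = 2.0                        # confidence multiplier

u = cp.Variable(2)
slack = cp.Variable(nonneg=True)    # relaxation keeping the SOCP feasible

# Mean CLF decrease condition L_f V + L_g V u + gamma V <= slack, tightened
# by an SOC term accounting for GP uncertainty in both L_f V and L_g V.
mean_decrease = LfV_mu + LgV_mu @ u + gamma * V
z = cp.hstack([np.ones(1), u])      # uncertainty acts on the vector [1; u]
constraints = [cp.SOC(slack - mean_decrease, k_conf * (Sigma_sqrt @ z))]

prob = cp.Problem(cp.Minimize(cp.sum_squares(u) + 1e3 * slack), constraints)
prob.solve()
print("u* =", u.value, " slack =", slack.value)
```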
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
- Neural Lyapunov Model Predictive Control: Learning Safe Global Controllers from Sub-optimal Examples [4.777323087050061]
In many real-world and industrial applications, it is typical to have an existing control strategy, for instance, one executed by a human operator.
The objective of this work is to improve upon this unknown, safe but suboptimal policy by learning a new controller that retains safety and stability.
The proposed algorithm alternately learns the terminal cost and updates the MPC parameters according to a stability metric.
arXiv Detail & Related papers (2020-02-21T16:57:38Z)