Smooth Model Predictive Control with Applications to Statistical
Learning
- URL: http://arxiv.org/abs/2306.01914v1
- Date: Fri, 2 Jun 2023 20:43:38 GMT
- Title: Smooth Model Predictive Control with Applications to Statistical
Learning
- Authors: Kwangjun Ahn, Daniel Pfrommer, Jack Umenberger, Tobia Marcucci, Zak
Mhammedi and Ali Jadbabaie
- Abstract summary: We study smooth approximations of linear Model Predictive Control (MPC) policies, in which hard constraints are replaced by barrier functions.
In particular, we show that barrier MPC inherits the exponential stability properties of the original non-smooth MPC policy.
- Score: 19.06936620903542
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Statistical learning theory and high-dimensional statistics have had a
tremendous impact on machine learning theory and on a variety of domains,
including systems and control theory. Over the past few years we have
witnessed a variety of applications of such theoretical tools to help answer
questions such as: how many state-action pairs are needed to learn a static
control policy to a given accuracy? Recent results have shown that continuously
differentiable and stabilizing control policies can be well-approximated using
neural networks with hard guarantees on performance, yet often even the
simplest constrained control problems are not smooth. To address this void, in
this paper we study smooth approximations of linear Model Predictive Control
(MPC) policies, in which hard constraints are replaced by barrier functions,
a.k.a. barrier MPC. In particular, we show that barrier MPC inherits the
exponential stability properties of the original non-smooth MPC policy. Using a
careful analysis of the proposed barrier MPC, we show that its smoothness
constant can be explicitly controlled, thereby paving the way for new sample
complexity results for approximating MPC policies from sampled state-action
pairs.
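To make the smoothing concrete, the sketch below is a minimal, illustrative barrier MPC on a toy double integrator, not the paper's construction: the hard input box |u| <= u_max is replaced by a log-barrier with weight eta, and the resulting smooth objective is minimized by a simple finite-difference gradient descent. All constants (A, B, Q, R, eta, the backtracking rule) are assumptions for illustration.

```python
import numpy as np

# Toy barrier MPC: double integrator, input box |u| <= u_max smoothed
# by a log-barrier with weight eta (all values illustrative).
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), 0.1
horizon, u_max, eta = 20, 1.0, 0.01

def barrier_cost(u_seq, x0):
    """Quadratic MPC cost with the hard input constraint replaced by a barrier."""
    x, cost = x0, 0.0
    for u in u_seq:
        if abs(u) >= u_max:                    # outside the barrier's domain
            return np.inf
        cost += x @ Q @ x + R * u * u
        cost -= eta * (np.log(u_max - u) + np.log(u_max + u))
        x = A @ x + B @ np.array([u])
    return cost + x @ Q @ x

def barrier_mpc(x0, iters=200, lr=0.02):
    """Minimize the smooth objective over the input sequence; return the first input."""
    u = np.zeros(horizon)
    for _ in range(iters):
        grad = np.zeros(horizon)
        for i in range(horizon):               # finite-difference gradient (sketch only)
            e = np.zeros(horizon)
            e[i] = 1e-5
            grad[i] = (barrier_cost(u + e, x0) - barrier_cost(u - e, x0)) / 2e-5
        step = lr
        while not np.isfinite(barrier_cost(u - step * grad, x0)):
            step *= 0.5                        # backtrack to stay strictly feasible
        u -= step * grad
    return u[0]                                # a smooth function of x0

print("barrier MPC input at [1, 0]:", barrier_mpc(np.array([1.0, 0.0])))
```

Shrinking eta tightens the approximation to the hard-constrained policy but typically increases the smoothness constant; controlling that trade-off is the subject of the paper's analysis.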
Related papers
- Learning Optimal Deterministic Policies with Stochastic Policy Gradients [62.81324245896716]
Policy gradient (PG) methods are successful approaches to deal with continuous reinforcement learning (RL) problems.
In common practice, stochastic (hyper)policies are learned only to deploy their deterministic version.
We show how to tune the exploration level used for learning to optimize the trade-off between the sample complexity and the performance of the deployed deterministic policy.
arXiv Detail & Related papers (2024-05-03T16:45:15Z)
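A toy sketch of the learn-stochastic, deploy-deterministic pattern described above (assumed setup, not the paper's algorithm): a Gaussian hyperpolicy over a scalar gain is trained with a REINFORCE-style update, where sigma is the exploration level, and only the mean is deployed. The plant, learning rate, and advantage clipping are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def episode_return(k, x=1.0, steps=20):
    """Return (negative cost) of the deterministic policy u = -k*x on a scalar plant."""
    ret = 0.0
    for _ in range(steps):
        u = -k * x
        ret -= x * x + 0.1 * u * u
        x = 1.2 * x + u                         # open-loop unstable scalar system
    return ret

mu, sigma, lr, baseline = 0.0, 0.3, 2e-4, 0.0   # sigma is the exploration level
for _ in range(3000):
    k = rng.normal(mu, sigma)                   # sample a deterministic policy to try
    ret = episode_return(k)
    baseline = 0.99 * baseline + 0.01 * ret     # running baseline reduces variance
    adv = np.clip(ret - baseline, -50.0, 50.0)  # crude clipping keeps updates stable
    mu += lr * adv * (k - mu) / sigma ** 2      # REINFORCE on the hyperpolicy mean
print("deployed deterministic gain:", round(mu, 2))  # deploy the mean, noise-free
```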
- MPC-Inspired Reinforcement Learning for Verifiable Model-Free Control [5.9867297878688195]
We introduce a new class of parameterized controllers, drawing inspiration from Model Predictive Control (MPC).
The controller resembles a Quadratic Programming (QP) solver of a linear MPC problem, with the parameters of the controller being trained via Deep Reinforcement Learning (DRL).
The proposed controller is significantly more computationally efficient than MPC and requires fewer parameters to learn than neural network-based controllers.
arXiv Detail & Related papers (2023-12-08T19:33:22Z)
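A sketch of what such a QP-inspired controller could look like (an assumed architecture, not the paper's): the forward pass unrolls a fixed number of projected-gradient steps on a learned, convex-by-construction QP, and the matrices L and F are the parameters a DRL algorithm would train.

```python
import numpy as np

# Controller whose forward pass mimics a QP solver for
#   min_u 0.5 u'Hu + (Fx)'u  s.t. |u| <= u_max
# H = LL' + eps*I is convex by construction; L and F would be trained by DRL.
n_x, n_u, u_max, n_iters = 4, 2, 1.0, 10
rng = np.random.default_rng(1)
L = 0.1 * rng.standard_normal((n_u, n_u))      # trainable parameter
F = 0.1 * rng.standard_normal((n_u, n_x))      # trainable parameter

def controller(x):
    H = L @ L.T + 1e-3 * np.eye(n_u)           # positive definite Hessian
    step = 1.0 / np.linalg.norm(H, 2)          # safe step size for this H
    u = np.zeros(n_u)
    for _ in range(n_iters):                   # unrolled projected-gradient steps
        u = u - step * (H @ u + F @ x)         # gradient of the QP objective
        u = np.clip(u, -u_max, u_max)          # projection onto the input box
    return u

print(controller(rng.standard_normal(n_x)))
```

The fixed, small iteration count is what makes such a controller cheap relative to solving the MPC QP to optimality.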
- Conformal Policy Learning for Sensorimotor Control Under Distribution Shifts [61.929388479847525]
This paper focuses on the problem of detecting and reacting to changes in the distribution of a sensorimotor controller's observables.
The key idea is the design of switching policies that can take conformal quantiles as input.
We show how to design such policies by using conformal quantiles to switch between base policies with different characteristics.
arXiv Detail & Related papers (2023-11-02T17:59:30Z)
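A minimal sketch of switching on conformal quantiles (assumed interface, not the paper's code): calibrate a (1 - alpha) quantile of nonconformity scores on in-distribution data with the usual finite-sample correction, then fall back to a conservative base policy whenever the online score exceeds it.

```python
import numpy as np

rng = np.random.default_rng(2)
alpha = 0.1                                          # target miscoverage rate
calib_scores = np.abs(rng.normal(0.0, 1.0, 500))     # in-distribution nonconformity scores
n = len(calib_scores)
level = np.ceil((n + 1) * (1 - alpha)) / n           # finite-sample conformal correction
q_hat = np.quantile(calib_scores, min(level, 1.0))

def switching_policy(score, nominal_action, safe_action):
    """Run the nominal policy while the observable's score looks in-distribution."""
    return nominal_action if score <= q_hat else safe_action

print(round(q_hat, 2),
      switching_policy(0.5, "nominal", "safe"),     # typical score -> nominal policy
      switching_policy(5.0, "nominal", "safe"))     # outlier score -> safe fallback
```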
- Adaptive Stochastic MPC under Unknown Noise Distribution [19.03553854357296]
We address the MPC problem for linear systems, subject to chance state constraints and hard input constraints, under an unknown noise distribution.
We design a distributionally robust and robustly stable benchmark SMPC algorithm for the ideal setting of known noise statistics.
We employ this benchmark controller to derive a novel adaptive SMPC scheme that learns the necessary noise statistics online.
arXiv Detail & Related papers (2022-04-03T16:35:18Z)
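One simple way to learn noise statistics online for a linear model, sketched under assumptions (the paper's scheme is more sophisticated): recover noise samples w_t = x_{t+1} - A x_t - B u_t from observed transitions and use their empirical quantiles to tighten a chance constraint.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
eps = 0.1                                      # allowed constraint-violation level
noise_samples = []

def observe(x, u, x_next):
    """Back out the realized disturbance from one observed transition."""
    noise_samples.append(x_next - A @ x - B @ u)

def tightening():
    """Per-dimension empirical (1 - eps)-quantile of the collected disturbances."""
    return np.quantile(np.array(noise_samples), 1 - eps, axis=0)

rng = np.random.default_rng(3)
x = np.zeros(2)
for _ in range(200):                           # closed-loop data collection
    u = np.array([0.0])
    x_next = A @ x + B @ u + rng.normal(0, 0.05, 2)   # true noise, unknown a priori
    observe(x, u, x_next)
    x = x_next
print("estimated constraint tightening:", tightening())
```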
- Evaluating model-based planning and planner amortization for continuous control [79.49319308600228]
We take a hybrid approach, combining model predictive control (MPC) with a learned model and model-free policy learning.
We find that well-tuned model-free agents are strong baselines even for high DoF control problems.
We show that it is possible to distil a model-based planner into a policy that amortizes the planning without any loss of performance.
arXiv Detail & Related papers (2021-10-07T12:00:40Z)
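The amortization step can be pictured as plain regression onto planner outputs; below is a minimal sketch with a stand-in linear "planner" and a least-squares policy (illustrative only, not the paper's agents).

```python
import numpy as np

rng = np.random.default_rng(4)

def expensive_planner(x):                       # stand-in for MPC with a learned model
    return -np.array([0.8, 1.3]) @ x

X = rng.standard_normal((1000, 2))              # states where the planner was queried
y = np.array([expensive_planner(x) for x in X]) # planner actions as regression targets
K, *_ = np.linalg.lstsq(X, y, rcond=None)       # distilled linear policy u = K @ x

x_test = rng.standard_normal(2)
print(expensive_planner(x_test), K @ x_test)    # distilled policy matches the planner
```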
- Imitation Learning from MPC for Quadrupedal Multi-Gait Control [63.617157490920505]
We present a learning algorithm for training a single policy that imitates multiple gaits of a walking robot.
We use and extend MPC-Net, which is an Imitation Learning approach guided by Model Predictive Control.
We validate our approach on hardware and show that a single learned policy can replace its teacher to control multiple gaits.
arXiv Detail & Related papers (2021-03-26T08:48:53Z)
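A toy sketch of the single-policy, multi-gait idea (assumed setup, not MPC-Net): one regression policy imitates two gait-specific teachers by gating the state features with a one-hot gait code.

```python
import numpy as np

rng = np.random.default_rng(5)
gait_gains = {0: np.array([0.5, 0.9]), 1: np.array([1.1, 0.2])}   # two "gaits"

def teacher(x, gait):                          # stand-in for a gait-specific MPC expert
    return -gait_gains[gait] @ x

def features(x, gait):
    one_hot = np.eye(2)[gait]
    return np.concatenate([one_hot * x[0], one_hot * x[1]])   # gait-gated state features

X, y = [], []
for _ in range(1000):
    x, gait = rng.standard_normal(2), int(rng.integers(2))
    X.append(features(x, gait))
    y.append(teacher(x, gait))
W, *_ = np.linalg.lstsq(np.array(X), np.array(y), rcond=None)

x = rng.standard_normal(2)
print(teacher(x, 1), W @ features(x, 1))       # one policy reproduces both teachers
```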
- Stein Variational Model Predictive Control [130.60527864489168]
Decision making under uncertainty is critical to real-world, autonomous systems.
Model Predictive Control (MPC) methods have demonstrated favorable performance in practice, but remain limited when dealing with complex distributions.
We show that this framework leads to successful planning in challenging, nonconvex optimal control problems.
arXiv Detail & Related papers (2020-11-15T22:36:59Z)
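The core mechanism can be sketched on a toy nonconvex landscape (assumed problem, not the paper's MPC formulation): Stein variational gradient descent updates a set of candidate-action particles with kernel-smoothed gradients plus a repulsive term, so the particles cover both optima instead of collapsing to one.

```python
import numpy as np

rng = np.random.default_rng(6)
goals = np.array([[-1.0, 0.0], [1.0, 0.0]])   # two optima -> nonconvex landscape

def grad_log_p(a):
    """Gradient of log exp(-cost); cost = squared distance to the nearest goal."""
    g = goals[np.argmin(((goals - a) ** 2).sum(axis=1))]
    return -2.0 * (a - g)

particles = rng.normal(0.0, 0.5, size=(30, 2))
h, step = 0.1, 0.05
for _ in range(300):
    diffs = particles[:, None, :] - particles[None, :, :]   # diffs[i, j] = x_i - x_j
    k = np.exp(-(diffs ** 2).sum(-1) / (2 * h))             # RBF kernel matrix
    glogp = np.array([grad_log_p(a) for a in particles])
    attract = k @ glogp / len(particles)                    # kernel-smoothed gradients
    repulse = (k[:, :, None] * diffs).sum(1) / (h * len(particles))  # keeps particles spread
    particles += step * (attract + repulse)
left = int((particles[:, 0] < 0).sum())
print(f"{left} particles at the left goal, {30 - left} at the right")
```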
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
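A plausible form of such a compound kernel, sketched under assumptions (kernel choices and lengthscales are illustrative): because the dynamics f(x) + g(x)u are linear in u, the kernel combines a drift kernel with u-weighted kernels for the columns of g, and the resulting Gram matrix stays positive semidefinite.

```python
import numpy as np

def rbf(x, y, ell=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * ell ** 2))

def compound_kernel(x, u, x2, u2):
    """k((x,u),(x',u')) = k_f(x,x') + sum_i u_i u'_i k_{g_i}(x,x')."""
    k_f = rbf(x, x2)                               # kernel modelling the drift term f
    k_g = sum(u[i] * u2[i] * rbf(x, x2, ell=0.5)   # one kernel per column of g
              for i in range(len(u)))
    return k_f + k_g

rng = np.random.default_rng(7)
X = rng.standard_normal((5, 2))                    # states
U = rng.standard_normal((5, 1))                    # inputs
K = np.array([[compound_kernel(X[i], U[i], X[j], U[j]) for j in range(5)]
              for i in range(5)])
print("Gram matrix PSD:", bool(np.all(np.linalg.eigvalsh(K) > -1e-9)))
```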
- Heteroscedastic Bayesian Optimisation for Stochastic Model Predictive Control [23.180330602334223]
Model predictive control (MPC) has been successful in applications involving the control of complex physical systems.
We investigate fine-tuning MPC methods in the context of stochastic MPC, which presents extra challenges due to the randomness of the controller's actions.
arXiv Detail & Related papers (2020-10-01T05:31:41Z)
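A heavily simplified stand-in for heteroscedastic Bayesian optimisation (not the paper's method): repeated noisy rollouts of each candidate parameter give a mean objective and a per-point noise level, which enters the GP's diagonal; the next candidate is chosen by a lower confidence bound.

```python
import numpy as np

rng = np.random.default_rng(8)

def rollout_cost(theta):                        # noisy controller-tuning objective
    noise = 0.05 + 0.2 * abs(theta)             # heteroscedastic: noise depends on theta
    return (theta - 0.7) ** 2 + rng.normal(0, noise)

def rbf(a, b):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / 0.2 ** 2)

grid = np.linspace(-1.0, 2.0, 61)
thetas, means, nvars = [], [], []
for _ in range(15):
    if thetas:
        X = np.array(thetas)
        K = rbf(X, X) + np.diag(nvars)          # per-point noise on the GP diagonal
        Ks = rbf(grid, X)
        mu = Ks @ np.linalg.solve(K, np.array(means))
        var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
        nxt = grid[np.argmin(mu - 2.0 * np.sqrt(np.maximum(var, 0.0)))]  # LCB choice
    else:
        nxt = rng.choice(grid)                  # first point picked at random
    samples = [rollout_cost(nxt) for _ in range(5)]   # repeated noisy rollouts
    thetas.append(nxt)
    means.append(np.mean(samples))
    nvars.append(np.var(samples) / 5 + 1e-6)    # noise level of the mean estimate
print("tuned parameter:", thetas[int(np.argmin(means))])
```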
- Learning Constrained Adaptive Differentiable Predictive Control Policies With Guarantees [1.1086440815804224]
We present differentiable predictive control (DPC), a method for learning constrained neural control policies for linear systems.
We employ automatic differentiation to obtain direct policy gradients by backpropagating the model predictive control (MPC) loss function and constraints penalties through a differentiable closed-loop system dynamics model.
arXiv Detail & Related papers (2020-04-23T14:24:44Z)
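A toy version of the DPC recipe (assumed setup; the paper uses automatic differentiation, for which finite differences stand in here): roll a linear policy through a known model, penalize the MPC-style cost plus constraint violations, and descend the resulting policy gradient.

```python
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
u_max = 0.5                                     # input constraint, enforced by penalty

def closed_loop_loss(K, T=30):
    """MPC-style cost of u = -K x plus a penalty on input-constraint violations."""
    x, loss = np.array([1.0, 0.0]), 0.0
    for _ in range(T):
        u = -K @ x
        loss += x @ x + 0.1 * float(u @ u)
        loss += 10.0 * float(np.maximum(np.abs(u) - u_max, 0.0).sum()) ** 2
        x = A @ x + B @ u
    return loss

K = np.zeros((1, 2))                            # policy parameters to learn
for _ in range(300):
    grad = np.zeros_like(K)
    for idx in np.ndindex(*K.shape):            # finite differences stand in for autodiff
        E = np.zeros_like(K)
        E[idx] = 1e-5
        grad[idx] = (closed_loop_loss(K + E) - closed_loop_loss(K - E)) / 2e-5
    K -= 1e-3 * np.clip(grad, -100.0, 100.0)    # clipped gradient step on the policy
print("learned constrained gain:", np.round(K, 2))
```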
- ABC-LMPC: Safe Sample-Based Learning MPC for Stochastic Nonlinear Dynamical Systems with Adjustable Boundary Conditions [34.44010424789202]
We present a novel LMPC algorithm, Adjustable Boundary Condition LMPC (ABC-LMPC), which enables rapid adaptation to novel start and goal configurations.
We experimentally demonstrate that the resulting controller adapts to a variety of initial and terminal conditions on 3 continuous control tasks.
arXiv Detail & Related papers (2020-03-03T09:48:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.