Approximate Robust NMPC using Reinforcement Learning
- URL: http://arxiv.org/abs/2104.02743v1
- Date: Tue, 6 Apr 2021 18:34:58 GMT
- Title: Approximate Robust NMPC using Reinforcement Learning
- Authors: Hossein Nejatbakhsh Esfahani, Arash Bahari Kordabad, Sebastien Gros
- Abstract summary: We present a Reinforcement Learning-based Robust Nonlinear Model Predictive Control (RL-RNMPC) framework for controlling nonlinear systems in the presence of disturbances and uncertainties.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a Reinforcement Learning-based Robust Nonlinear Model Predictive Control (RL-RNMPC) framework for controlling nonlinear systems in the presence of disturbances and uncertainties. An approximate Robust Nonlinear Model Predictive Control (RNMPC) scheme of low computational complexity is used, in which the state-trajectory uncertainty is modelled via ellipsoids. Reinforcement Learning is then used to handle the ellipsoidal approximation and improve the closed-loop performance of the scheme by adjusting the MPC parameters that generate the ellipsoids. The approach is tested on a simulated Wheeled Mobile Robot (WMR) tracking a desired trajectory while avoiding static obstacles.
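The abstract does not spell out the ellipsoid recursion, so the following is a minimal numpy sketch of one common low-complexity construction: propagating uncertainty shape matrices along a linearized horizon with an outer approximation of the Minkowski sum, then tightening half-space constraints by the ellipsoid support. The scalar c and the helper names are illustrative stand-ins for the MPC parameters an RL agent would tune, not the paper's actual parameterization.

```python
import numpy as np

def propagate_ellipsoids(A, W, P0, N, c):
    """Propagate state-uncertainty ellipsoids x_k in E(P_k) over a horizon.

    Uses the standard outer approximation of the Minkowski sum of two
    ellipsoids: P_{k+1} = (1 + c) A P_k A^T + (1 + 1/c) W, where c > 0 is a
    shaping parameter -- the kind of quantity an RL agent could adjust to
    trade conservatism against constraint violations."""
    P = [P0]
    for _ in range(N):
        P.append((1.0 + c) * A @ P[-1] @ A.T + (1.0 + 1.0 / c) * W)
    return P

def tightened_halfspace_bound(h, b, P):
    """Tighten a half-space constraint h^T x <= b by the ellipsoid support."""
    return b - np.sqrt(h @ P @ h)

# toy double-integrator example
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
W = 1e-3 * np.eye(2)      # disturbance ellipsoid shape matrix
P0 = 1e-4 * np.eye(2)     # initial state uncertainty
ells = propagate_ellipsoids(A, W, P0, N=10, c=0.5)
print(tightened_halfspace_bound(np.array([1.0, 0.0]), 1.0, ells[-1]))
```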
Related papers
- Neural Internal Model Control: Learning a Robust Control Policy via Predictive Error Feedback [16.46487826869775]
We propose a novel framework, Neural Internal Model Control, which integrates model-based control with RL-based control to enhance robustness.
Our framework streamlines the predictive model by applying Newton-Euler equations for rigid-body dynamics, eliminating the need to capture complex high-dimensional nonlinearities.
We demonstrate the effectiveness of our framework on both quadrotors and quadrupedal robots, achieving superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-11-20T07:07:42Z)
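A hedged sketch of the two ingredients this abstract names: a Newton-Euler rigid-body predictor and feedback of the predictive error. The state layout, the world-frame thrust simplification, and the gain k_fb are assumptions for illustration, not the paper's architecture.

```python
import numpy as np

def newton_euler_step(x, u, m, J, dt, g=9.81):
    """One-step rigid-body prediction via Newton-Euler. State x = (p, v, omega);
    input u = (thrust, tau_x, tau_y, tau_z). Attitude bookkeeping is omitted
    and thrust is taken as world-frame vertical -- simplifications for brevity."""
    p, v, omega = x[:3], x[3:6], x[6:9]
    f_world = np.array([0.0, 0.0, u[0] - m * g])
    p_next = p + dt * v
    v_next = v + dt * f_world / m
    # Euler's rotation equation: J * domega = tau - omega x (J * omega)
    omega_next = omega + dt * np.linalg.solve(J, u[1:] - np.cross(omega, J @ omega))
    return np.concatenate([p_next, v_next, omega_next])

def internal_model_feedback(x_meas, x_pred, k_fb):
    """IMC-style correction: feed the prediction error back into the policy
    input (k_fb is a hypothetical gain, not from the paper)."""
    return k_fb * (x_meas - x_pred)
```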
- Custom Non-Linear Model Predictive Control for Obstacle Avoidance in Indoor and Outdoor Environments [0.0]
This paper introduces a Non-linear Model Predictive Control (NMPC) framework for the DJI Matrice 100.
The framework supports various trajectory types and employs a penalty-based cost function for control accuracy in tight maneuvers.
arXiv Detail & Related papers (2024-10-03T17:50:19Z)
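The abstract mentions a penalty-based cost but not its exact form; a generic NMPC stage cost with soft obstacle penalties might look like the sketch below (Q, R, rho, and r_safe are hypothetical tuning quantities).

```python
import numpy as np

def stage_cost(x, u, x_ref, obstacles, Q, R, rho, r_safe):
    """Tracking cost plus soft obstacle penalties -- a generic sketch; the
    paper's exact weights and penalty shape are not given in the abstract."""
    e = x[:2] - x_ref[:2]
    cost = e @ Q @ e + u @ R @ u
    for c in obstacles:                              # c: obstacle center (2,)
        dist = np.linalg.norm(x[:2] - c)
        cost += rho * max(0.0, r_safe - dist) ** 2   # penalize margin violation
    return cost
```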
- Efficient model predictive control for nonlinear systems modelled by deep neural networks [6.5268245109828005]
This paper presents a model predictive control (MPC) scheme for dynamic systems whose nonlinearity and uncertainty are modelled by deep neural networks (NNs).
Since the NN output contains a high-order, complex nonlinearity in the system state and control input, the MPC problem is nonlinear and challenging to solve for real-time control.
arXiv Detail & Related papers (2024-05-16T18:05:18Z)
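Because the NN makes the optimization nonconvex, one generic (and decidedly not real-time) baseline is sampling-based MPC over the learned model; the sketch below illustrates the difficulty the paper addresses rather than its proposed solver.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_dynamics(x, u, W1, b1, W2, b2):
    """Stand-in learned model: x_next = x + MLP([x; u]) (residual form)."""
    h = np.tanh(W1 @ np.concatenate([x, u]) + b1)
    return x + W2 @ h + b2

def shooting_mpc(x0, params, horizon=10, n_samples=256, u_max=1.0):
    """Random-shooting MPC over the NN model: evaluate random input sequences
    and return the first input of the cheapest one."""
    best_cost, best_u0 = np.inf, None
    for _ in range(n_samples):
        us = rng.uniform(-u_max, u_max, size=(horizon, 1))
        x, cost = x0, 0.0
        for u in us:
            x = nn_dynamics(x, u, *params)
            cost += x @ x + 0.1 * u @ u
        if cost < best_cost:
            best_cost, best_u0 = cost, us[0]
    return best_u0

n, m, width = 2, 1, 16
params = (0.1 * rng.standard_normal((width, n + m)), np.zeros(width),
          0.1 * rng.standard_normal((n, width)), np.zeros(n))
print(shooting_mpc(np.array([1.0, 0.0]), params))
```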
- Parameter-Adaptive Approximate MPC: Tuning Neural-Network Controllers without Retraining [50.00291020618743]
This work introduces a novel, parameter-adaptive AMPC architecture capable of online tuning without recomputing large datasets and retraining.
We showcase the effectiveness of parameter-adaptive AMPC by controlling the swing-ups of two different real cartpole systems with a severely resource-constrained microcontroller (MCU).
Taken together, these contributions represent a marked step toward the practical application of AMPC in real-world systems.
arXiv Detail & Related papers (2024-04-08T20:02:19Z)
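The core idea, as far as the abstract states it, is to make the imitation network a function of the MPC tuning parameters as well as the state, so retuning is just a change of inputs. A minimal sketch; the two-layer architecture, dimensions, and the meaning of theta are assumptions.

```python
import numpy as np

def ampc_policy(x, theta, weights):
    """Approximate-MPC policy network that takes the MPC tuning parameters
    theta (e.g., cost weights) as extra inputs, so changing theta online
    re-tunes the controller without retraining."""
    W1, b1, W2, b2 = weights
    h = np.tanh(W1 @ np.concatenate([x, theta]) + b1)
    return W2 @ h + b2

rng = np.random.default_rng(0)
weights = (0.1 * rng.standard_normal((32, 4)), np.zeros(32),
           0.1 * rng.standard_normal((1, 32)), np.zeros(1))
u = ampc_policy(np.array([0.3, -0.1]), np.array([1.0, 0.05]), weights)
```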
- Data-driven Nonlinear Model Reduction using Koopman Theory: Integrated Control Form and NMPC Case Study [56.283944756315066]
We propose generic model structures combining delay-coordinate encoding of measurements and full-state decoding to integrate reduced Koopman modeling and state estimation.
A case study demonstrates that our approach provides accurate control models and enables real-time capable nonlinear model predictive control of a high-purity cryogenic distillation column.
arXiv Detail & Related papers (2024-01-09T11:54:54Z)
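A plain delay-coordinate Koopman fit conveys the flavor: embed past outputs, then identify a linear model by least squares (EDMD-style). The paper's encoder/decoder structures are richer than this sketch.

```python
import numpy as np

def fit_delay_koopman(Y, U, d):
    """Fit a linear model z_{k+1} = A z_k + B u_k on delay-embedded outputs
    z_k = [y_k, ..., y_{k-d+1}] via least squares. Y is a list of output
    vectors, U the matching list of input vectors."""
    Z = np.stack([np.concatenate(Y[k - d + 1:k + 1][::-1])
                  for k in range(d - 1, len(Y) - 1)])
    Znext = np.stack([np.concatenate(Y[k - d + 2:k + 2][::-1])
                      for k in range(d - 1, len(Y) - 1)])
    Uk = np.stack(U[d - 1:len(Y) - 1])
    # solve [Z Uk] @ AB = Znext in the least-squares sense
    AB, *_ = np.linalg.lstsq(np.hstack([Z, Uk]), Znext, rcond=None)
    A, B = AB[:Z.shape[1]].T, AB[Z.shape[1]:].T
    return A, B
```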
- Learning Over Contracting and Lipschitz Closed-Loops for Partially-Observed Nonlinear Systems (Extended Version) [1.2430809884830318]
This paper presents a policy parameterization for learning-based control on nonlinear, partially-observed dynamical systems.
We prove that the resulting Youla-REN parameterization automatically satisfies stability (contraction) and user-tunable robustness (Lipschitz) conditions.
We find that the Youla-REN performs similarly to existing learning-based and optimal control methods while also ensuring stability and exhibiting improved robustness to adversarial disturbances.
arXiv Detail & Related papers (2023-04-12T23:55:56Z)
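The Youla-REN construction itself is involved; the toy sketch below only illustrates the weaker idea of a user-tunable Lipschitz (robustness) bound, enforced here by spectral normalization, which is a different and much simpler mechanism than the paper's parameterization.

```python
import numpy as np

def spectral_norm(W, n_iter=50):
    """Estimate the largest singular value of W by power iteration."""
    v = np.random.default_rng(1).standard_normal(W.shape[1])
    for _ in range(n_iter):
        u = W @ v; u /= np.linalg.norm(u)
        v = W.T @ u; v /= np.linalg.norm(v)
    return u @ W @ v

def lipschitz_mlp(x, weights, gamma):
    """MLP with Lipschitz constant bounded by gamma: each of the L layers is
    rescaled to gain gamma**(1/L), and tanh is 1-Lipschitz, so the
    composition's gain is at most gamma."""
    L = len(weights)
    for W, b in weights:
        x = np.tanh((gamma ** (1.0 / L)) * W @ x / spectral_norm(W) + b)
    return x
```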
- Deep Learning Explicit Differentiable Predictive Control Laws for Buildings [1.4121977037543585]
We present a differentiable predictive control (DPC) methodology for learning constrained control laws for unknown nonlinear systems.
DPC poses an approximate solution to multiparametric programming problems emerging from explicit nonlinear model predictive control (MPC).
arXiv Detail & Related papers (2021-07-25T16:47:57Z)
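A miniature of the DPC idea under toy assumptions (linear dynamics, quadratic cost, a box input constraint, not the paper's building models): roll the model through the horizon inside the computational graph and train the policy on the resulting loss.

```python
import torch

A = torch.tensor([[1.0, 0.1], [0.0, 1.0]])
B = torch.tensor([[0.0], [0.1]])
policy = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                             torch.nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(500):
    x = 2.0 * torch.rand(64, 2) - 1.0          # batch of sampled initial states
    loss = 0.0
    for _ in range(20):                        # prediction horizon
        u = policy(x)
        x = x @ A.T + u @ B.T
        loss = loss + (x ** 2).sum(-1).mean() + 0.1 * (u ** 2).sum(-1).mean()
        # soft penalty enforcing the input constraint |u| <= 1
        loss = loss + 10.0 * torch.relu(u.abs() - 1.0).sum(-1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```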
- Gaussian Process-based Min-norm Stabilizing Controller for Control-Affine Systems with Uncertain Input Effects and Dynamics [90.81186513537777]
We propose a novel compound kernel that captures the control-affine nature of the problem.
We show that the resulting optimization problem is convex, and we call it the Gaussian Process-based Control Lyapunov Function Second-Order Cone Program (GP-CLF-SOCP).
arXiv Detail & Related papers (2020-11-14T01:27:32Z)
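For a control-affine model h(x, u) = f(x) + g(x) u with independent GP priors on f and the columns of g, the induced kernel decomposes as below; this is a standard construction for such systems, and the paper's exact hyperparameterization may differ.

```python
import numpy as np

def rbf(x, xp, ell=1.0):
    """Squared-exponential base kernel."""
    return np.exp(-np.sum((x - xp) ** 2) / (2.0 * ell ** 2))

def compound_kernel(x, u, xp, up, ell_f=1.0, ell_g=1.0):
    """Kernel for h(x, u) = f(x) + g(x) u with independent GP priors on f and
    each column of g: k((x,u),(x',u')) = k_f(x,x') + (u . u') k_g(x,x')."""
    return rbf(x, xp, ell_f) + float(u @ up) * rbf(x, xp, ell_g)
```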
- Reinforcement Learning for Low-Thrust Trajectory Design of Interplanetary Missions [77.34726150561087]
This paper investigates the use of reinforcement learning for the robust design of interplanetary trajectories in the presence of severe disturbances.
An open-source implementation of the state-of-the-art algorithm Proximal Policy Optimization is adopted.
The resulting Guidance and Control Network provides both a robust nominal trajectory and the associated closed-loop guidance law.
arXiv Detail & Related papers (2020-08-19T15:22:15Z)
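A minimal reproduction of the training setup described, using the open-source stable-baselines3 PPO implementation on a placeholder Gymnasium environment; the library choice is an assumption, and the actual low-thrust transfer environment is custom and not reproduced here.

```python
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("Pendulum-v1")             # stand-in for the transfer dynamics
model = PPO("MlpPolicy", env, verbose=0)
model.learn(total_timesteps=100_000)

# the trained network then acts as the closed-loop guidance law
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```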
- Reinforcement Learning for Safety-Critical Control under Model Uncertainty, using Control Lyapunov Functions and Control Barrier Functions [96.63967125746747]
The reinforcement learning framework learns the model uncertainty present in the CBF and CLF constraints.
The resulting RL-CBF-CLF-QP controller addresses the problem of model uncertainty in the safety constraints.
arXiv Detail & Related papers (2020-04-16T10:51:33Z)
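Stripped of the learned uncertainty terms, the safety-filter core is a small QP; with a single affine barrier constraint it reduces to the closed-form projection sketched below (the RL-learned corrections to a and b are omitted).

```python
import numpy as np

def cbf_qp_filter(u_nom, a, b):
    """Minimal CBF-QP safety filter: min ||u - u_nom||^2 s.t. a^T u >= b,
    where a, b come from the barrier condition Lf h + Lg h u + alpha(h) >= 0
    (so a = (Lg h)^T and b = -Lf h - alpha(h))."""
    slack = a @ u_nom - b
    if slack >= 0.0:
        return u_nom                         # nominal input already safe
    return u_nom - slack * a / (a @ a)       # project onto the constraint boundary
```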
- Adaptive Control and Regret Minimization in Linear Quadratic Gaussian (LQG) Setting [91.43582419264763]
We propose LqgOpt, a novel reinforcement learning algorithm based on the principle of optimism in the face of uncertainty.
LqgOpt efficiently explores the system dynamics, estimates the model parameters up to their confidence interval, and deploys the controller of the most optimistic model.
arXiv Detail & Related papers (2020-03-12T19:56:38Z)
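A caricature of the optimism principle for a scalar system, assuming a made-up confidence radius beta: estimate the parameter, form its interval, and act as if the most favorable admissible parameter were true. LqgOpt itself handles full LQG systems with partial observation.

```python
import numpy as np

def optimistic_parameter(phis, xs_next, beta):
    """For x_{k+1} = theta * phi_k + noise: least-squares estimate, a
    confidence interval of half-width beta / sqrt(sum phi^2), then selection
    of the admissible parameter promising the cheapest stabilization (here,
    heuristically, the smallest |theta|)."""
    phis, xs_next = np.asarray(phis), np.asarray(xs_next)
    gram = phis @ phis
    theta_hat = (phis @ xs_next) / gram
    half_width = beta / np.sqrt(gram)
    candidates = np.linspace(theta_hat - half_width, theta_hat + half_width, 101)
    return candidates[np.argmin(np.abs(candidates))]
```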
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.