Model-Free Predictive Control: Introductory Algebraic Calculations, and a Comparison with HEOL and ANNs
- URL: http://arxiv.org/abs/2502.00443v1
- Date: Sat, 01 Feb 2025 14:23:34 GMT
- Title: Model-Free Predictive Control: Introductory Algebraic Calculations, and a Comparison with HEOL and ANNs
- Authors: Cédric Join, Emmanuel Delaleau, Michel Fliess
- Abstract summary: Model-free predictive control is reformulated here via a linear differential equation with constant coefficients.
It replaces Dynamic Programming, the Hamilton-Jacobi-Bellman equation, and Pontryagin's Maximum Principle.
A recent identification of the two-tank system via a complex ANN architecture might indicate that full modeling and the corresponding machine learning mechanism are not always necessary, either in control or, more generally, in AI.
- Score: 0.1474723404975345
- License:
- Abstract: Model predictive control (MPC) is a popular control engineering practice, but it requires sound knowledge of the model. Model-free predictive control (MFPC), a burning issue today that is also related to reinforcement learning (RL) in AI, is reformulated here via a linear differential equation with constant coefficients, thanks to a new perspective on optimal control combined with recent advances in the field of model-free control. It replaces Dynamic Programming, the Hamilton-Jacobi-Bellman equation, and Pontryagin's Maximum Principle. The computing burden is low. The implementation is straightforward. Two nonlinear examples, a chemical reactor and a two-tank system, illustrate our approach. A comparison with the HEOL setting, where some expertise of the process model is needed, shows only a slight superiority of the latter. A recent identification of the two-tank system via a complex ANN architecture might indicate that full modeling and the corresponding machine learning mechanism are not always necessary, either in control or, more generally, in AI.
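As background for readers unfamiliar with model-free control, the sketch below illustrates the classical ultra-local model y' ≈ F + α·u with an "intelligent proportional" feedback, where F lumps the unknown dynamics and is re-estimated at each sampling instant from input-output data. This is only a minimal illustration of the model-free philosophy the abstract refers to, not the paper's MFPC reformulation or its HEOL comparison; the toy tank dynamics, the gains α and K_P, and the set-point are assumptions made for the example.

```python
# Minimal sketch of model-free control with an "intelligent P" (iP) controller
# built on the ultra-local model  y_dot ~= F + alpha * u, where F lumps the
# unknown dynamics and is re-estimated at every step from measurements.
# Illustrative only: NOT the paper's MFPC reformulation; alpha, K_P, the toy
# plant, and the set-point below are arbitrary choices.

import numpy as np

dt = 0.01          # sampling period [s]
alpha = 1.0        # scaling constant of the ultra-local model (tuning choice)
K_P = 2.0          # proportional gain of the intelligent P controller

def plant(x, u):
    """Toy nonlinear single-tank level dynamics (illustrative, not the paper's model)."""
    return -0.5 * np.sqrt(max(x, 0.0)) + u

T = 10.0
n = int(T / dt)
y = 0.2                          # measured output (tank level)
u = 0.0
y_prev = y
log = []

for k in range(n):
    t = k * dt
    y_ref = 1.0                  # constant set-point (could be any smooth trajectory)
    y_ref_dot = 0.0

    # Estimate F from the latest measurement and the previously applied input.
    y_dot_est = (y - y_prev) / dt
    F_hat = y_dot_est - alpha * u

    # Intelligent P control law: cancel F_hat, track y_ref.
    e = y - y_ref
    u = (y_ref_dot - F_hat - K_P * e) / alpha

    # Apply the input to the plant (unknown to the controller), Euler step.
    y_prev = y
    y = y + dt * plant(y, u)
    log.append((t, y, u))

print("final level: %.3f (set-point 1.0)" % log[-1][1])
```

The point carried over from the abstract is that no process model enters the control law: only the measured output and the previously applied input are used to update the estimate of F online, which keeps the computing burden low.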
Related papers
- Data-driven Nonlinear Model Reduction using Koopman Theory: Integrated Control Form and NMPC Case Study [56.283944756315066]
We propose generic model structures combining delay-coordinate encoding of measurements and full-state decoding to integrate reduced Koopman modeling and state estimation.
A case study demonstrates that our approach provides accurate control models and enables real-time capable nonlinear model predictive control of a high-purity cryogenic distillation column.
arXiv Detail & Related papers (2024-01-09T11:54:54Z) - AI Enhanced Control Engineering Methods [66.08455276899578]
We explore how AI tools can be useful in control applications.
Two immediate applications are linearization of system dynamics for local stability analysis and for state estimation using Kalman filters.
In addition, we explore the use of machine learning models for global parameterizations of state vectors and control inputs in model predictive control applications.
arXiv Detail & Related papers (2023-06-08T20:31:14Z) - Sample-efficient Model-based Reinforcement Learning for Quantum Control [0.2999888908665658]
We propose a model-based reinforcement learning (RL) approach for noisy time-dependent gate optimization.
We show an order of magnitude advantage in the sample complexity of our method over standard model-free RL.
Our algorithm is well suited for controlling partially characterised one- and two-qubit systems.
arXiv Detail & Related papers (2023-04-19T15:05:19Z) - Learning Residual Model of Model Predictive Control via Random Forests for Autonomous Driving [13.865293598486492]
One major issue in model predictive control (MPC) for autonomous driving is the trade-off between the system model's prediction accuracy and its computational cost.
This paper reformulates the MPC tracking-accuracy problem as a quadratic program (QP), which can be solved effectively.
arXiv Detail & Related papers (2023-04-10T03:32:09Z) - Predictable MDP Abstraction for Unsupervised Model-Based RL [93.91375268580806]
We propose predictable MDP abstraction (PMA).
Instead of training a predictive model on the original MDP, we train a model on a transformed MDP with a learned action space.
We theoretically analyze PMA and empirically demonstrate that PMA leads to significant improvements over prior unsupervised model-based RL approaches.
arXiv Detail & Related papers (2023-02-08T07:37:51Z) - Oracle Inequalities for Model Selection in Offline Reinforcement Learning [105.74139523696284]
We study the problem of model selection in offline RL with value function approximation.
We propose the first model selection algorithm for offline RL that achieves minimax rate-optimal oracle inequalities up to logarithmic factors.
We conclude with several numerical simulations showing it is capable of reliably selecting a good model class.
arXiv Detail & Related papers (2022-11-03T17:32:34Z) - Comparative analysis of machine learning methods for active flow control [60.53767050487434]
Genetic Programming (GP) and Reinforcement Learning (RL) are gaining popularity in flow control.
This work presents a comparative analysis of the two, benchmarking some of their most representative algorithms against global optimization techniques.
arXiv Detail & Related papers (2022-02-23T18:11:19Z) - Physics-informed linear regression is a competitive approach compared to Machine Learning methods in building MPC [0.8135412538980287]
We show that control in general leads to satisfactory reductions in heating and cooling energy compared to the building's baseline controller.
We also see that the physics-informed ARMAX models have a lower computational burden and superior sample efficiency compared to the Machine Learning-based models.
arXiv Detail & Related papers (2021-10-29T16:56:05Z) - Bellman: A Toolbox for Model-Based Reinforcement Learning in TensorFlow [14.422129911404472]
Bellman aims to fill this gap and introduces the first thoroughly designed and tested model-based RL toolbox.
Our modular approach makes it possible to combine a wide range of environment models with generic model-based agent classes that recover state-of-the-art algorithms.
arXiv Detail & Related papers (2021-03-26T11:32:27Z) - Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)