An Automatic Tuning MPC with Application to Ecological Cruise Control
- URL: http://arxiv.org/abs/2309.09358v1
- Date: Sun, 17 Sep 2023 19:49:47 GMT
- Title: An Automatic Tuning MPC with Application to Ecological Cruise Control
- Authors: Mohammad Abtahi, Mahdis Rabbani, and Shima Nazari
- Abstract summary: We present an approach for online automatic tuning of an MPC controller, with an example application to an ecological cruise control system.
We solve the global fuel consumption minimization problem offline using dynamic programming and find the corresponding MPC cost function.
A neural network fitted to these offline results is used to generate the desired MPC cost function weight during online operation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model predictive control (MPC) is a powerful tool for planning and
controlling dynamical systems due to its capacity for handling constraints and
taking advantage of preview information. Nevertheless, MPC performance is
highly dependent on the choice of cost function tuning parameters. In this
work, we demonstrate an approach for online automatic tuning of an MPC
controller with an example application to an ecological cruise control system
that saves fuel by using a preview of road grade. We solve the global fuel
consumption minimization problem offline using dynamic programming and find the
corresponding MPC cost function by solving the inverse optimization problem. A
neural network fitted to these offline results is used to generate the desired
MPC cost function weight during online operation. The effectiveness of the
proposed approach is verified in simulation for different road geometries.
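The pipeline in the abstract (offline dynamic programming, inverse optimization to recover cost weights, a learned model to generate the weight online) can be sketched end to end with toy stand-ins. Below, a closed-form linear fit stands in for the paper's neural network, and the (grade, weight) pairs stand in for the offline DP and inverse-optimization results; all names and numbers are illustrative, not taken from the paper.

```python
# Hypothetical sketch of the online stage described in the abstract:
# offline results (road-grade feature -> best MPC weight) are fitted
# by a regressor (a linear model stands in for the paper's neural
# network), which then supplies the cost-function weight online.

def fit_linear(xs, ys):
    """Ordinary least squares for y ~ a*x + b (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Offline: pretend inverse optimization recovered these weights for a
# few sampled road grades (purely illustrative pairs).
grades = [-0.04, -0.02, 0.0, 0.02, 0.04]
weights = [1.2, 1.6, 2.0, 2.4, 2.8]

a, b = fit_linear(grades, weights)

def mpc_weight(grade_preview):
    """Online: generate the MPC cost weight from the previewed grade."""
    return a * grade_preview + b

print(round(mpc_weight(0.01), 2))  # weight for a 1% upgrade
```

In the paper the regressor is a neural network and the weight enters the MPC cost function; the linear model here only illustrates the offline-fit / online-query split.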
Related papers
- Parameter-Adaptive Approximate MPC: Tuning Neural-Network Controllers without Retraining [50.00291020618743]
This work introduces a novel, parameter-adaptive AMPC architecture capable of online tuning without recomputing large datasets and retraining.
We showcase the effectiveness of parameter-adaptive AMPC by controlling the swing-ups of two different real cartpole systems with a severely resource-constrained microcontroller (MCU).
Taken together, these contributions represent a marked step toward the practical application of AMPC in real-world systems.
arXiv Detail & Related papers (2024-04-08T20:02:19Z)
- On Building Myopic MPC Policies using Supervised Learning [0.0]
This paper considers an alternative strategy, where supervised learning is used to learn the optimal value function offline instead of learning the optimal policy.
This can then be used as the cost-to-go function in a myopic MPC with a very short prediction horizon.
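The myopic-MPC idea can be illustrated on a scalar toy problem. For the system x⁺ = x + u with stage cost x² + u², the exact cost-to-go is V(x) = p·x² with p = (1 + √5)/2; that known p stands in for the value function learned offline by supervised learning, and a one-step MPC using it recovers the optimal feedback. Everything below is an illustrative sketch, not the paper's implementation.

```python
import math

# Myopic (one-step) MPC using a learned value function as cost-to-go,
# sketched on a scalar linear system x+ = x + u with stage cost
# x^2 + u^2. The "learned" V(x) = P * x^2 uses the exact Riccati
# coefficient for this toy problem in place of an offline-trained model.

P = (1 + math.sqrt(5)) / 2  # exact cost-to-go coefficient here

def myopic_mpc(x, candidates):
    """One-step lookahead: minimise stage cost plus learned cost-to-go."""
    return min(candidates,
               key=lambda u: x**2 + u**2 + P * (x + u)**2)

# Fine grid of candidate inputs; the minimiser approaches the true
# optimal feedback u = -x/phi, i.e. about -0.618 * x.
us = [i / 1000 for i in range(-2000, 2001)]
u = myopic_mpc(1.0, us)
print(round(u, 3))
```

Even with a one-step horizon, the accurate cost-to-go makes the myopic controller match the infinite-horizon optimum, which is the point the summary makes.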
arXiv Detail & Related papers (2024-01-23T08:08:09Z)
- MOTO: Offline Pre-training to Online Fine-tuning for Model-based Robot Learning [52.101643259906915]
We study the problem of offline pre-training and online fine-tuning for reinforcement learning from high-dimensional observations.
Existing model-based offline RL methods are not suitable for offline-to-online fine-tuning in high-dimensional domains.
We propose an on-policy model-based method that can efficiently reuse prior data through model-based value expansion and policy regularization.
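Model-based value expansion, the technique named in this summary, can be sketched generically: a learned model rolls the current policy forward k steps, and the value target combines the simulated rewards with a discounted terminal value estimate. The model, policy, and value function below are toy stubs chosen for illustration; they are not from the paper.

```python
# Hypothetical sketch of a k-step model-based value expansion (MVE)
# target: simulated rewards from a learned model, plus a discounted
# terminal value estimate at the final imagined state.

GAMMA = 0.9

def mve_target(state, model, policy, value_fn, k):
    """k-step model-based value expansion target."""
    total, discount = 0.0, 1.0
    for _ in range(k):
        action = policy(state)
        state, reward = model(state, action)
        total += discount * reward
        discount *= GAMMA
    return total + discount * value_fn(state)

# Toy deterministic model: state decays toward 0, reward = -|state|.
model = lambda s, a: (0.5 * s + a, -abs(s))
policy = lambda s: 0.0
value_fn = lambda s: -abs(s) / (1 - GAMMA)

print(round(mve_target(1.0, model, policy, value_fn, 2), 3))
```

With k = 0 the target reduces to the plain value estimate; larger k leans more on the model, which is the trade-off MVE exposes.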
arXiv Detail & Related papers (2024-01-06T21:04:31Z)
- Reinforcement Learning with Model Predictive Control for Highway Ramp Metering [14.389086937116582]
This work explores the synergy between model-based and learning-based strategies to enhance traffic flow management.
The control problem is formulated as an RL task by crafting a suitable stage cost function.
An MPC-based RL approach, which leverages the MPC optimal problem as a function approximation for the RL algorithm, is proposed to learn to efficiently control an on-ramp.
arXiv Detail & Related papers (2023-11-15T09:50:54Z)
- Policy Search for Model Predictive Control with Application to Agile Drone Flight [56.24908013905407]
We propose a policy-search-for-model-predictive-control framework.
Specifically, we formulate the MPC as a parameterized controller, where the hard-to-optimize decision variables are represented as high-level policies.
Experiments show that our controller achieves robust and real-time control performance in both simulation and the real world.
arXiv Detail & Related papers (2021-12-07T17:39:24Z)
- Optimization of the Model Predictive Control Meta-Parameters Through Reinforcement Learning [1.4069478981641936]
We propose a novel framework in which any parameter of the control algorithm can be jointly tuned using reinforcement learning (RL).
We demonstrate our framework on the inverted pendulum control task, reducing the total time of the control system by 36% while also improving the control performance by 18.4% over the best-performing MPC baseline.
arXiv Detail & Related papers (2021-11-07T18:33:22Z)
- Non-stationary Online Learning with Memory and Non-stochastic Control [71.14503310914799]
We study the problem of Online Convex Optimization (OCO) with memory, which allows loss functions to depend on past decisions.
In this paper, we introduce dynamic policy regret as the performance measure to design algorithms robust to non-stationary environments.
We propose a novel algorithm for OCO with memory that provably enjoys an optimal dynamic policy regret in terms of time horizon, non-stationarity measure, and memory length.
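Dynamic policy regret, as typically defined in this line of work (the notation here is illustrative, not taken from the paper), measures loss with memory against a time-varying comparator sequence u_1, …, u_T rather than a single fixed decision:

```latex
\mathrm{D\text{-}Regret}_T
  = \sum_{t=1}^{T} f_t(x_{t-m}, \ldots, x_t)
  - \sum_{t=1}^{T} f_t(u_{t-m}, \ldots, u_t)
```

where m is the memory length; bounds of this kind are usually stated in terms of the horizon T, the memory m, and a non-stationarity measure such as the comparator path length \sum_{t=2}^{T} \lVert u_t - u_{t-1} \rVert.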
arXiv Detail & Related papers (2021-02-07T09:45:15Z)
- Blending MPC & Value Function Approximation for Efficient Reinforcement Learning [42.429730406277315]
Model-Predictive Control (MPC) is a powerful tool for controlling complex, real-world systems.
We present a framework for improving on MPC with model-free reinforcement learning (RL).
We show that our approach can obtain performance comparable with MPC with access to true dynamics.
arXiv Detail & Related papers (2020-12-10T11:32:01Z)
- A Learning-Based Tune-Free Control Framework for Large Scale Autonomous Driving System Deployment [5.296964852594282]
The framework consists of three machine-learning-based procedures, which jointly automate the control parameter tuning for autonomous driving.
The paper shows an improvement in control performance with a significant increase in parameter tuning efficiency, in both simulation and road tests.
arXiv Detail & Related papers (2020-11-09T08:54:36Z)
- Learning High-Level Policies for Model Predictive Control [54.00297896763184]
Model Predictive Control (MPC) provides robust solutions to robot control tasks.
We propose a self-supervised learning algorithm for learning a neural network high-level policy.
We show that our approach can handle situations that are difficult for standard MPC.
arXiv Detail & Related papers (2020-07-20T17:12:34Z)
- Information Theoretic Model Predictive Q-Learning [64.74041985237105]
We present a novel theoretical connection between information theoretic MPC and entropy regularized RL.
We develop a Q-learning algorithm that can leverage biased models.
arXiv Detail & Related papers (2019-12-31T00:29:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.