Transfer of Reinforcement Learning-Based Controllers from Model- to
Hardware-in-the-Loop
- URL: http://arxiv.org/abs/2310.17671v1
- Date: Wed, 25 Oct 2023 09:13:12 GMT
- Title: Transfer of Reinforcement Learning-Based Controllers from Model- to
Hardware-in-the-Loop
- Authors: Mario Picerno, Lucas Koch, Kevin Badalian, Marius Wegener, Joschka
Schaub, Charles Robert Koch, and Jakob Andert
- Abstract summary: Reinforcement Learning has great potential for autonomously training agents to perform complex control tasks.
To use RL effectively in embedded system function development, the generated agents must be able to handle real-world applications.
This work focuses on accelerating the training process of RL agents by combining Transfer Learning (TL) and X-in-the-Loop (XiL) simulation.
- Score: 1.8218298349840023
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The process of developing control functions for embedded systems is
resource-, time-, and data-intensive, often resulting in sub-optimal costs and
solution approaches. Reinforcement Learning (RL) has great potential for
autonomously training agents to perform complex control tasks with minimal
human intervention. Due to costly data generation and safety constraints,
however, its application is mostly limited to purely simulated domains. To use
RL effectively in embedded system function development, the generated agents
must be able to handle real-world applications. In this context, this work
focuses on accelerating the training process of RL agents by combining Transfer
Learning (TL) and X-in-the-Loop (XiL) simulation. For the use case of transient
exhaust gas re-circulation control for an internal combustion engine, a
computationally cheap Model-in-the-Loop (MiL) simulation is used to select a
suitable algorithm, fine-tune hyperparameters, and finally train candidate
agents for the transfer. These pre-trained RL agents are then fine-tuned in a
Hardware-in-the-Loop (HiL) system via TL. The transfer revealed the need for
adjusting the reward parameters when advancing to real hardware. Further, the
comparison between a purely HiL-trained and a transferred agent showed a
reduction of training time by a factor of 5.9. The results emphasize the
necessity to train RL agents with real hardware, and demonstrate that the
maturity of the transferred policies affects both training time and
performance, highlighting the strong synergies between TL and XiL simulation.
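The abstract does not name the chosen algorithm, training framework, or environments. Purely as an illustration of the described MiL-pre-training followed by HiL fine-tuning via Transfer Learning, the sketch below uses a Stable-Baselines3-style workflow; the algorithm (SAC), the placeholder environments, file names, and timestep budgets are assumptions, not the paper's actual setup.

```python
import gymnasium as gym
from stable_baselines3 import SAC

# Hypothetical stand-ins for the MiL and HiL environments; the paper's
# transient EGR control environments are not publicly specified.
mil_env = gym.make("Pendulum-v1")  # cheap Model-in-the-Loop surrogate
hil_env = gym.make("Pendulum-v1")  # would wrap the Hardware-in-the-Loop test bench

# Stage 1: pre-train a candidate agent in the inexpensive MiL simulation.
agent = SAC("MlpPolicy", mil_env, verbose=0)
agent.learn(total_timesteps=200_000)
agent.save("mil_pretrained_agent")

# Stage 2: transfer learning -- reload the pre-trained policy, attach the
# HiL environment (with reward parameters re-tuned for real hardware, as the
# paper reports is necessary), and continue training instead of starting
# from scratch.
agent = SAC.load("mil_pretrained_agent", env=hil_env)
agent.learn(total_timesteps=50_000, reset_num_timesteps=False)
agent.save("hil_finetuned_agent")
```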