Aiding reinforcement learning for set point control
- URL: http://arxiv.org/abs/2304.10289v1
- Date: Thu, 20 Apr 2023 13:12:00 GMT
- Title: Aiding reinforcement learning for set point control
- Authors: Ruoqi Zhang, Per Mattsson, Torbjörn Wigren
- Abstract summary: The paper augments reinforcement learning with a simple guiding feedback controller.
The proposed method is evaluated with simulation and on a real-world double tank process with promising results.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While reinforcement learning has made great improvements, state-of-the-art
algorithms can still struggle with seemingly simple set-point feedback control
problems. One reason for this is that the learned controller may not be able to
excite the system dynamics well enough initially, and therefore it can take a
long time to obtain data informative enough to learn a good controller. The
paper contributes by augmenting reinforcement learning with a simple guiding
feedback controller, for example a proportional controller. The key
advantage in set point control is a much improved excitation that improves the
convergence properties of the reinforcement learning controller significantly.
This can be very important in real-world control where quick and accurate
convergence is needed. The proposed method is evaluated with simulation and on
a real-world double tank process with promising results.
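The core idea, an additive guiding term from a simple feedback controller, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the gain `kp`, the additive combination, and the toy first-order plant are all assumptions.

```python
def guided_action(policy_action, setpoint, measurement, kp=1.0):
    """Combine an RL policy's action with a proportional guiding term.
    Early in training, when the policy is still poor, the P-term pushes
    the system toward the set point and excites the dynamics, producing
    more informative data for the learner."""
    p_action = kp * (setpoint - measurement)  # proportional feedback
    return policy_action + p_action

# Example: closed-loop step response on a toy first-order system,
# with the RL contribution held at zero to show the guiding term alone.
x, setpoint = 0.0, 1.0
for _ in range(50):
    u = guided_action(policy_action=0.0, setpoint=setpoint, measurement=x, kp=0.5)
    x += 0.1 * (-x + u)  # Euler step of dx/dt = -x + u
# The P-term alone settles with a steady-state offset (here near 1/3);
# in the paper's scheme the learned policy term removes that residual error.
```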
Related papers
- ConcertoRL: An Innovative Time-Interleaved Reinforcement Learning Approach for Enhanced Control in Direct-Drive Tandem-Wing Vehicles [7.121362365269696]
We introduce the ConcertoRL algorithm to enhance control precision and stabilize the online training process.
Trials demonstrate a substantial performance boost of approximately 70% over scenarios without reinforcement learning enhancements.
Results highlight the algorithm's ability to create a synergistic effect that exceeds the sum of its parts.
arXiv Detail & Related papers (2024-05-22T13:53:10Z)
- Growing Q-Networks: Solving Continuous Control Tasks with Adaptive Control Resolution [51.83951489847344]
In robotics applications, smooth control signals are commonly preferred to reduce system wear and energy consumption.
In this work, we aim to bridge this performance gap by growing discrete action spaces from coarse to fine control resolution.
Our work indicates that an adaptive control resolution, combined with value decomposition, leads to simple critic-only algorithms with surprisingly strong performance on continuous control tasks.
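The coarse-to-fine discretization described above can be sketched like this. The doubling schedule and grid construction are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def action_grid(low, high, level):
    """Discrete action set at a given resolution level. Each level
    roughly doubles the number of bins, growing from coarse
    'bang-bang' control (2 actions) toward near-continuous control."""
    n = 2 ** level + 1 if level > 0 else 2  # 2, 3, 5, 9, ... actions
    return np.linspace(low, high, n)

def greedy_action(q_values, grid):
    """Critic-only action selection: pick the grid point with the
    highest estimated Q-value."""
    return grid[int(np.argmax(q_values))]
```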
arXiv Detail & Related papers (2023-04-20T13:00:04Z)
- Robust nonlinear set-point control with reinforcement learning [0.0]
This paper argues that three ideas can improve reinforcement learning methods even for highly nonlinear set-point control problems.
The claim is supported by experiments with a real-world nonlinear cascaded tank process and a simulated strongly nonlinear pH-control system.
arXiv Detail & Related papers (2023-04-20T13:00:04Z)
- Improving the Performance of Robust Control through Event-Triggered Learning [74.57758188038375]
We propose an event-triggered learning algorithm that decides when to learn in the face of uncertainty in the LQR problem.
We demonstrate improved performance over a robust controller baseline in a numerical example.
arXiv Detail & Related papers (2022-07-28T17:36:37Z)
- Regularizing Action Policies for Smooth Control with Reinforcement Learning [47.312768123967025]
Conditioning for Action Policy Smoothness (CAPS) is an effective yet intuitive regularization on action policies.
CAPS offers consistent improvement in the smoothness of the learned state-to-action mappings of neural network controllers.
Tested on a real system, improvements in controller smoothness on a quadrotor drone resulted in an almost 80% reduction in power consumption.
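A CAPS-style regularizer can be sketched as two penalty terms added to the policy loss: a temporal term between actions at consecutive states and a spatial term between actions at nearby (perturbed) states. This NumPy sketch uses assumed weights `lam_t` and `lam_s`; the actual method trains these penalties end-to-end with a neural-network policy:

```python
import numpy as np

def caps_penalty(a_t, a_next, a_nearby, lam_t=1.0, lam_s=1.0):
    """CAPS-style smoothness regularization (sketch).

    temporal: discourage large action changes between consecutive
              states (smooth control trajectories);
    spatial:  discourage different actions for nearby states
              (smooth state-to-action mapping)."""
    temporal = np.sum((a_t - a_next) ** 2)
    spatial = np.sum((a_t - a_nearby) ** 2)
    return lam_t * temporal + lam_s * spatial
```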
arXiv Detail & Related papers (2020-12-11T21:35:24Z)
- Optimizing Mixed Autonomy Traffic Flow With Decentralized Autonomous Vehicles and Multi-Agent RL [63.52264764099532]
We study the ability of autonomous vehicles to improve the throughput of a bottleneck using a fully decentralized control scheme in a mixed autonomy setting.
We apply multi-agent reinforcement learning algorithms to this problem and demonstrate that significant improvements in bottleneck throughput, from 20% at a 5% penetration rate to 33% at a 40% penetration rate, can be achieved.
arXiv Detail & Related papers (2020-10-30T22:06:05Z)
- Learning a Contact-Adaptive Controller for Robust, Efficient Legged Locomotion [95.1825179206694]
We present a framework that synthesizes robust controllers for a quadruped robot.
A high-level controller learns to choose from a set of primitives in response to changes in the environment.
A low-level controller utilizes an established control method to robustly execute the primitives.
arXiv Detail & Related papers (2020-09-21T16:49:26Z)
- Anticipating the Long-Term Effect of Online Learning in Control [75.6527644813815]
AntLer is a design algorithm for learning-based control laws that anticipates learning.
We show that AntLer approximates an optimal solution arbitrarily accurately with probability one.
arXiv Detail & Related papers (2020-07-24T07:00:14Z)
- Optimal PID and Antiwindup Control Design as a Reinforcement Learning Problem [3.131740922192114]
We focus on the interpretability of DRL control methods.
In particular, we view linear fixed-structure controllers as shallow neural networks embedded in the actor-critic framework.
arXiv Detail & Related papers (2020-05-10T01:05:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.