Learning Event-triggered Control from Data through Joint Optimization
- URL: http://arxiv.org/abs/2008.04712v4
- Date: Fri, 23 Apr 2021 07:19:10 GMT
- Title: Learning Event-triggered Control from Data through Joint Optimization
- Authors: Niklas Funk, Dominik Baumann, Vincent Berenz, Sebastian Trimpe
- Abstract summary: We present a framework for model-free learning of event-triggered control strategies.
We propose a novel algorithm based on hierarchical reinforcement learning.
The resulting algorithm is shown to accomplish high-performance control in line with resource savings and scales seamlessly to nonlinear and high-dimensional systems.
- Score: 7.391641422048646
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a framework for model-free learning of event-triggered control
strategies. Event-triggered methods aim to achieve high control performance
while only closing the feedback loop when needed. This enables resource
savings, e.g., network bandwidth if control commands are sent via communication
networks, as in networked control systems. Event-triggered controllers consist
of a communication policy, determining when to communicate, and a control
policy, deciding what to communicate. It is essential to jointly optimize the
two policies since individual optimization does not necessarily yield the
overall optimal solution. To address this need for joint optimization, we
propose a novel algorithm based on hierarchical reinforcement learning. The
resulting algorithm is shown to accomplish high-performance control in line
with resource savings and scales seamlessly to nonlinear and high-dimensional
systems. The method's applicability to real-world scenarios is demonstrated
through experiments on a six-degrees-of-freedom, real-time-controlled
manipulator. Further, we propose an approach towards evaluating the stability
of the learned neural network policies.
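To make the two-policy structure concrete, below is a minimal sketch of an event-triggered control loop on a scalar linear system. The threshold trigger and the linear feedback gain are hypothetical stand-ins for the two learned neural network policies; in the paper, both are trained jointly with hierarchical reinforcement learning.

```python
import numpy as np

# Minimal event-triggered control loop on a scalar linear plant
# x[k+1] = a*x[k] + b*u[k] + noise. The threshold trigger (delta)
# and the feedback gain (K) are hand-picked stand-ins for the
# learned communication and control policies.

a, b = 1.1, 1.0          # open-loop unstable plant
K = 0.8                  # control policy: linear feedback gain
delta = 0.2              # communication policy: trigger threshold
rng = np.random.default_rng(0)

x, x_sent = 1.0, 1.0     # true state and last communicated state
cost, comms = 0.0, 0
for k in range(200):
    # Communication policy: close the loop only when the local
    # state has drifted too far from what the controller knows.
    if abs(x - x_sent) > delta:
        x_sent = x
        comms += 1
    # Control policy: act on the most recently communicated state
    # (zero-order hold between events).
    u = -K * x_sent
    cost += x**2 + 0.1 * u**2
    x = a * x + b * u + 0.01 * rng.standard_normal()

print(f"quadratic cost: {cost:.2f}, communications: {comms}/200")
```

Jointly tuning delta and K (fixed by hand here) is exactly the coupling the paper's hierarchical scheme addresses: a looser trigger saves communication but forces the control policy to act on staler information.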
Related papers
- Differentiable Discrete Event Simulation for Queuing Network Control [7.965453961211742]
Queueing network control poses distinct challenges, including high stochasticity, large state and action spaces, and lack of stability.
We propose a scalable framework for policy optimization based on differentiable discrete event simulation.
Our methods can flexibly handle realistic scenarios, including systems operating in non-stationary environments.
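As a rough illustration of the idea (not the paper's framework), the waiting times of a single-server queue can be written as a differentiable function of the service rate via the Lindley recursion and reparameterized exponential samples, so a cost gradient falls out of automatic differentiation:

```python
import torch

# Pathwise gradient through a single-server queue. Waiting times
# follow the Lindley recursion W[n+1] = max(W[n] + S[n] - A[n+1], 0),
# and exponential service times are written as -log(U)/mu so that
# d(cost)/d(mu) flows through the simulation.

torch.manual_seed(0)
lam = 0.8                                   # arrival rate (fixed)
mu = torch.tensor(1.0, requires_grad=True)  # service rate (decision)

n = 2000
service = -torch.log(torch.rand(n)) / mu       # reparameterized Exp(mu)
interarrival = -torch.log(torch.rand(n)) / lam # Exp(lambda), no grad

w = torch.tensor(0.0)
total_wait = torch.tensor(0.0)
for i in range(n - 1):
    w = torch.clamp(w + service[i] - interarrival[i + 1], min=0.0)
    total_wait = total_wait + w

avg_wait = total_wait / (n - 1)
avg_wait.backward()
print(f"avg wait {avg_wait.item():.3f}, "
      f"d(avg wait)/d(mu) {mu.grad.item():.3f}")
```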
arXiv Detail & Related papers (2024-09-05T17:53:54Z)
- Resource Optimization for Tail-Based Control in Wireless Networked Control Systems [31.144888314890597]
Achieving control stability is one of the key design challenges of scalable Wireless Networked Control Systems.
This paper explores the use of an alternative control concept defined as tail-based control, which extends the classical Linear Quadratic Regulator (LQR) cost function for multiple dynamic control systems over a shared wireless network.
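A sketch of the distinction, under the assumption that the tail functional resembles a CVaR (the paper's exact formulation may differ): the classical LQR objective averages the quadratic cost over rollouts, while a tail-based cost averages only the worst outcomes.

```python
import numpy as np

# Compare the mean (LQR-style) cost with a CVaR-style tail cost over
# Monte Carlo rollouts of a noisy scalar plant under fixed feedback.

def rollout_cost(K, rng, T=100):
    x, c = 1.0, 0.0
    for _ in range(T):
        u = -K * x
        c += x**2 + 0.1 * u**2
        x = 0.9 * x + u + 0.3 * rng.standard_normal()
    return c

rng = np.random.default_rng(1)
costs = np.array([rollout_cost(0.5, rng) for _ in range(1000)])

alpha = 0.95
var = np.quantile(costs, alpha)       # value-at-risk at level alpha
cvar = costs[costs >= var].mean()     # mean of the worst 5% of rollouts
print(f"mean (LQR-style) cost: {costs.mean():.2f}, CVaR_{alpha}: {cvar:.2f}")
```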
arXiv Detail & Related papers (2024-06-20T13:27:44Z)
- Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
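For intuition, a threshold sampling policy for remote estimation of a single AR(1) source (a hypothetical stand-in for the learned decentralized policies) can be simulated and scored on both time-average error and age of information:

```python
import numpy as np

# Remote estimation of an AR(1) source x[k+1] = rho*x[k] + sigma*noise.
# The sensor transmits only when the receiver's prediction error would
# exceed a threshold; between transmissions, the receiver predicts
# forward and its age of information (AoI) grows.

rho, sigma = 0.95, 0.5
threshold = 1.0
rng = np.random.default_rng(2)

x, x_hat, age = 0.0, 0.0, 0
sq_err, aoi = [], []
for k in range(10_000):
    x = rho * x + sigma * rng.standard_normal()
    if abs(x - rho * x_hat) > threshold:   # sample-and-send event
        x_hat, age = x, 0
    else:                                  # receiver predicts forward
        x_hat, age = rho * x_hat, age + 1
    sq_err.append((x - x_hat) ** 2)
    aoi.append(age)

print(f"time-avg MSE {np.mean(sq_err):.3f}, time-avg AoI {np.mean(aoi):.2f}")
```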
arXiv Detail & Related papers (2024-04-04T06:24:11Z)
- Offline Supervised Learning V.S. Online Direct Policy Optimization: A Comparative Study and A Unified Training Paradigm for Neural Network-Based Optimal Feedback Control [7.242569453287703]
We first conduct a comparative study of two prevalent approaches: offline supervised learning and online direct policy optimization.
Our results underscore the superiority of offline supervised learning in terms of both optimality and training time.
We propose the Pre-train and Fine-tune strategy as a unified training paradigm for optimal feedback control.
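A minimal sketch of such a pre-train-and-fine-tune paradigm, on a hypothetical scalar problem: first regress a policy network onto a known feedback law offline, then fine-tune it by directly minimizing the rollout cost through differentiable dynamics.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
policy = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

# Stage 1: offline supervised pre-training toward a reference law
# u*(x) = -0.7*x (stands in for precomputed optimal trajectories).
opt = torch.optim.Adam(policy.parameters(), lr=1e-2)
for _ in range(300):
    x = 4 * torch.rand(256, 1) - 2
    loss = ((policy(x) - (-0.7 * x)) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: online direct policy optimization, differentiating the
# rollout cost through known dynamics x[k+1] = 1.05*x[k] + u[k].
def rollout_cost(T=30):
    x = 4 * torch.rand(64, 1) - 2
    cost = torch.tensor(0.0)
    for _ in range(T):
        u = policy(x)
        cost = cost + (x ** 2 + 0.1 * u ** 2).mean()
        x = 1.05 * x + u
    return cost / T

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(100):
    c = rollout_cost()
    opt.zero_grad()
    c.backward()
    opt.step()
print(f"fine-tuned rollout cost: {rollout_cost().item():.3f}")
```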
arXiv Detail & Related papers (2022-11-29T05:07:13Z)
- Deep Learning for Wireless Networked Systems: a joint Estimation-Control-Scheduling Approach [47.29474858956844]
A wireless networked control system (WNCS), which connects sensors, controllers, and actuators via wireless communications, is a key enabling technology for the highly scalable and low-cost deployment of control systems in the Industry 4.0 era.
Despite the tight interaction of control and communications in WNCSs, most existing works adopt separate design approaches.
We propose a novel deep reinforcement learning (DRL)-based algorithm for joint controller and scheduler optimization, utilizing both model-free and model-based data.
arXiv Detail & Related papers (2022-10-03T01:29:40Z)
- Age of Semantics in Cooperative Communications: To Expedite Simulation Towards Real via Offline Reinforcement Learning [53.18060442931179]
We propose the age of semantics (AoS) for measuring semantics freshness of status updates in a cooperative relay communication system.
We derive an online deep actor-critic (DAC) learning scheme under the on-policy temporal difference learning framework.
We then put forward a novel offline DAC scheme, which estimates the optimal control policy from a previously collected dataset.
arXiv Detail & Related papers (2022-09-19T11:55:28Z)
- Learning Optimal Antenna Tilt Control Policies: A Contextual Linear Bandit Approach [65.27783264330711]
Controlling antenna tilts in cellular networks is imperative to reach an efficient trade-off between network coverage and capacity.
We devise algorithms learning optimal tilt control policies from existing data.
We show that they can produce an optimal tilt update policy using far fewer data samples than naive or existing rule-based learning algorithms.
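For reference, the disjoint LinUCB algorithm is the textbook contextual linear bandit learner; the sketch below uses synthetic contexts and a hypothetical set of three discrete tilt actions, not the paper's data or exact algorithm.

```python
import numpy as np

# Disjoint LinUCB: per-action linear reward model plus an
# optimism bonus derived from the Gram matrix of past contexts.

rng = np.random.default_rng(3)
d, n_actions, alpha = 4, 3, 1.0
theta_true = rng.standard_normal((n_actions, d))  # unknown to learner

A = [np.eye(d) for _ in range(n_actions)]   # per-action Gram matrices
b = [np.zeros(d) for _ in range(n_actions)]

for t in range(2000):
    ctx = rng.standard_normal(d)            # observed cell context
    # UCB score per tilt action: estimated reward + exploration bonus.
    scores = []
    for a in range(n_actions):
        A_inv = np.linalg.inv(A[a])
        theta_hat = A_inv @ b[a]
        bonus = alpha * np.sqrt(ctx @ A_inv @ ctx)
        scores.append(ctx @ theta_hat + bonus)
    a = int(np.argmax(scores))
    reward = theta_true[a] @ ctx + 0.1 * rng.standard_normal()
    A[a] += np.outer(ctx, ctx)
    b[a] += reward * ctx

print("learned vs true (action 0):",
      np.round(np.linalg.inv(A[0]) @ b[0], 2), np.round(theta_true[0], 2))
```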
arXiv Detail & Related papers (2022-01-06T18:24:30Z)
- Better than the Best: Gradient-based Improper Reinforcement Learning for Network Scheduling [60.48359567964899]
We consider the problem of scheduling in constrained queueing networks with a view to minimizing packet delay.
We use a policy gradient based reinforcement learning algorithm that produces a scheduler that performs better than the available atomic policies.
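A toy version of the improper-learning idea, with hypothetical arrival rates and only two atomic priority policies: learn softmax mixture weights over the atomic schedulers by REINFORCE on the average queue length.

```python
import numpy as np

# Two queues, one server: atomic policy i always serves queue i.
# We learn softmax mixture weights over the atomic policies so that
# the randomized (improper) scheduler minimizes average backlog.

rng = np.random.default_rng(4)
p_arrive = np.array([0.3, 0.4])   # per-slot arrival probabilities
w = np.zeros(2)                   # mixture logits
lr = 0.05

def episode(w, T=200):
    q = np.zeros(2)
    probs = np.exp(w) / np.exp(w).sum()
    choices, cost = [], 0.0
    for _ in range(T):
        q += rng.random(2) < p_arrive        # Bernoulli arrivals
        i = rng.choice(2, p=probs)           # sample an atomic policy
        q[i] = max(q[i] - 1, 0)              # serve that queue
        choices.append(i)
        cost += q.sum()
    return cost / T, choices, probs

baseline = None
for it in range(500):
    cost, choices, probs = episode(w)
    baseline = cost if baseline is None else 0.95 * baseline + 0.05 * cost
    # REINFORCE: move weight toward choices that beat the baseline.
    grad = np.zeros(2)
    for i in choices:
        grad += (np.eye(2)[i] - probs) * (cost - baseline)
    w -= lr * grad / len(choices)            # descend on expected cost

print("mixture probabilities:", np.round(np.exp(w) / np.exp(w).sum(), 2))
```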
arXiv Detail & Related papers (2021-05-01T10:18:34Z)
- Deep reinforcement learning of event-triggered communication and control for multi-agent cooperative transport [9.891241465396098]
We explore a multi-agent reinforcement learning approach to address the design problem of communication and control strategies for cooperative transport.
Our framework exploits an event-triggered architecture, namely, a feedback controller that computes the communication input and a triggering mechanism that determines when the input has to be updated again.
arXiv Detail & Related papers (2021-03-29T01:16:12Z)
- Guided Constrained Policy Optimization for Dynamic Quadrupedal Robot Locomotion [78.46388769788405]
We introduce guided constrained policy optimization (GCPO), an RL framework based upon our implementation of constrained proximal policy optimization (CPPO).
We show that guided constrained RL offers faster convergence close to the desired optimum, resulting in an optimal, yet physically feasible, robotic control behavior without the need for precise reward function tuning.
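As a generic illustration of constrained policy optimization (not GCPO itself, which builds on constrained proximal policy optimization plus guidance), a Gaussian policy on a hypothetical one-dimensional task can be trained with REINFORCE on a Lagrangian while the multiplier is adapted by dual ascent:

```python
import numpy as np

# Constrained policy search: maximize reward -(a-2)^2 subject to
# E[a^2] <= 1. With sigma = 0.5 the constrained optimum is
# mu = sqrt(1 - sigma^2) ~ 0.87, versus 2.0 unconstrained.

rng = np.random.default_rng(5)
mu, sigma = 0.0, 0.5          # Gaussian policy over a scalar action
lam, lr, lr_lam = 0.0, 0.02, 0.05
limit = 1.0                   # constraint threshold on E[a^2]

for it in range(3000):
    a = mu + sigma * rng.standard_normal(256)
    reward = -(a - 2.0) ** 2
    constraint = a ** 2 - limit            # <= 0 when satisfied
    # REINFORCE ascent on the Lagrangian L = reward - lam * constraint.
    score = (a - mu) / sigma**2            # d log pi / d mu
    adv = reward - lam * constraint
    mu += lr * np.mean(score * (adv - adv.mean()))
    # Dual ascent: raise lam while the constraint is violated.
    lam = max(0.0, lam + lr_lam * np.mean(constraint))

print(f"policy mean {mu:.2f} (constrained optimum ~0.87), lambda {lam:.2f}")
```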
arXiv Detail & Related papers (2020-02-22T10:15:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.