Reduced-Dimensional Reinforcement Learning Control using Singular
Perturbation Approximations
- URL: http://arxiv.org/abs/2004.14501v1
- Date: Wed, 29 Apr 2020 22:15:54 GMT
- Title: Reduced-Dimensional Reinforcement Learning Control using Singular
Perturbation Approximations
- Authors: Sayak Mukherjee, He Bai, Aranya Chakrabortty
- Abstract summary: We present a set of model-free, reduced-dimensional reinforcement learning based optimal control designs for linear time-invariant singularly perturbed (SP) systems.
We first present a state-feedback and output-feedback based RL control design for a generic SP system with unknown state and input matrices.
We extend both designs to clustered multi-agent consensus networks, where the SP property is reflected through clustering.
- Score: 9.136645265350284
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a set of model-free, reduced-dimensional reinforcement learning
(RL) based optimal control designs for linear time-invariant singularly
perturbed (SP) systems. We first present a state-feedback and output-feedback
based RL control design for a generic SP system with unknown state and input
matrices. We take advantage of the underlying time-scale separation property of
the plant to learn a linear quadratic regulator (LQR) for only its slow
dynamics, thereby saving a significant amount of learning time compared to the
conventional full-dimensional RL controller. We analyze the sub-optimality of
the design using SP approximation theorems and provide sufficient conditions
for closed-loop stability. Thereafter, we extend both designs to clustered
multi-agent consensus networks, where the SP property is reflected through
clustering. We develop both centralized and cluster-wise block-decentralized RL
controllers for such networks in reduced dimensions. We demonstrate the
implementation of these controllers through simulations of relevant numerical
examples and compare them with conventional RL designs to show the
computational benefits of our approach.
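To make the time-scale separation concrete, below is a minimal, hedged sketch of the classical singular perturbation reduction that underlies the design. It is not the authors' model-free algorithm (which never assumes known matrices); the dimensions, random matrices, and scipy Riccati solve are illustrative assumptions, used only to show how the LQR problem shrinks from (n_s + n_f) states to n_s states.

```python
# Minimal sketch (not the authors' algorithm): the classical singular
# perturbation (SP) reduction the paper exploits. Matrices are assumed
# known here purely for illustration; the paper learns the slow-subsystem
# LQR model-free from data.
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)
n_s, n_f, m = 4, 6, 2  # slow states, fast states, inputs (illustrative sizes)

# SP system:      dx/dt = A11 x + A12 z + B1 u
#            eps * dz/dt = A21 x + A22 z + B2 u
A11 = rng.normal(size=(n_s, n_s)) - 2 * np.eye(n_s)
A12 = rng.normal(size=(n_s, n_f))
A21 = rng.normal(size=(n_f, n_s))
A22 = rng.normal(size=(n_f, n_f)) - 4 * np.eye(n_f)  # Hurwitz fast dynamics
B1 = rng.normal(size=(n_s, m))
B2 = rng.normal(size=(n_f, m))

# Slow (reduced) subsystem: set eps -> 0 and eliminate
# z = -A22^{-1} (A21 x + B2 u), giving dx/dt = A0 x + B0 u with
A22_inv = np.linalg.inv(A22)
A0 = A11 - A12 @ A22_inv @ A21
B0 = B1 - A12 @ A22_inv @ B2

# LQR on the reduced model only: an n_s x n_s Riccati equation instead of an
# (n_s + n_f) x (n_s + n_f) one -- this is the dimensionality saving.
Q0, R = np.eye(n_s), np.eye(m)
P0 = solve_continuous_are(A0, B0, Q0, R)
K_slow = np.linalg.solve(R, B0.T @ P0)  # u = -K_slow @ x
print("reduced-order gain K_slow:\n", K_slow)
```

In the paper's setting, the slow-subsystem gain is learned from measured data rather than computed from known matrices, and classical SP results suggest such a slow design remains near-optimal for the full plant when eps is small.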
Related papers
- Communication-Control Codesign for Large-Scale Wireless Networked Control Systems [80.30532872347668]
Wireless Networked Control Systems (WNCSs) are essential to Industry 4.0, enabling flexible control in applications, such as drone swarms and autonomous robots.
We propose a practical WNCS model that captures correlated dynamics among multiple control loops with spatially distributed sensors and actuators sharing limited wireless resources over multi-state Markov block-fading channels.
We develop a Deep Reinforcement Learning (DRL) algorithm that efficiently handles the hybrid action space, captures communication-control correlations, and ensures robust training despite sparse cross-domain variables and floating control inputs.
arXiv Detail & Related papers (2024-10-15T06:28:21Z)
- SimBa: Simplicity Bias for Scaling Up Parameters in Deep Reinforcement Learning [49.83621156017321]
SimBa is an architecture designed to scale up parameters in deep RL by injecting a simplicity bias.
By scaling up parameters with SimBa, the sample efficiency of various deep RL algorithms, including off-policy, on-policy, and unsupervised methods, is consistently improved.
arXiv Detail & Related papers (2024-10-13T07:20:53Z)
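For a concrete sense of the simplicity-bias idea, here is a hedged PyTorch sketch of a SimBa-style residual block, based on one reading of the summary (a pre-LayerNorm residual MLP whose skip path preserves the identity map); the paper's exact block layout may differ.

```python
# Hedged sketch of a SimBa-style residual block; the layout is an
# assumption, not the paper's exact specification.
import torch
import torch.nn as nn

class SimbaBlock(nn.Module):
    def __init__(self, dim: int, expansion: int = 4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)  # normalize before the MLP
        self.mlp = nn.Sequential(
            nn.Linear(dim, expansion * dim),
            nn.ReLU(),
            nn.Linear(expansion * dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual skip keeps the identity mapping easy to represent,
        # which is the "simplicity bias" the summary refers to.
        return x + self.mlp(self.norm(x))

h = SimbaBlock(dim=256)(torch.randn(32, 256))  # e.g. a batch of features
```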
- ReACT: Reinforcement Learning for Controller Parametrization using B-Spline Geometries [0.0]
This work presents a novel approach using deep reinforcement learning (DRL) with N-dimensional B-spline geometries (BSGs).
We focus on the control of parameter-variant systems, a class of systems with complex behavior which depends on the operating conditions.
We make the adaptation process more efficient by introducing BSGs to map the controller parameters which may depend on numerous operating conditions.
arXiv Detail & Related papers (2024-01-10T16:27:30Z)
- Sample-efficient Model-based Reinforcement Learning for Quantum Control [0.2999888908665658]
We propose a model-based reinforcement learning (RL) approach for noisy time-dependent gate optimization.
We show an order of magnitude advantage in the sample complexity of our method over standard model-free RL.
Our algorithm is well suited for controlling partially characterised one- and two-qubit systems.
arXiv Detail & Related papers (2023-04-19T15:05:19Z)
- Structured Sparsity Learning for Efficient Video Super-Resolution [99.1632164448236]
We develop a structured pruning scheme called Structured Sparsity Learning (SSL) according to the properties of video super-resolution (VSR) models.
In SSL, we design pruning schemes for several key components in VSR models, including residual blocks, recurrent networks, and upsampling networks.
arXiv Detail & Related papers (2022-06-15T17:36:04Z)
- Towards Standardizing Reinforcement Learning Approaches for Stochastic Production Scheduling [77.34726150561087]
Reinforcement learning can be used to solve scheduling problems.
Existing studies rely on (sometimes) complex simulations for which the code is unavailable.
There is a vast array of RL designs to choose from.
Standardization of model descriptions (both production setup and RL design) and of validation schemes is a prerequisite.
arXiv Detail & Related papers (2021-04-16T16:07:10Z)
- Two-step reinforcement learning for model-free redesign of nonlinear optimal regulator [1.5624421399300306]
Reinforcement learning (RL) is one of the promising approaches that enable model-free redesign of optimal controllers for nonlinear dynamical systems.
We propose a model-free two-step design approach that improves the transient learning performance of RL in an optimal regulator redesign problem for unknown nonlinear systems.
arXiv Detail & Related papers (2021-03-05T17:12:33Z)
- Reinforcement Learning for Datacenter Congestion Control [50.225885814524304]
Successful congestion control algorithms can dramatically improve latency and overall network throughput.
To date, no such learning-based algorithms have shown practical potential in this domain.
We devise an RL-based algorithm with the aim of generalizing to different configurations of real-world datacenter networks.
We show that this scheme outperforms alternative popular RL approaches, and generalizes to scenarios that were not seen during training.
arXiv Detail & Related papers (2021-02-18T13:49:28Z)
- Reinforcement Learning of Structured Control for Linear Systems with Unknown State Matrix [0.0]
We combine ideas from reinforcement learning (RL) with sufficient stability and performance guarantees.
A special control structure enabled by this RL framework is distributed learning control, which is necessary for many large-scale cyber-physical systems.
arXiv Detail & Related papers (2020-11-02T17:04:34Z)
- Decomposability and Parallel Computation of Multi-Agent LQR [19.710361049812608]
We propose a parallel RL scheme for linear quadratic regulator (LQR) design in a continuous-time linear MAS.
We show that if the MAS is homogeneous then this decomposition retains closed-loop optimality.
The proposed approach can guarantee significant speed-up in learning without any loss in the cumulative value of the LQR cost.
arXiv Detail & Related papers (2020-10-16T20:15:39Z)
- Adaptive Subcarrier, Parameter, and Power Allocation for Partitioned Edge Learning Over Broadband Channels [69.18343801164741]
Partitioned edge learning (PARTEL) implements parameter-server training, a well-known distributed learning method, in wireless networks.
We consider the case of deep neural network (DNN) models which can be trained using PARTEL by introducing some auxiliary variables.
arXiv Detail & Related papers (2020-10-08T15:27:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.