Deep Reinforcement Learning for Wireless Scheduling in Distributed
Networked Control
- URL: http://arxiv.org/abs/2109.12562v1
- Date: Sun, 26 Sep 2021 11:27:12 GMT
- Title: Deep Reinforcement Learning for Wireless Scheduling in Distributed
Networked Control
- Authors: Wanchun Liu, Kang Huang, Daniel E. Quevedo, Branka Vucetic and Yonghui
Li
- Abstract summary: This work considers a fully distributed WNCS with distributed plants, sensors, actuators and a controller, sharing a limited number of frequency channels.
We formulate the optimal transmission scheduling problem as a Markov decision process and develop a deep-reinforcement-learning-based algorithm to solve it.
- Score: 56.77877237894372
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the literature of transmission scheduling in wireless networked control
systems (WNCSs) over shared wireless resources, most research works have
focused on partially distributed settings, i.e., where either the controller
and actuator, or the sensor and controller are co-located. To overcome this
limitation, the present work considers a fully distributed WNCS with
distributed plants, sensors, actuators and a controller, sharing a limited
number of frequency channels. To cope with the communication limitations, the
controller schedules the transmissions and generates sequential predictive
control commands. Using elements of stochastic systems theory, we derive a
sufficient stability condition of the WNCS, which is stated in terms of both
the control and communication system parameters. Once the condition is
satisfied, there exists at least one stationary and deterministic scheduling
policy that can stabilize all plants of the WNCS. By analyzing and representing
the per-step cost function of the WNCS in terms of a finite-length countable
vector state, we formulate the optimal transmission scheduling problem into a
Markov decision process problem and develop a deep-reinforcement-learning-based
algorithm for solving it. Numerical results show that the proposed algorithm
significantly outperforms the benchmark policies.
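The pipeline the abstract describes (a finite-length countable vector state, a per-step cost, a Markov decision process formulation, and a learned scheduling policy) can be illustrated with a toy stand-in. The sketch below replaces the paper's deep-RL agent with tabular Q-learning on a two-plant, one-channel model; the packet-success probability, the age-of-information state, and the cost proxy are illustrative assumptions, not the paper's actual system model.

```python
import random

# Toy stand-in: N_PLANTS plants share one frequency channel, and the
# scheduler picks which plant's sensor transmits each step. All constants
# below are illustrative assumptions, not taken from the paper.
N_PLANTS = 2
P_SUCCESS = 0.8   # assumed packet-success probability of the shared channel
AOI_CAP = 5       # cap age-of-information to keep the state space finite

def step(state, action, rng):
    """Advance the toy WNCS one step: every plant's AoI grows by one,
    except the scheduled plant's, which resets on a successful packet."""
    nxt = [min(a + 1, AOI_CAP) for a in state]
    if rng.random() < P_SUCCESS:
        nxt[action] = 1
    cost = sum(nxt)  # per-step cost proxy: total age-of-information
    return tuple(nxt), cost

def train(episodes=2000, horizon=50, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {}  # tabular Q-table standing in for the paper's deep network
    def q(s, a):
        return Q.get((s, a), 0.0)
    for _ in range(episodes):
        s = (1,) * N_PLANTS
        for _ in range(horizon):
            # epsilon-greedy over actions; cost is minimized, so pick min-Q
            if rng.random() < eps:
                a = rng.randrange(N_PLANTS)
            else:
                a = min(range(N_PLANTS), key=lambda x: q(s, x))
            s2, cost = step(s, a, rng)
            target = cost + gamma * min(q(s2, b) for b in range(N_PLANTS))
            Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
            s = s2
    return Q

Q = train()
# With symmetric plants, the learned policy should schedule the staler plant.
s = (1, 4)
best = min(range(N_PLANTS), key=lambda a: Q.get((s, a), 0.0))
print(best)  # expect 1: plant 1 has the larger age-of-information
```

The same structure (age-like vector state, epsilon-greedy action selection, temporal-difference target) is what a deep Q-network scales up when the state space is too large to tabulate, which is the regime the paper targets.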
Related papers
- Resource Optimization for Tail-Based Control in Wireless Networked Control Systems [31.144888314890597]
Achieving control stability is one of the key design challenges of scalable Wireless Networked Control Systems.
This paper explores the use of an alternative control concept defined as tail-based control, which extends the classical Linear Quadratic Regulator (LQR) cost function for multiple dynamic control systems over a shared wireless network.
arXiv Detail & Related papers (2024-06-20T13:27:44Z) - Learning Robust and Correct Controllers from Signal Temporal Logic
Specifications Using BarrierNet [5.809331819510702]
We exploit STL quantitative semantics to define a notion of robust satisfaction.
We construct a set of trainable High Order Control Barrier Functions (HOCBFs) enforcing the satisfaction of formulas in a fragment of STL.
We train the HOCBFs together with other neural network parameters to further improve the robustness of the controller.
arXiv Detail & Related papers (2023-04-12T21:12:15Z) - A Deep Reinforcement Learning Framework for Optimizing Congestion
Control in Data Centers [2.310582065745938]
Various congestion control protocols have been designed to achieve high performance in different network environments.
Modern online learning solutions that delegate congestion control actions to a machine cannot properly converge within the stringent time scales of data centers.
We leverage multiagent reinforcement learning to design a system for dynamic tuning of congestion control parameters at end-hosts in a data center.
arXiv Detail & Related papers (2023-01-29T22:08:35Z) - Robust Control for Dynamical Systems With Non-Gaussian Noise via Formal
Abstractions [59.605246463200736]
We present a novel controller synthesis method that does not rely on any explicit representation of the noise distributions.
First, we abstract the continuous control system into a finite-state model that captures noise by probabilistic transitions between discrete states.
We use state-of-the-art verification techniques to provide guarantees on the interval Markov decision process and compute a controller for which these guarantees carry over to the original control system.
arXiv Detail & Related papers (2023-01-04T10:40:30Z) - Deep Learning for Wireless Networked Systems: a joint
Estimation-Control-Scheduling Approach [47.29474858956844]
Wireless networked control system (WNCS) connecting sensors, controllers, and actuators via wireless communications is a key enabling technology for highly scalable and low-cost deployment of control systems in the Industry 4.0 era.
Despite the tight interaction of control and communications in WNCSs, most existing works adopt separative design approaches.
We propose a novel deep reinforcement learning (DRL)-based algorithm for joint controller and scheduler optimization, utilizing both model-free and model-based data.
arXiv Detail & Related papers (2022-10-03T01:29:40Z) - State-Augmented Learnable Algorithms for Resource Management in Wireless
Networks [124.89036526192268]
We propose a state-augmented algorithm for solving resource management problems in wireless networks.
We show that the proposed algorithm leads to feasible and near-optimal RRM decisions.
arXiv Detail & Related papers (2022-07-05T18:02:54Z) - Task-Oriented Sensing, Computation, and Communication Integration for
Multi-Device Edge AI [108.08079323459822]
This paper studies a new multi-device edge artificial intelligence (AI) system, which jointly exploits AI model split inference and integrated sensing and communication (ISAC).
We measure the inference accuracy by adopting an approximate but tractable metric, namely discriminant gain.
arXiv Detail & Related papers (2022-07-03T06:57:07Z) - Stable Online Control of Linear Time-Varying Systems [49.41696101740271]
COCO-LQ is an efficient online control algorithm that guarantees input-to-state stability for a large class of LTV systems.
We empirically demonstrate the performance of COCO-LQ in both synthetic experiments and a power system frequency control example.
arXiv Detail & Related papers (2021-04-29T06:18:49Z) - Communication Topology Co-Design in Graph Recurrent Neural Network Based
Distributed Control [4.492630871726495]
We introduce a compact but expressive graph recurrent neural network (GRNN) parameterization of distributed controllers.
Our proposed parameterization enjoys a local and distributed architecture, similar to previous Graph Neural Network (GNN)-based parameterizations.
We show that our method allows for performance/communication density tradeoff curves to be efficiently approximated.
arXiv Detail & Related papers (2021-04-28T16:30:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.