A Learning Approach for Joint Design of Event-triggered Control and
Power-Efficient Resource Allocation
- URL: http://arxiv.org/abs/2205.07070v1
- Date: Sat, 14 May 2022 14:16:11 GMT
- Authors: Atefeh Termehchi, Mehdi Rasti
- Abstract summary: We study the joint design problem of an event-triggered control and an energy-efficient resource allocation in a fifth generation (5G) wireless network.
We propose a model-free hierarchical reinforcement learning approach that learns four policies simultaneously.
Our simulation results show that the proposed approach can properly control a simulated ICPS and significantly decrease the number of updates on the actuators' input as well as the downlink power consumption.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In emerging Industrial Cyber-Physical Systems (ICPSs), the joint design of
communication and control sub-systems is essential, as these sub-systems are
interconnected. In this paper, we study the joint design problem of an
event-triggered control and an energy-efficient resource allocation in a fifth
generation (5G) wireless network. We formally state the problem as a
multi-objective optimization one, aiming to minimize the number of updates on
the actuators' input and the power consumption in the downlink transmission. To
address the problem, we propose a model-free hierarchical reinforcement
learning approach with a uniformly ultimate boundedness stability guarantee
that learns four policies simultaneously. These policies comprise an
update-time policy on the actuators' input, a control policy, and
energy-efficient sub-carrier and power-allocation policies. Our simulation
results show that the proposed approach can properly control a simulated ICPS
and significantly decrease the number of updates on the actuators' input as
well as the downlink power consumption.
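As a hedged illustration only, the four-policy hierarchy described in the abstract can be sketched on a scalar toy plant: a high-level policy decides when to transmit a new actuator input, and lower-level policies pick the control input, a sub-carrier, and a power level. The trigger threshold, proportional control law, and random allocation picks below are illustrative stand-ins for the learned policies, not the paper's actual method; the bounded (rather than asymptotically vanishing) trajectory mirrors the uniformly-ultimate-boundedness flavor of the guarantee.

```python
import random

def update_time_policy(state, threshold=0.5):
    """High-level policy: trigger an actuator update only when the plant
    state is far from the origin (event-triggered)."""
    return abs(state) > threshold

def control_policy(state, gain=0.8):
    """Control policy: a simple proportional law as a stand-in."""
    return -gain * state

def allocation_policies(n_subcarriers=4, p_max=1.0):
    """Sub-carrier and power-allocation policies: random feasible picks."""
    return random.randrange(n_subcarriers), random.uniform(0.1, p_max)

def step(state, held_input):
    """One epoch: transmit a new input only on a trigger; otherwise the
    actuator holds its last input and no downlink power is spent."""
    if update_time_policy(state):
        u = control_policy(state)
        _subcarrier, _power = allocation_policies()  # downlink transmission
    else:
        u = held_input                               # no transmission
    return 0.9 * state + u, u                        # toy plant dynamics

state, u = 2.0, 0.0
trajectory = []
for _ in range(20):
    state, u = step(state, u)
    trajectory.append(state)
max_abs = max(abs(x) for x in trajectory)  # trajectory stays bounded
```

Between triggers the actuator holds its last input, so updates (and downlink transmissions) occur only when the state error is significant.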
Related papers
- AI-in-the-Loop Sensing and Communication Joint Design for Edge Intelligence [65.29835430845893]
We propose a framework that enhances edge intelligence through AI-in-the-loop joint sensing and communication.
A key contribution of our work is establishing an explicit relationship between validation loss and the system's tunable parameters.
We show that our framework reduces communication energy consumption by up to 77 percent and sensing costs measured by the number of samples by up to 52 percent.
arXiv Detail & Related papers (2025-02-14T14:56:58Z)
- Communication-Control Codesign for Large-Scale Wireless Networked Control Systems [80.30532872347668]
Wireless Networked Control Systems (WNCSs) are essential to Industry 4.0, enabling flexible control in applications such as drone swarms and autonomous robots.
We propose a practical WNCS model that captures correlated dynamics among multiple control loops with spatially distributed sensors and actuators sharing limited wireless resources over multi-state Markov block-fading channels.
We develop a Deep Reinforcement Learning (DRL) algorithm that efficiently handles the hybrid action space, captures communication-control correlations, and ensures robust training despite sparse cross-domain variables and floating control inputs.
arXiv Detail & Related papers (2024-10-15T06:28:21Z)
- Integrating DeepRL with Robust Low-Level Control in Robotic Manipulators for Non-Repetitive Reaching Tasks [0.24578723416255746]
In robotics, contemporary strategies are learning-based, characterized by a complex black-box nature and a lack of interpretability.
We propose integrating a collision-free trajectory planner based on deep reinforcement learning (DRL) with a novel auto-tuning low-level control strategy.
arXiv Detail & Related papers (2024-02-04T15:54:03Z)
- Distributed-Training-and-Execution Multi-Agent Reinforcement Learning for Power Control in HetNet [48.96004919910818]
We propose a multi-agent deep reinforcement learning (MADRL) based power control scheme for the HetNet.
To promote cooperation among agents, we develop a penalty-based Q learning (PQL) algorithm for MADRL systems.
In this way, an agent's policy can be learned by other agents more easily, resulting in a more efficient collaboration process.
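As a stateless, single-agent sketch of the penalty idea (an assumption for illustration, not the paper's exact PQL algorithm): the reward for a transmit-power action is the agent's own utility minus a penalty proportional to the interference it causes, which steers the learned policy away from always using maximum power. The utility model, penalty weight, and bandit-style update below are all illustrative.

```python
import random

ACTIONS = [0.2, 0.5, 1.0]       # candidate transmit power levels (W)
ALPHA, EPS = 0.1, 0.1           # learning rate, exploration rate
PENALTY_WEIGHT = 0.5            # weight on the interference penalty

def reward(power):
    """Own rate benefit (diminishing returns) minus interference penalty."""
    return power / (power + 0.1) - PENALTY_WEIGHT * power

q = {a: 0.0 for a in ACTIONS}   # single-state Q-table for one agent
random.seed(0)
for _ in range(2000):
    # epsilon-greedy action selection
    a = random.choice(ACTIONS) if random.random() < EPS else max(q, key=q.get)
    # stateless (bandit-style) Q-update toward the penalized reward
    q[a] += ALPHA * (reward(a) - q[a])

best = max(q, key=q.get)        # penalty discourages the max-power action
```

Without the penalty term, the greedy choice would be the highest power; with it, a lower power level scores best, which is the cooperative behavior the penalty is meant to induce.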
arXiv Detail & Related papers (2022-12-15T17:01:56Z)
- Deep Learning for Wireless Networked Systems: a joint Estimation-Control-Scheduling Approach [47.29474858956844]
Wireless networked control system (WNCS) connecting sensors, controllers, and actuators via wireless communications is a key enabling technology for highly scalable and low-cost deployment of control systems in the Industry 4.0 era.
Despite the tight interaction of control and communications in WNCSs, most existing works adopt separate design approaches.
We propose a novel deep reinforcement learning (DRL)-based algorithm for joint estimation-control-scheduling optimization, utilizing both model-free and model-based data.
arXiv Detail & Related papers (2022-10-03T01:29:40Z)
- Active Distribution System Coordinated Control Method via Artificial Intelligence [0.0]
The system must be controlled to deliver power reliably and securely at nominal voltage and frequency.
We suggest that neural networks with self-attention mechanisms have the potential to aid in the optimization of the system.
arXiv Detail & Related papers (2022-07-12T13:46:38Z)
- Stabilizing Voltage in Power Distribution Networks via Multi-Agent Reinforcement Learning with Transformer [128.19212716007794]
We propose a Transformer-based Multi-Agent Actor-Critic framework (T-MAAC) to stabilize voltage in power distribution networks.
In addition, we adopt a novel auxiliary-task training process tailored to the voltage control task, which improves the sample efficiency.
arXiv Detail & Related papers (2022-06-08T07:48:42Z)
- Deep Reinforcement Learning Based Multidimensional Resource Management for Energy Harvesting Cognitive NOMA Communications [64.1076645382049]
Combination of energy harvesting (EH), cognitive radio (CR), and non-orthogonal multiple access (NOMA) is a promising solution to improve energy efficiency.
In this paper, we study the spectrum, energy, and time resource management for deterministic-CR-NOMA IoT systems.
arXiv Detail & Related papers (2021-09-17T08:55:48Z)
- Scheduling and Power Control for Wireless Multicast Systems via Deep Reinforcement Learning [33.737301955006345]
Multicasting in wireless systems is a way to exploit the redundancy in user requests in a Content Centric Network.
Power control and optimal scheduling can significantly improve the wireless multicast network's performance under fading.
We show that the power control policy can be learned for reasonably large systems via this approach.
arXiv Detail & Related papers (2020-09-27T15:59:44Z)
- Learning Event-triggered Control from Data through Joint Optimization [7.391641422048646]
We present a framework for model-free learning of event-triggered control strategies.
We propose a novel algorithm based on hierarchical reinforcement learning.
The resulting algorithm is shown to achieve high-performance control alongside resource savings, and scales seamlessly to nonlinear and high-dimensional systems.
arXiv Detail & Related papers (2020-06-16T05:51:25Z)
- Online Reinforcement Learning Control by Direct Heuristic Dynamic Programming: from Time-Driven to Event-Driven [80.94390916562179]
Time-driven learning refers to the machine learning method that updates parameters in a prediction model continuously as new data arrives.
It is desirable to prevent the time-driven dHDP from updating due to insignificant system events such as noise.
We show how the event-driven dHDP algorithm works in comparison to the original time-driven dHDP.
arXiv Detail & Related papers (2020-06-16T05:51:25Z)
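The event-driven idea above can be sketched with a simple dead-zone trigger: skip an update when the change in the observed state is too small to distinguish from noise. The dead-zone width and the observation sequence below are illustrative assumptions, not taken from the dHDP paper.

```python
def should_update(prev_obs, obs, noise_band=0.05):
    """Event trigger for learning: update only on a significant change."""
    return abs(obs - prev_obs) > noise_band

# Toy observation sequence: small fluctuations (noise) punctuated by two
# genuine jumps in the state.
observations = [0.00, 0.01, 0.30, 0.31, 0.90, 0.905, 0.91]
updates = sum(
    should_update(a, b) for a, b in zip(observations, observations[1:])
)
# Only 2 of the 6 transitions trigger an update; the rest are treated as noise.
```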
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.