Deep Learning for Wireless Networked Systems: a joint
Estimation-Control-Scheduling Approach
- URL: http://arxiv.org/abs/2210.00673v1
- Date: Mon, 3 Oct 2022 01:29:40 GMT
- Title: Deep Learning for Wireless Networked Systems: a joint
Estimation-Control-Scheduling Approach
- Authors: Zihuai Zhao, Wanchun Liu, Daniel E. Quevedo, Yonghui Li and Branka
Vucetic
- Abstract summary: Wireless networked control system (WNCS) connecting sensors, controllers, and actuators via wireless communications is a key enabling technology for highly scalable and low-cost deployment of control systems in the Industry 4.0 era.
Despite the tight interaction of control and communications in WNCSs, most existing works adopt separative design approaches.
We propose a novel deep reinforcement learning (DRL)-based algorithm for controller and scheduler optimization utilizing both model-free and model-based data.
- Score: 47.29474858956844
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Wireless networked control system (WNCS) connecting sensors, controllers, and
actuators via wireless communications is a key enabling technology for highly
scalable and low-cost deployment of control systems in the Industry 4.0 era.
Despite the tight interaction of control and communications in WNCSs, most
existing works adopt separative design approaches. This is mainly because the
co-design of control-communication policies requires large and hybrid state and
action spaces, making the optimization problem mathematically intractable and
difficult to solve effectively with classic algorithms. In this paper, we
systematically investigate deep learning (DL)-based estimator-control-scheduler
co-design for a model-unknown nonlinear WNCS over wireless fading channels. In
particular, we propose a co-design framework with the awareness of the sensor's
age-of-information (AoI) states and dynamic channel states. We propose a novel
deep reinforcement learning (DRL)-based algorithm for controller and scheduler
optimization utilizing both model-free and model-based data. An AoI-based
importance sampling algorithm that takes into account the data accuracy is
proposed for enhancing learning efficiency. We also develop novel schemes for
enhancing the stability of joint training. Extensive experiments demonstrate
that the proposed joint training algorithm can effectively solve the
estimation-control-scheduling co-design problem in various scenarios and
provide significant performance gains compared to separative design and some
benchmark policies.
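As a rough illustration of the AoI-based importance sampling idea described in the abstract, the sketch below weights replay-buffer transitions by the age of the state information they were collected under, so fresher (and presumably more accurate) data is sampled more often. It is a minimal sketch under assumed interfaces and an assumed exponential decay rate, not the algorithm proposed in the paper.

```python
import numpy as np

class AoIWeightedReplayBuffer:
    """Minimal sketch: transitions gathered under stale (high-AoI) state
    estimates are treated as less accurate and sampled less often."""

    def __init__(self, capacity=10_000, aoi_decay=0.5):
        self.capacity = capacity
        self.aoi_decay = aoi_decay   # assumed hyperparameter, not from the paper
        self.storage, self.aoi = [], []

    def add(self, transition, aoi):
        if len(self.storage) >= self.capacity:   # drop the oldest transition
            self.storage.pop(0)
            self.aoi.pop(0)
        self.storage.append(transition)
        self.aoi.append(aoi)

    def sample(self, batch_size):
        # Importance weight decays with AoI: fresher data -> higher probability.
        weights = np.exp(-self.aoi_decay * np.asarray(self.aoi, dtype=float))
        probs = weights / weights.sum()
        idx = np.random.choice(len(self.storage), size=batch_size, p=probs)
        return [self.storage[i] for i in idx], probs[idx]
```

The paper's algorithm additionally accounts for data accuracy across model-free and model-based samples; the single exponential decay here only shows where such a weight would enter the sampling step.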
Related papers
- Communication-Control Codesign for Large-Scale Wireless Networked Control Systems [80.30532872347668]
Wireless Networked Control Systems (WNCSs) are essential to Industry 4.0, enabling flexible control in applications such as drone swarms and autonomous robots.
We propose a practical WNCS model that captures correlated dynamics among multiple control loops with spatially distributed sensors and actuators sharing limited wireless resources over multi-state Markov block-fading channels.
We develop a Deep Reinforcement Learning (DRL) algorithm that efficiently handles the hybrid action space, captures communication-control correlations, and ensures robust training despite sparse cross-domain variables and floating control inputs.
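The hybrid action space mentioned in this entry, discrete scheduling decisions over shared wireless resources combined with continuous control inputs, is often handled with a two-headed policy network. The PyTorch sketch below shows that generic layout; the dimensions and the one-link-per-step scheduling head are assumptions, not the cited paper's architecture.

```python
import torch
import torch.nn as nn

class HybridPolicy(nn.Module):
    """Sketch of a two-headed policy for a hybrid action space:
    a categorical head picks which control loop to schedule and a
    Gaussian head outputs continuous control inputs."""

    def __init__(self, state_dim=32, n_links=8, control_dim=4):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.sched_logits = nn.Linear(128, n_links)   # discrete scheduling action
        self.ctrl_mean = nn.Linear(128, control_dim)  # continuous control action
        self.ctrl_log_std = nn.Parameter(torch.zeros(control_dim))

    def forward(self, state):
        h = self.trunk(state)
        sched = torch.distributions.Categorical(logits=self.sched_logits(h))
        ctrl = torch.distributions.Normal(self.ctrl_mean(h), self.ctrl_log_std.exp())
        return sched, ctrl
```

Sampling one action from each head per step then gives a joint (scheduling, control) decision that a DRL algorithm can train with a shared critic.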
arXiv Detail & Related papers (2024-10-15T06:28:21Z)
- Effective Communication with Dynamic Feature Compression [25.150266946722]
We study a prototypal system in which an observer must communicate its sensory data to a robot controlling a task.
We consider an ensemble Vector Quantized Variational Autoencoder (VQ-VAE) encoding, and train a Deep Reinforcement Learning (DRL) agent to dynamically adapt the quantization level.
We tested the proposed approach on the well-known CartPole reference control problem, obtaining a significant performance increase.
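Dynamically adapting the quantization level can be cast as a small discrete-action RL problem: at each step the agent picks one of the ensemble's codebook resolutions, trading bits sent against task performance. The fragment below is a hedged sketch of that selection and reward shaping only; the level set, epsilon-greedy rule, and bit-cost weight are assumptions, and the VQ-VAE itself is omitted.

```python
import numpy as np

# Illustrative assumption: the ensemble offers these quantization levels (bits per feature).
QUANT_LEVELS = [2, 4, 6, 8]

def select_quant_level(q_values, epsilon=0.1):
    """Epsilon-greedy choice over quantization levels given estimated Q-values."""
    if np.random.rand() < epsilon:
        return np.random.randint(len(QUANT_LEVELS))
    return int(np.argmax(q_values))

def shaped_reward(task_reward, level_idx, bit_cost=0.05):
    """Assumed reward: task performance minus a cost proportional to bits sent."""
    return task_reward - bit_cost * QUANT_LEVELS[level_idx]
```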
arXiv Detail & Related papers (2024-01-29T15:35:05Z)
- Optimization Theory Based Deep Reinforcement Learning for Resource Allocation in Ultra-Reliable Wireless Networked Control Systems [10.177917426690701]
This paper introduces a novel optimization theory based deep reinforcement learning (DRL) framework for the joint design of controller and communication systems.
The objective of minimum power consumption is targeted while satisfying the schedulability and rate constraints of the communication system.
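Targeting minimum power consumption subject to rate and schedulability constraints is commonly handled in DRL by folding the constraints into the reward as Lagrangian-style penalties. The snippet below shows that generic construction with assumed multipliers and thresholds; it is not the specific optimization-theory-based formulation of the cited framework.

```python
def constrained_reward(tx_power, achieved_rate, min_rate, schedulable,
                       lambda_rate=10.0, lambda_sched=10.0):
    """Reward = -power, penalized when the rate or schedulability constraint is
    violated. The multipliers are illustrative assumptions."""
    rate_violation = max(0.0, min_rate - achieved_rate)
    sched_violation = 0.0 if schedulable else 1.0
    return -tx_power - lambda_rate * rate_violation - lambda_sched * sched_violation
```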
arXiv Detail & Related papers (2023-11-28T15:49:29Z)
- Causal Semantic Communication for Digital Twins: A Generalizable Imitation Learning Approach [74.25870052841226]
A digital twin (DT) leverages a virtual representation of the physical world, along with communication (e.g., 6G), computing, and artificial intelligence (AI) technologies to enable many connected intelligence services.
Wireless systems can exploit the paradigm of semantic communication (SC) for facilitating informed decision-making under strict communication constraints.
A novel framework called causal semantic communication (CSC) is proposed for DT-based wireless systems.
arXiv Detail & Related papers (2023-04-25T00:15:00Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
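Soft Actor-Critic maximizes expected return plus an entropy bonus, roughly J(pi) = sum_t E[r_t + alpha * H(pi(.|s_t))]. The fragment below is a generic SAC actor-loss sketch under an assumed critic interface, included only to make the entropy-plus-return objective concrete; it is not taken from the MARLIN codebase.

```python
import torch

def sac_actor_loss(policy_dist, q1_fn, q2_fn, state, alpha=0.2):
    """Generic SAC actor loss: maximize the minimum critic value minus the
    alpha-weighted log-probability of the sampled action (entropy bonus).
    q1_fn/q2_fn are assumed critic networks taking (state, action)."""
    action = policy_dist.rsample()                      # reparameterized sample
    log_prob = policy_dist.log_prob(action).sum(-1)
    q_min = torch.min(q1_fn(state, action), q2_fn(state, action))
    return (alpha * log_prob - q_min).mean()            # minimized by the optimizer
```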
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Semantic and Effective Communication for Remote Control Tasks with Dynamic Feature Compression [23.36744348465991]
Coordination of robotic swarms and the remote wireless control of industrial systems are among the major use cases for 5G and beyond systems.
In this work, we consider a prototypal system in which an observer must communicate its sensory data to an actor controlling a task.
We propose an ensemble Vector Quantized Variational Autoencoder (VQ-VAE) encoding, and train a Deep Reinforcement Learning (DRL) agent to dynamically adapt the quantization level.
arXiv Detail & Related papers (2023-01-14T11:43:56Z)
- Deep Multi-Task Learning for Cooperative NOMA: System Design and Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon the recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
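A hybrid-cascaded architecture in this sense chains transmit-side and receive-side networks through a channel model so that one loss can be backpropagated through the whole link. The toy PyTorch sketch below conveys only that end-to-end structure; the layer sizes, AWGN channel, and single-user setting are assumptions rather than the cited cooperative NOMA scheme.

```python
import torch
import torch.nn as nn

class CascadedLink(nn.Module):
    """Toy end-to-end link: encoder -> noisy channel -> decoder, trained jointly
    so transmitter and receiver are optimized holistically."""

    def __init__(self, msg_dim=16, latent_dim=8, snr_db=10.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(msg_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, msg_dim))
        self.noise_std = 10 ** (-snr_db / 20)   # assumed AWGN channel noise level

    def forward(self, msg):
        x = self.encoder(msg)
        y = x + self.noise_std * torch.randn_like(x)   # channel sits mid-cascade
        return self.decoder(y)
```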
arXiv Detail & Related papers (2020-07-27T12:38:37Z)
- Reinforcement Learning Control of Robotic Knee with Human in the Loop by Flexible Policy Iteration [17.365135977882215]
This study fills important voids by introducing innovative features to the policy iteration algorithm.
We show system level performances including convergence of the approximate value function, (sub)optimality of the solution, and stability of the system.
arXiv Detail & Related papers (2020-06-16T09:09:48Z)
- Optimization-driven Deep Reinforcement Learning for Robust Beamforming in IRS-assisted Wireless Communications [54.610318402371185]
Intelligent reflecting surface (IRS) is a promising technology to assist downlink information transmissions from a multi-antenna access point (AP) to a receiver.
We minimize the AP's transmit power by a joint optimization of the AP's active beamforming and the IRS's passive beamforming.
We propose a deep reinforcement learning (DRL) approach that can adapt the beamforming strategies from past experiences.
arXiv Detail & Related papers (2020-05-25T01:42:55Z)
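In optimization-driven DRL beamforming of this kind, the agent typically proposes the IRS phase shifts while the matching active beamformer follows from a standard closed form, and the reward tracks the transmit power needed to reach the receiver's SNR target. The snippet below sketches only that reward shaping under matched-filter transmit beamforming; the quantities involved are assumptions, not the cited paper's exact design.

```python
import numpy as np

def beamforming_reward(effective_channel, snr_target, noise_power=1e-9):
    """Reward sketch for a DRL agent that outputs IRS phase shifts: with the
    effective (direct + reflected) channel fixed, matched-filter beamforming
    needs transmit power p = snr_target * noise / ||h||^2 to hit the SNR
    target, and the reward is its negative (less power -> higher reward)."""
    gain = np.linalg.norm(effective_channel) ** 2
    min_power = snr_target * noise_power / max(gain, 1e-12)
    return -min_power
```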