Semantic-aware Transmission Scheduling: A Monotonicity-driven Deep Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2305.13706v2
- Date: Thu, 21 Sep 2023 10:48:47 GMT
- Authors: Jiazheng Chen, Wanchun Liu, Daniel Quevedo, Yonghui Li and Branka
Vucetic
- Abstract summary: For cyber-physical systems in the 6G era, semantic communications are required to guarantee application-level performance.
In this paper, we first investigate the fundamental properties of the optimal semantic-aware scheduling policy.
We then develop advanced deep reinforcement learning (DRL) algorithms by leveraging the theoretical guidelines.
- Score: 39.681075180578986
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For cyber-physical systems in the 6G era, semantic communications connecting
distributed devices for dynamic control and remote state estimation are
required to guarantee application-level performance, rather than merely
communication-centric performance. Semantics here is a measure of the
usefulness of information transmissions. Semantic-aware transmission scheduling
in a large system often involves a large decision-making space, and existing
algorithms cannot obtain the optimal policy effectively. In this paper, we
first investigate the fundamental properties of the optimal semantic-aware
scheduling policy and then develop advanced deep reinforcement learning (DRL)
algorithms by leveraging the theoretical guidelines. Our numerical results show
that the proposed algorithms can substantially reduce training time and enhance
training performance compared to benchmark algorithms.
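
The abstract does not detail how the theoretical guidelines enter the learning loop, so the following is only a rough, hypothetical illustration of the general idea, not the authors' algorithm: tabular Q-learning on a toy remote-estimation problem, with the Q-table projected after every update so that the advantage of transmitting is non-decreasing in the error state, a monotonicity property that makes the greedy policy a threshold policy. The dynamics, costs, and all names below are assumptions.

```python
import numpy as np

# Hypothetical sketch, not the authors' algorithm: tabular Q-learning where a
# monotone structural property (the advantage of transmitting is non-decreasing
# in the estimation-error state) is enforced by projection after each update.

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 10, 2        # action 1 = transmit, action 0 = stay idle
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
Q = np.zeros((N_STATES, N_ACTIONS))

def step(s, a):
    """Toy dynamics: transmitting usually resets the error state, idling grows it."""
    s_next = 0 if (a == 1 and rng.random() < 0.8) else min(s + 1, N_STATES - 1)
    cost = s + 0.5 * a             # estimation error plus a transmission cost
    return s_next, -cost

s = 0
for _ in range(20000):
    a = int(rng.integers(N_ACTIONS)) if rng.random() < EPS else int(Q[s].argmax())
    s_next, r = step(s, a)
    Q[s, a] += ALPHA * (r + GAMMA * Q[s_next].max() - Q[s, a])
    # Monotonicity projection: force the transmission advantage Q(s,1) - Q(s,0)
    # to be non-decreasing in s, so the greedy policy is a threshold policy.
    adv = np.maximum.accumulate(Q[:, 1] - Q[:, 0])
    Q[:, 1] = Q[:, 0] + adv
    s = s_next

print("greedy policy per state (1 = transmit):", Q.argmax(axis=1))
```

Restricting the search to threshold-style policies is one generic way such structural results shrink the effective decision space and speed up training.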
Related papers
- Towards Secure and Efficient Data Scheduling for Vehicular Social Networks [6.52925077242833]
This paper introduces an innovative learning-based algorithm for scheduling data transmission within vehicular social networks.
The algorithm first uses a specifically constructed neural network to enhance data processing capabilities.
It incorporates a Q-learning paradigm during the data transmission phase to optimize the information exchange.
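As a minimal sketch of the Q-learning scheduling idea just described, the snippet below uses a linear Q-function over hand-crafted per-link features in place of the paper's neural network; the features, reward, and dynamics are made-up assumptions.

```python
import numpy as np

# Hypothetical sketch: semi-gradient Q-learning with a linear Q-function over
# per-link features decides which vehicle's data to transmit next. The feature
# design, reward, and dynamics are assumptions, not taken from the paper.

rng = np.random.default_rng(1)
N_LINKS, N_FEATS = 4, 3
W = np.zeros((N_LINKS, N_FEATS))   # one weight vector per action (link)
ALPHA, GAMMA, EPS = 0.05, 0.9, 0.1
REWARD_W = np.array([1.0, 2.0, 1.5])

def features():
    """Per-link features, e.g. queue length, channel quality, data urgency."""
    return rng.random((N_LINKS, N_FEATS))

x = features()
for _ in range(5000):
    q = (W * x).sum(axis=1)                        # Q(s, a) for every link a
    a = int(rng.integers(N_LINKS)) if rng.random() < EPS else int(q.argmax())
    r = x[a] @ REWARD_W                            # toy reward for serving link a
    x_next = features()
    td = r + GAMMA * (W * x_next).sum(axis=1).max() - q[a]
    W[a] += ALPHA * td * x[a]                      # semi-gradient TD(0) update
    x = x_next

print("learned weights per link:\n", W.round(3))
```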
arXiv Detail & Related papers (2024-06-28T15:20:50Z)
- Unsupervised Deep Unfolded PGD for Transmit Power Allocation in Wireless Systems [0.6091702876917281]
We propose a simple low-complexity TPC algorithm based on the deep unfolding of the iterative projected gradient (PGD) algorithm into layers of a deep neural network and learning the step-size parameters.
Performance evaluation in dense device-to-device (D2D) communication scenarios showed that the proposed method outperforms the iterative algorithm while using less than half as many iterations.
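A minimal sketch of the deep-unfolding idea, under toy assumptions: K gradient/projection iterations on a sum-rate objective become K layers, each with its own step size. The step sizes are fixed placeholders here (learned in the paper), and the channel model is invented.

```python
import numpy as np

# Hypothetical sketch of deep-unfolded projected gradient descent (PGD) for
# transmit power control: K gradient/projection iterations on a sum-rate
# objective are unfolded into K layers, each with its own step size. In the
# paper the step sizes are learned; here they are fixed placeholders, and the
# channel model is a toy assumption.

rng = np.random.default_rng(2)
N, K, P_MAX = 4, 6, 1.0                # links, unfolded layers, power budget
G = rng.random((N, N)) + np.eye(N)     # channel gain matrix, strong diagonal
SIGMA2 = 0.1
steps = np.full(K, 0.05)               # per-layer step sizes (trainable in the paper)

def sum_rate(p):
    sinr = np.diag(G) * p / (G @ p - np.diag(G) * p + SIGMA2)
    return np.log2(1.0 + sinr).sum()

def grad(p, eps=1e-6):
    """Forward-difference gradient of the sum rate w.r.t. the power vector."""
    return np.array([(sum_rate(p + eps * e) - sum_rate(p)) / eps for e in np.eye(N)])

p = np.full(N, P_MAX / 2)
for k in range(K):                     # one loop pass == one unfolded layer
    p = np.clip(p + steps[k] * grad(p), 0.0, P_MAX)   # gradient step + projection

print("unfolded-PGD power allocation:", p.round(3))
```

Training would treat `steps` as parameters and backpropagate an unsupervised sum-rate loss through all K layers, which is what keeps the iteration count low.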
arXiv Detail & Related papers (2023-06-20T19:51:21Z)
- Age of Semantics in Cooperative Communications: To Expedite Simulation Towards Real via Offline Reinforcement Learning [53.18060442931179]
We propose the age of semantics (AoS) for measuring semantics freshness of status updates in a cooperative relay communication system.
We derive an online deep actor-critic (DAC) learning scheme under the on-policy temporal difference learning framework.
We then put forward a novel offline DAC scheme, which estimates the optimal control policy from a previously collected dataset.
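A minimal sketch of a one-step on-policy actor-critic of the kind described, on a toy stand-in for the relay system; the AoS dynamics, costs, and all names are assumptions.

```python
import numpy as np

# Hypothetical sketch of a one-step on-policy actor-critic (the "DAC" family
# named above): softmax policy plus tabular critic updated with TD errors.
# The age-of-semantics (AoS) environment is a toy stand-in for the relay system.

rng = np.random.default_rng(3)
N_S, N_A = 6, 2                    # AoS levels; actions: 0 = wait, 1 = relay
theta = np.zeros((N_S, N_A))       # actor: policy logits per state
V = np.zeros(N_S)                  # critic: state values
A_LR, C_LR, GAMMA = 0.05, 0.1, 0.95

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

s = 0
for _ in range(20000):
    pi = softmax(theta[s])
    a = int(rng.choice(N_A, p=pi))
    # Toy dynamics: relaying usually refreshes the AoS, waiting lets it grow.
    s_next = 0 if (a == 1 and rng.random() < 0.9) else min(s + 1, N_S - 1)
    r = -s - 0.3 * a               # penalize stale semantics and channel use
    delta = r + GAMMA * V[s_next] - V[s]   # on-policy TD error
    V[s] += C_LR * delta                   # critic update
    grad = -pi                             # d log pi(a|s)/d logits = onehot(a) - pi
    grad[a] += 1.0
    theta[s] += A_LR * delta * grad        # actor update
    s = s_next

print("greedy action per AoS level (1 = relay):", theta.argmax(axis=1))
```

The offline variant described above would run the same updates over a fixed, previously collected dataset instead of live interaction.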
arXiv Detail & Related papers (2022-09-19T11:55:28Z)
- A Heuristically Assisted Deep Reinforcement Learning Approach for Network Slice Placement [0.7885276250519428]
We introduce a hybrid placement solution based on Deep Reinforcement Learning (DRL) and a dedicated optimization based on the Power of Two Choices principle.
The proposed Heuristically-Assisted DRL (HA-DRL) accelerates the learning process and improves resource usage compared with other state-of-the-art approaches.
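A minimal sketch of the Power of Two Choices principle behind the dedicated optimization, with made-up servers and demands; the DRL side of HA-DRL, which biases learning toward this heuristic, is omitted.

```python
import random

# Hypothetical sketch of the Power of Two Choices principle behind the
# dedicated optimization: sample two candidate servers and place the slice
# component on the less loaded one. The DRL side of HA-DRL, which biases
# learning toward this heuristic, is omitted; servers and demands are made up.

random.seed(4)
loads = [0] * 8                    # current load per candidate server

def place(demand):
    i, j = random.sample(range(len(loads)), 2)   # draw two random candidates
    best = i if loads[i] <= loads[j] else j      # keep the less loaded one
    loads[best] += demand
    return best

for _ in range(40):
    place(1)
print("server loads after placement:", loads)
```

Two random probes already keep the maximum load close to the average, which is what makes the heuristic a cheap guide for the learning agent.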
arXiv Detail & Related papers (2021-05-14T10:04:17Z)
- Better than the Best: Gradient-based Improper Reinforcement Learning for Network Scheduling [60.48359567964899]
We consider the problem of scheduling in constrained queueing networks with a view to minimizing packet delay.
We use a policy gradient based reinforcement learning algorithm that produces a scheduler that performs better than the available atomic policies.
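A minimal sketch of the improper-learning idea, simplified to a bandit over atomic policies: softmax mixing weights over fixed schedulers are tuned with a REINFORCE-style gradient on observed delay. The paper's constrained queueing-network setting is not reproduced.

```python
import numpy as np

# Hypothetical sketch of the improper-learning idea, reduced to a bandit over
# atomic policies: softmax mixing weights over K fixed schedulers are tuned by
# a REINFORCE-style gradient on observed delay. The paper's constrained
# queueing-network setting is replaced by made-up per-policy delays.

rng = np.random.default_rng(5)
K, LR = 3, 0.05
z = np.zeros(K)                          # logits over atomic policies
MEAN_DELAY = np.array([3.0, 2.0, 4.0])   # unknown average delay of each policy

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

baseline = 0.0
for _ in range(5000):
    p = softmax(z)
    k = int(rng.choice(K, p=p))                   # pick an atomic scheduler
    r = -(MEAN_DELAY[k] + rng.normal(0.0, 0.5))   # reward = negative packet delay
    baseline += 0.05 * (r - baseline)             # running baseline reduces variance
    grad = -p                                     # d log p(k)/d z = onehot(k) - p
    grad[k] += 1.0
    z += LR * (r - baseline) * grad

print("learned mixture over atomic policies:", softmax(z).round(3))
```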
arXiv Detail & Related papers (2021-05-01T10:18:34Z)
- Reinforcement Learning for Datacenter Congestion Control [50.225885814524304]
Successful congestion control algorithms can dramatically improve latency and overall network throughput.
To date, however, no learning-based algorithm has shown practical potential in this domain.
We devise an RL-based algorithm with the aim of generalizing to different configurations of real-world datacenter networks.
We show that this scheme outperforms alternative popular RL approaches, and generalizes to scenarios that were not seen during training.
arXiv Detail & Related papers (2021-02-18T13:49:28Z)
- A Tutorial on Ultra-Reliable and Low-Latency Communications in 6G: Integrating Domain Knowledge into Deep Learning [115.75967665222635]
Ultra-reliable and low-latency communications (URLLC) will be central for the development of various emerging mission-critical applications.
Deep learning algorithms have been considered as promising ways of developing enabling technologies for URLLC in future 6G networks.
This tutorial illustrates how domain knowledge can be integrated into different kinds of deep learning algorithms for URLLC.
arXiv Detail & Related papers (2020-09-13T14:53:01Z)
- Adaptive Serverless Learning [114.36410688552579]
We propose a novel adaptive decentralized training approach, which can compute the learning rate from data dynamically.
Our theoretical results reveal that the proposed algorithm can achieve linear speedup with respect to the number of workers.
To reduce the communication overhead, we further propose a communication-efficient adaptive decentralized training approach.
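A minimal sketch combining the two ingredients described above, under toy assumptions: AdaGrad-style data-dependent step sizes per worker, plus infrequent gossip averaging to limit communication. Losses and topology are invented.

```python
import numpy as np

# Hypothetical sketch of adaptive decentralized training: each worker runs
# AdaGrad-style updates (a step size computed from its own gradient history,
# i.e. from data) on a local quadratic loss, with occasional gossip averaging
# over a ring to limit communication. Losses and topology are assumptions.

rng = np.random.default_rng(6)
W, D, T = 4, 5, 400                    # workers, dimension, iterations
targets = rng.normal(size=(W, D))      # each worker's local optimum
x = np.zeros((W, D))                   # per-worker model copies
g2 = np.zeros((W, D))                  # accumulated squared gradients

for t in range(T):
    for w in range(W):
        g = x[w] - targets[w] + rng.normal(0.0, 0.1, D)  # noisy local gradient
        g2[w] += g ** 2
        x[w] -= 0.5 / np.sqrt(g2[w] + 1e-8) * g          # data-dependent step size
    if t % 10 == 0:                    # infrequent gossip round saves communication
        x = 0.5 * x + 0.25 * np.roll(x, 1, axis=0) + 0.25 * np.roll(x, -1, axis=0)

print("consensus distance:", float(np.linalg.norm(x - x.mean(axis=0))))
```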
arXiv Detail & Related papers (2020-08-24T13:23:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.