Multi-Power Level $Q$-Learning Algorithm for Random Access in NOMA mMTC
Systems
- URL: http://arxiv.org/abs/2301.05196v1
- Date: Thu, 12 Jan 2023 18:31:00 GMT
- Title: Multi-Power Level $Q$-Learning Algorithm for Random Access in NOMA mMTC
Systems
- Authors: Giovanni Maciel Ferreira Silva, Taufik Abrão
- Abstract summary: Massive machine-type communications (mMTC) will be part of new services planned for beyond-fifth-generation (B5G) wireless communication.
The massive random access (RA) problem arises when two or more devices collide by selecting the same resource block.
We propose a multi-power level QL (MPL-QL) algorithm that uses a non-orthogonal multiple access (NOMA) transmission scheme to generate transmit-power diversity.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The massive machine-type communications (mMTC) service will be part of new
services planned for beyond-fifth-generation (B5G) wireless communication. In mMTC,
thousands of devices sporadically access available resource blocks on the network.
In this scenario, the massive random access (RA) problem arises when two or more
devices collide by selecting the same resource block.
There are several techniques to deal with this problem. One of them deploys
$Q$-learning (QL), in which devices store in their $Q$-table the rewards sent
by the central node that indicate the quality of the transmission performed.
The device learns the best resource blocks to select and transmit to avoid
collisions. We propose a multi-power level QL (MPL-QL) algorithm that uses a
non-orthogonal multiple access (NOMA) transmission scheme to generate transmit-power
diversity and accommodate more than one device in the same time-slot, as long as
the signal-to-interference-plus-noise ratio (SINR) exceeds a threshold value. The
numerical results reveal that the best performance-complexity trade-off is obtained
with a higher number of power levels, typically eight. The proposed MPL-QL can deliver
better throughput and lower latency compared to other recent QL-based algorithms
found in the literature.
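The abstract describes devices that keep a Q-table of rewards from the central node and, in MPL-QL, choose both a time-slot and a discrete NOMA power level so that colliding devices can still be separated by successive interference cancellation (SIC) when their SINR clears a threshold. The following is a minimal, self-contained sketch of that idea; all parameter values, the +1/-1 reward, and the ideal-SIC receiver model are illustrative assumptions, not the paper's exact formulation.

```python
import random

random.seed(0)

# Hypothetical parameters (illustrative only, not taken from the paper).
N_DEVICES, N_SLOTS, N_LEVELS = 6, 4, 8
POWERS = [1.0 + i for i in range(N_LEVELS)]   # discrete NOMA power levels
SINR_THRESHOLD = 1.0                          # linear scale
NOISE = 0.1
EPSILON, ALPHA = 0.1, 0.5

# One Q-table per device over (slot, power-level) actions.
Q = [[[0.0] * N_LEVELS for _ in range(N_SLOTS)] for _ in range(N_DEVICES)]

def choose_action(d):
    """Epsilon-greedy (slot, power-level) choice for device d."""
    if random.random() < EPSILON:
        return random.randrange(N_SLOTS), random.randrange(N_LEVELS)
    best = max((Q[d][s][p], s, p) for s in range(N_SLOTS)
               for p in range(N_LEVELS))
    return best[1], best[2]

def rewards_for(actions):
    """Central node: per slot, apply ideal SIC in descending power order
    and reward devices whose SINR clears the threshold (+1 / -1)."""
    rewards = [0.0] * N_DEVICES
    for s in range(N_SLOTS):
        users = sorted(((POWERS[p], d) for d, (slot, p) in enumerate(actions)
                        if slot == s), reverse=True)
        remaining = sum(p for p, _ in users)
        for power, d in users:
            remaining -= power            # weaker users are the interference
            sinr = power / (remaining + NOISE)
            rewards[d] = 1.0 if sinr >= SINR_THRESHOLD else -1.0
    return rewards

for frame in range(200):                  # training frames
    actions = [choose_action(d) for d in range(N_DEVICES)]
    rews = rewards_for(actions)
    for d, (s, p) in enumerate(actions):
        Q[d][s][p] += ALPHA * (rews[d] - Q[d][s][p])   # stateless Q-update
```

The key difference from single-power QL is the enlarged action space: with eight power levels, two devices in the same slot need not collide, because the stronger signal is decoded first and subtracted before the weaker one is evaluated.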
Related papers
- Distributed Inference and Fine-tuning of Large Language Models Over The
Internet [91.00270820533272]
Large language models (LLMs) are useful in many NLP tasks and become more capable with size.
These models require high-end hardware, making them inaccessible to most researchers.
We develop fault-tolerant inference algorithms and load-balancing protocols that automatically assign devices to maximize the total system throughput.
arXiv Detail & Related papers (2023-12-13T18:52:49Z)
- Machine learning-based decentralized TDMA for VLC IoT networks [0.9208007322096532]
The proposed algorithm is based on Q-learning, a reinforcement learning algorithm.
The proposed algorithm converges quickly and provides collision-free decentralized TDMA for the network.
arXiv Detail & Related papers (2023-11-23T16:12:00Z)
- Artificial Intelligence Empowered Multiple Access for Ultra Reliable and
Low Latency THz Wireless Networks [76.89730672544216]
Terahertz (THz) wireless networks are expected to catalyze the beyond fifth generation (B5G) era.
To satisfy the ultra-reliability and low-latency demands of several B5G applications, novel mobility management approaches are required.
This article presents a holistic MAC layer approach that enables intelligent user association and resource allocation, as well as flexible and adaptive mobility management.
arXiv Detail & Related papers (2022-08-17T03:00:24Z)
- Federated Learning for Energy-limited Wireless Networks: A Partial Model
Aggregation Approach [79.59560136273917]
Limited communication resources (bandwidth and energy) and data heterogeneity across devices are the main bottlenecks for federated learning (FL).
We first devise a novel FL framework with partial model aggregation (PMA).
The proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets.
arXiv Detail & Related papers (2022-04-20T19:09:52Z)
- Asynchronous Parallel Incremental Block-Coordinate Descent for
Decentralized Machine Learning [55.198301429316125]
Machine learning (ML) is a key technique for big-data-driven modelling and analysis of massive Internet of Things (IoT) based intelligent and ubiquitous computing.
For fast-increasing applications and data amounts, distributed learning is a promising emerging paradigm since it is often impractical or inefficient to share/aggregate data.
This paper studies the problem of training an ML model over decentralized systems, where data are distributed over many user devices.
arXiv Detail & Related papers (2022-02-07T15:04:15Z)
- Throughput and Latency in the Distributed Q-Learning Random Access mMTC
Networks [0.0]
In mMTC mode, with thousands of devices trying to access network resources sporadically, the problem of random access (RA) is crucial.
In this work, we propose a distributed packet-based learning method by varying the reward from the central node that favors devices having a larger number of remaining packets to transmit.
Our numerical results indicated that the proposed distributed packet-based Q-learning method attains a much better throughput-latency trade-off than the alternative independent and collaborative techniques.
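The entry above varies the central node's reward so that devices with more remaining packets are favored. One plausible reward-shaping function in that spirit (illustrative only; the paper's exact formula is not given here) is:

```python
def packet_based_reward(success: bool, remaining_packets: int,
                        max_packets: int) -> float:
    """Scale the base +1/-1 reward by the device's backlog so a successful
    transmission from a heavily backlogged device earns a larger reward
    (hypothetical shaping, not the paper's exact rule)."""
    base = 1.0 if success else -1.0
    return base * (1.0 + remaining_packets / max_packets)

# A device with 8 of 10 packets left earns more for a success
# than one with 1 packet left:
r_busy = packet_based_reward(True, 8, 10)   # 1.8
r_idle = packet_based_reward(True, 1, 10)   # 1.1
```

Plugged into a Q-update, this biases learned slot choices toward devices that still have long queues, which is how the method trades a little fairness for the reported throughput-latency gain.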
arXiv Detail & Related papers (2021-10-30T17:57:06Z)
- A Learning-Based Fast Uplink Grant for Massive IoT via Support Vector
Machines and Long Short-Term Memory [8.864453148536057]
Massive IoT introduced the need for fast uplink grant (FUG) allocation to reduce latency and increase reliability for smart internet-of-things (IoT) applications.
We propose a novel FUG allocation based on a support vector machine (SVM) scheduler.
A long short-term memory (LSTM) architecture is then used for traffic prediction, with correction techniques to overcome prediction errors.
arXiv Detail & Related papers (2021-08-02T11:33:02Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge
Computing [113.52575069030192]
Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
- Learning Centric Power Allocation for Edge Intelligence [84.16832516799289]
Edge intelligence has been proposed, which collects distributed data and performs machine learning at the edge.
This paper proposes a learning centric power allocation (LCPA) method, which allocates radio resources based on an empirical classification error model.
Experimental results show that the proposed LCPA algorithm significantly outperforms other power allocation algorithms.
arXiv Detail & Related papers (2020-07-21T07:02:07Z)
- Scheduling Policy and Power Allocation for Federated Learning in NOMA
Based MEC [21.267954799102874]
Federated learning (FL) is a highly pursued machine learning technique that can train a model centrally while keeping data distributed.
We propose a new scheduling policy and power allocation scheme using non-orthogonal multiple access (NOMA) settings to maximize the weighted sum data rate.
Simulation results show that the proposed scheduling and power allocation scheme can help achieve a higher FL testing accuracy in NOMA based wireless networks.
arXiv Detail & Related papers (2020-06-21T23:07:41Z)