Throughput and Latency in the Distributed Q-Learning Random Access mMTC
Networks
- URL: http://arxiv.org/abs/2111.00299v1
- Date: Sat, 30 Oct 2021 17:57:06 GMT
- Title: Throughput and Latency in the Distributed Q-Learning Random Access mMTC
Networks
- Authors: Giovanni Maciel Ferreira Silva, Taufik Abrao
- Abstract summary: In mMTC mode, with thousands of devices trying to access network resources sporadically, the problem of random access (RA) is crucial.
In this work, we propose a distributed packet-based learning method by varying the reward from the central node that favors devices having a larger number of remaining packets to transmit.
Our numerical results indicated that the proposed distributed packet-based Q-learning method attains a much better throughput-latency trade-off than the alternative independent and collaborative techniques.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In mMTC mode, with thousands of devices trying to access network resources
sporadically, the problem of random access (RA) and collisions between devices
that select the same resources becomes crucial. A promising approach to solve
such an RA problem is to use learning mechanisms, especially the Q-learning
algorithm, where the devices learn about the best time-slot periods to transmit
through rewards sent by the central node. In this work, we propose a
distributed packet-based learning method by varying the reward from the central
node that favors devices having a larger number of remaining packets to
transmit. Our numerical results indicated that the proposed distributed
packet-based Q-learning method attains a much better throughput-latency
trade-off than the alternative independent and collaborative techniques in
practical scenarios of interest. Moreover, the packet-based technique requires
fewer payload bits than the collaborative Q-learning RA technique to achieve
the same normalized throughput.
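The packet-based scheme described above can be illustrated with a minimal sketch: each device keeps one Q-value per time slot, selects greedily, and the central node's reward for a successful transmission is scaled by the device's remaining-packet count. All names and constants here are illustrative assumptions, not the paper's exact formulation.

```python
import random

random.seed(0)  # reproducible illustration

NUM_SLOTS = 4   # available RA time slots per frame (assumed value)
ALPHA = 0.1     # Q-learning rate (assumed value)

class Device:
    def __init__(self, packets):
        self.q = [0.0] * NUM_SLOTS   # one Q-value per time slot
        self.packets = packets        # remaining packets to transmit

    def choose_slot(self):
        # greedy slot selection; ties broken at random
        best = max(self.q)
        return random.choice([s for s, v in enumerate(self.q) if v == best])

    def update(self, slot, reward):
        # stateless Q-learning update toward the received reward
        self.q[slot] += ALPHA * (reward - self.q[slot])

def run_frame(devices):
    """One RA frame: each backlogged device picks a slot; a collision yields
    a negative reward, while a success yields a reward scaled by the device's
    remaining-packet count (the packet-based reward idea)."""
    choices = {d: d.choose_slot() for d in devices if d.packets > 0}
    for d, slot in choices.items():
        collided = any(o is not d and s == slot for o, s in choices.items())
        if collided:
            d.update(slot, -1.0)
        else:
            d.update(slot, float(d.packets))  # favors larger backlogs
            d.packets -= 1

devices = [Device(packets=p) for p in (5, 3, 1, 4)]
for _ in range(200):
    run_frame(devices)
```

Over repeated frames, collisions push devices toward distinct slots, while the packet-weighted reward lets heavily backlogged devices win the more contested slots.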
Related papers
- Edge-device Collaborative Computing for Multi-view Classification [9.047284788663776]
We explore collaborative inference at the edge, in which edge nodes and end devices share correlated data and the inference computational burden.
We introduce selective schemes that decrease bandwidth resource consumption by effectively reducing data redundancy.
Experimental results highlight that selective collaborative schemes can achieve different trade-offs between the above performance metrics.
arXiv Detail & Related papers (2024-09-24T11:07:33Z) - Federated Learning over a Wireless Network: Distributed User Selection
through Random Access [23.544290667425532]
This study proposes a network-intrinsic approach to distributed user selection.
We manipulate the contention window (CW) size to prioritize certain users for obtaining radio resources in each round of training.
Prioritization is based on the distance between the newly trained local model and the global model of the previous round.
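The CW-manipulation idea above can be sketched as a mapping from model distance to contention-window size: users whose local models diverge more from the previous global model get a smaller CW and thus higher access priority. All function names and CW bounds below are assumptions for illustration, not the paper's actual scheme.

```python
import math
import random

CW_MIN, CW_MAX = 8, 256  # assumed contention-window bounds

def model_distance(local, global_prev):
    # Euclidean distance between two parameter vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(local, global_prev)))

def cw_size(distance, d_max):
    # larger distance -> smaller CW -> earlier expected backoff expiry
    frac = min(distance / d_max, 1.0) if d_max > 0 else 0.0
    return max(CW_MIN, int(CW_MAX * (1.0 - frac)))

def backoff_counter(cw):
    # uniform backoff draw, as in standard CSMA-style contention
    return random.randrange(cw)
```

A smaller CW shortens the expected backoff, so in expectation the most divergent (most informative) users win the channel first in each training round.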
arXiv Detail & Related papers (2023-07-07T02:14:46Z) - Multi-Agent Reinforcement Learning for Network Routing in Integrated
Access Backhaul Networks [0.0]
We aim to maximize packet arrival ratio while minimizing their latency in IAB networks.
To solve this problem, we formulate a multi-agent partially observable Markov decision process (POMDP).
We show that A2C outperforms other reinforcement learning algorithms, leading to increased network efficiency and reduced selfish agent behavior.
arXiv Detail & Related papers (2023-05-12T13:03:26Z) - Decentralized Learning over Wireless Networks: The Effect of Broadcast
with Random Access [56.91063444859008]
We investigate the impact of broadcast transmission and probabilistic random access policy on the convergence performance of D-SGD.
Our results demonstrate that optimizing the access probability to maximize the expected number of successful links is a highly effective strategy for accelerating the system convergence.
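The access-probability optimization mentioned above has a classic closed form: if each of N nodes transmits independently with probability p, the expected number of collision-free links per slot is N·p·(1−p)^(N−1), which is maximized at p = 1/N (the well-known slotted-ALOHA operating point). A quick numerical check:

```python
def expected_successes(n, p):
    # expected number of collision-free transmissions in one slot
    return n * p * (1 - p) ** (n - 1)

n = 20
grid = [i / 1000 for i in range(1, 1000)]
best_p = max(grid, key=lambda p: expected_successes(n, p))
# best_p lands on 1/n = 0.05
```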
arXiv Detail & Related papers (2023-05-12T10:32:26Z) - Asynchronous Parallel Incremental Block-Coordinate Descent for
Decentralized Machine Learning [55.198301429316125]
Machine learning (ML) is a key technique for big-data-driven modelling and analysis of massive Internet of Things (IoT) based intelligent and ubiquitous computing.
For fast-increasing applications and data amounts, distributed learning is a promising emerging paradigm since it is often impractical or inefficient to share/aggregate data.
This paper studies the problem of training an ML model over decentralized systems, where data are distributed over many user devices.
arXiv Detail & Related papers (2022-02-07T15:04:15Z) - Federated Learning over Wireless IoT Networks with Optimized
Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning techniques, has attracted increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z) - Distributed Learning in Wireless Networks: Recent Progress and Future
Challenges [170.35951727508225]
Next-generation wireless networks will enable many machine learning (ML) tools and applications to analyze various types of data collected by edge devices.
Distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges.
This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks.
arXiv Detail & Related papers (2021-04-05T20:57:56Z) - Toward Multiple Federated Learning Services Resource Sharing in Mobile
Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z) - Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge
Computing [113.52575069030192]
Big data, including applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z) - Multi-agent Reinforcement Learning for Resource Allocation in IoT
networks with Edge Computing [16.129649374251088]
It is challenging for end users to offload computation because of their massive demands on spectrum and computing resources.
In this paper, we investigate offloading mechanism with resource allocation in IoT edge computing networks by formulating it as a game.
arXiv Detail & Related papers (2020-04-05T20:59:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.