A Learning-Based Fast Uplink Grant for Massive IoT via Support Vector
Machines and Long Short-Term Memory
- URL: http://arxiv.org/abs/2108.10070v1
- Date: Mon, 2 Aug 2021 11:33:02 GMT
- Title: A Learning-Based Fast Uplink Grant for Massive IoT via Support Vector
Machines and Long Short-Term Memory
- Authors: Eslam Eldeeb, Mohammad Shehab, and Hirley Alves
- Abstract summary: 3GPP introduced the need to use fast uplink grant (FUG) allocation in order to reduce latency and increase reliability for smart internet-of-things (IoT) applications.
We propose a novel FUG allocation based on a support vector machine (SVM) scheduler: first, MTC devices are prioritized using an SVM classifier.
Second, an LSTM architecture is used for traffic prediction, with correction techniques to overcome prediction errors.
- Score: 8.864453148536057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The current random access (RA) allocation techniques suffer from congestion
and high signaling overhead while serving massive machine type communication
(mMTC) applications. To this end, 3GPP introduced the need to use fast uplink
grant (FUG) allocation in order to reduce latency and increase reliability for
smart internet-of-things (IoT) applications with strict QoS constraints. We
propose a novel FUG allocation based on support vector machine (SVM). First,
MTC devices are prioritized using an SVM classifier. Second, an LSTM architecture is
used for traffic prediction and correction techniques to overcome prediction
errors. Both results are combined to build a resource scheduler that is efficient in
terms of average latency and total throughput. A Coupled Markov Modulated
Poisson Process (CMMPP) traffic model with mixed alarm and regular traffic is
applied to compare the proposed FUG allocation to other existing allocation
techniques. In addition, an extended traffic model based on CMMPP is used to
evaluate the proposed algorithm in a denser network. We test the proposed
scheme using real-time measurement data collected from the Numenta Anomaly
Benchmark (NAB) database. Our simulation results show the proposed model
outperforms the existing RA allocation schemes, achieving the highest
throughput and the lowest access delay (on the order of 1 ms) with a
prediction accuracy of 98% when serving the target massive and critical MTC
applications with a limited number of resources.
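As a rough illustration of the two-stage idea in the abstract, the sketch below prioritizes devices with an SVM classifier and grants the limited uplink resources to the highest-priority devices that a traffic predictor marks as active. The features, toy data, and scheduling rule are assumptions for illustration, not the authors' implementation.

```python
# Sketch of the two-stage idea: an SVM assigns priorities to MTC devices and the
# scheduler grants the limited fast-uplink resources to the highest-priority
# devices that are predicted to be active. Features, labels, and data are hypothetical.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical per-device features: [buffer age, packet criticality, past activity rate]
X_train = rng.random((200, 3))
y_train = (0.6 * X_train[:, 1] + 0.4 * X_train[:, 0] > 0.5).astype(int)  # 1 = high priority

svm = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

def allocate_fug(features, predicted_active, n_resources):
    """Grant fast-uplink resources to predicted-active devices, ordered by SVM priority."""
    priority = svm.predict_proba(features)[:, 1]     # P(high priority) per device
    candidates = np.flatnonzero(predicted_active)    # e.g. output of an LSTM traffic predictor
    ranked = candidates[np.argsort(-priority[candidates])]
    return ranked[:n_resources]                      # device indices that receive a grant

# Example: 50 devices, 8 uplink resources in this scheduling interval
features = rng.random((50, 3))
predicted_active = rng.random(50) < 0.3
print(allocate_fug(features, predicted_active, n_resources=8))
```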
Related papers
- HiRE: High Recall Approximate Top-$k$ Estimation for Efficient LLM
Inference [68.59839755875252]
HiRE comprises two novel components: (i) a compression scheme to cheaply predict top-$k$ rows/columns with high recall, followed by full computation restricted to the predicted subset, and (ii) DA-TOP-$k$: an efficient multi-device approximate top-$k$ operator.
We demonstrate that on a one-billion-parameter model, HiRE applied to both the softmax and the feedforward layers achieves almost matching pretraining and downstream accuracy, and speeds up inference latency by $1.47\times$ on a single TPUv5e device.
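A minimal sketch of the general two-phase pattern described above (cheap approximate scoring to shortlist candidates, then exact computation on the shortlist); the random low-rank projection used here is only an illustrative stand-in for HiRE's compression scheme, not its actual method.

```python
# Illustrative two-phase approximate top-k: score all rows with a compressed
# weight matrix, keep an enlarged candidate set, then do full computation on it.
import numpy as np

rng = np.random.default_rng(0)
d, n, k, rank = 512, 8192, 16, 32             # hidden dim, output width, top-k, sketch rank

W = rng.standard_normal((n, d)) / np.sqrt(d)  # full weight matrix
P = rng.standard_normal((d, rank)) / np.sqrt(rank)
W_sketch = W @ P                              # cheap low-rank compression (assumed stand-in)

def approx_topk(x, overshoot=4):
    cheap_scores = W_sketch @ (P.T @ x)       # approximate logits, O(n * rank)
    candidates = np.argpartition(-cheap_scores, k * overshoot)[: k * overshoot]
    exact = W[candidates] @ x                 # full computation restricted to the candidates
    return candidates[np.argsort(-exact)[:k]]

x = rng.standard_normal(d)
exact_topk = np.argsort(-(W @ x))[:k]
print(len(set(approx_topk(x)) & set(exact_topk)), "of", k, "recovered")
```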
arXiv Detail & Related papers (2024-02-14T18:04:36Z) - SIMPL: A Simple and Efficient Multi-agent Motion Prediction Baseline for
Autonomous Driving [27.776472262857045]
This paper presents a Simple and effIcient Motion Prediction baseLine (SIMPL) for autonomous vehicles.
We propose a compact and efficient global feature fusion module that performs directed message passing in a symmetric manner.
As a strong baseline, SIMPL exhibits highly competitive performance on Argoverse 1 & 2 motion forecasting benchmarks.
arXiv Detail & Related papers (2024-02-04T15:07:49Z) - Guaranteed Dynamic Scheduling of Ultra-Reliable Low-Latency Traffic via
Conformal Prediction [72.59079526765487]
The dynamic scheduling of ultra-reliable and low-latency traffic (URLLC) in the uplink can significantly enhance the efficiency of coexisting services.
The main challenge is posed by the uncertainty in the process of URLLC packet generation.
We introduce a novel scheduler for URLLC packets that provides formal guarantees on reliability and latency irrespective of the quality of the URLLC traffic predictor.
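The sketch below shows one standard way such a guarantee can be obtained with split conformal prediction: calibrate a margin on past prediction errors so that the reserved resources cover the true packet count with probability at least 1 - alpha, whatever the point predictor. The predictor and the traffic trace are placeholder assumptions, not the paper's scheduler.

```python
# Split-conformal wrapper around a black-box URLLC traffic predictor.
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1                                    # target miss rate

def predictor(history):
    return history[-4:].mean()                 # any black-box point predictor

# Calibration: one-sided nonconformity scores (actual - predicted)
trace = rng.poisson(5, size=1000).astype(float)
scores = [trace[t] - predictor(trace[:t]) for t in range(100, 600)]
q = np.quantile(scores, np.ceil((1 - alpha) * (len(scores) + 1)) / len(scores))

def reserve(history):
    """Number of URLLC resource units to reserve for the next slot."""
    return int(np.ceil(predictor(history) + q))

# Empirical check on held-out slots
covered = [trace[t] <= reserve(trace[:t]) for t in range(600, 1000)]
print("coverage:", np.mean(covered))
```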
arXiv Detail & Related papers (2023-02-15T14:09:55Z) - Federated Learning for Energy-limited Wireless Networks: A Partial Model
Aggregation Approach [79.59560136273917]
Limited communication resources (bandwidth and energy) and data heterogeneity across devices are the main bottlenecks for federated learning (FL).
We first devise a novel FL framework with partial model aggregation (PMA).
The proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets.
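A minimal sketch of the partial-aggregation idea, assuming a FedAvg-style weighted average applied only to a designated subset of parameter blocks while the remaining blocks stay local; the split and weighting rule are illustrative, not the paper's exact PMA design.

```python
# Partial model aggregation: average only the shared parameter blocks across
# clients, keep the remaining blocks personalized on each device.
import numpy as np

def partial_aggregate(client_models, shared_keys, client_weights):
    """FedAvg restricted to `shared_keys`; other blocks are left untouched."""
    w = np.asarray(client_weights, dtype=float)
    w /= w.sum()
    new_models = []
    for m in client_models:
        updated = dict(m)                      # keep local (non-shared) blocks as-is
        for key in shared_keys:
            updated[key] = sum(wi * cm[key] for wi, cm in zip(w, client_models))
        new_models.append(updated)
    return new_models

# Example: 3 clients, aggregate only the feature extractor, keep the head local
clients = [{"feat": np.random.randn(4, 4), "head": np.random.randn(4)} for _ in range(3)]
clients = partial_aggregate(clients, shared_keys=["feat"], client_weights=[100, 50, 50])
```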
arXiv Detail & Related papers (2022-04-20T19:09:52Z) - Collaborative Intelligent Reflecting Surface Networks with Multi-Agent
Reinforcement Learning [63.83425382922157]
Intelligent reflecting surface (IRS) is envisioned to be widely applied in future wireless networks.
In this paper, we investigate a multi-user communication system assisted by cooperative IRS devices with the capability of energy harvesting.
arXiv Detail & Related papers (2022-03-26T20:37:14Z) - AI-aided Traffic Control Scheme for M2M Communications in the Internet
of Vehicles [61.21359293642559]
The dynamics of traffic and the heterogeneous requirements of different IoV applications are not considered in most existing studies.
We consider a hybrid traffic control scheme and use the proximal policy optimization (PPO) method to tackle it.
arXiv Detail & Related papers (2022-03-05T10:54:05Z) - Event-Driven Source Traffic Prediction in Machine-Type Communications
Using LSTM Networks [5.995091801910689]
A Long Short-Term Memory (LSTM) based deep learning approach is proposed for event-driven source traffic prediction.
Our model outperforms existing baseline solutions in resource savings and accuracy by a margin of around 9%.
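A toy sketch of event-driven traffic prediction with an LSTM, assuming a synthetic trace in which events temporarily raise a device's transmission probability; the architecture and data are illustrative, not the paper's model.

```python
# Predict whether a device transmits in the next slot from a window of past activity.
import numpy as np
from tensorflow import keras

rng = np.random.default_rng(0)
T, window = 5000, 20

# Synthetic event-driven trace: an "event" raises the transmission probability
events = rng.random(T) < 0.02
p_tx = np.where(np.convolve(events, np.ones(5), mode="same") > 0, 0.8, 0.05)
activity = (rng.random(T) < p_tx).astype("float32")

X = np.stack([activity[t - window:t] for t in range(window, T)])[..., None]
y = activity[window:T]

model = keras.Sequential([
    keras.Input(shape=(window, 1)),
    keras.layers.LSTM(32),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=3, batch_size=64, validation_split=0.2, verbose=0)
```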
arXiv Detail & Related papers (2021-01-12T09:31:18Z) - Data-Driven Random Access Optimization in Multi-Cell IoT Networks with
NOMA [78.60275748518589]
Non-orthogonal multiple access (NOMA) is a key technology to enable massive machine type communications (mMTC) in 5G networks and beyond.
In this paper, NOMA is applied to improve the random access efficiency in high-density spatially-distributed multi-cell wireless IoT networks.
A novel formulation of random channel access management is proposed, in which the transmission probability of each IoT device is tuned to maximize the geometric mean of users' expected capacity.
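The stated objective can be made concrete with a small numerical sketch: maximize the geometric mean of the users' expected capacities (equivalently, the mean log capacity) over per-device transmission probabilities. The slotted-ALOHA-style collision model below is an assumption used only to give the expected capacity a closed form; it is not the paper's channel model.

```python
# Tune transmission probabilities to maximize the geometric mean of expected capacities.
import numpy as np
from scipy.optimize import minimize

n = 10
rate = np.linspace(0.5, 2.0, n)               # per-device rate when a transmission succeeds

def neg_log_geo_mean(p):
    # Expected capacity under a simple collision model: device i succeeds only
    # if it transmits and no other device does.
    others_silent = np.array([np.prod(np.delete(1 - p, i)) for i in range(n)])
    expected_capacity = rate * p * others_silent
    return -np.mean(np.log(expected_capacity + 1e-12))

res = minimize(neg_log_geo_mean, x0=np.full(n, 0.1), bounds=[(1e-3, 1.0)] * n)
print("optimized transmission probabilities:", np.round(res.x, 3))
```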
arXiv Detail & Related papers (2021-01-02T15:21:08Z) - Fast Grant Learning-Based Approach for Machine Type Communications with
NOMA [19.975709017224585]
We propose a non-orthogonal multiple access (NOMA)-based communication framework that allows machine type devices (MTDs) to access the network while avoiding congestion.
The proposed technique is a 2-step mechanism that first employs fast uplink grant to schedule the devices without sending a request to the base station (BS).
arXiv Detail & Related papers (2020-08-31T21:14:21Z) - Machine Learning for Predictive Deployment of UAVs with Multiple Access [37.49465317156625]
In this paper, a machine learning deployment framework of unmanned aerial vehicles (UAVs) is studied.
Due to time-varying traffic distribution, a long short-term memory (LSTM) based prediction is introduced to predict the future cellular traffic.
The proposed method can reduce the total power consumption by up to 24% compared to the conventional method.
arXiv Detail & Related papers (2020-03-02T00:15:09Z)