Deep Learning-Based Synchronization for Uplink NB-IoT
- URL: http://arxiv.org/abs/2205.10805v1
- Date: Sun, 22 May 2022 12:16:43 GMT
- Title: Deep Learning-Based Synchronization for Uplink NB-IoT
- Authors: Fayçal Aït Aoudia, Jakob Hoydis, Sebastian Cammerer, Matthijs Van Keirsbilck and Alexander Keller
- Abstract summary: We propose a neural network (NN)-based algorithm for device detection and time of arrival (ToA) estimation for the narrowband physical random-access channel (NPRACH) of narrowband internet of things (NB-IoT).
The introduced NN architecture leverages residual convolutional networks as well as knowledge of the preamble structure of the 5G New Radio (5G NR) specifications.
- Score: 72.86843435313048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a neural network (NN)-based algorithm for device detection and
time of arrival (ToA) and carrier frequency offset (CFO) estimation for the
narrowband physical random-access channel (NPRACH) of narrowband internet of
things (NB-IoT). The introduced NN architecture leverages residual
convolutional networks as well as knowledge of the preamble structure of the 5G
New Radio (5G NR) specifications. Benchmarking on a 3rd Generation Partnership
Project (3GPP) urban microcell (UMi) channel model with random drops of users
against a state-of-the-art baseline shows that the proposed method enables up
to 8 dB gains in false negative rate (FNR) as well as significant gains in
false positive rate (FPR) and ToA and CFO estimation accuracy. Moreover, our
simulations indicate that the proposed algorithm enables gains over a wide
range of channel conditions, CFOs, and transmission probabilities. The
introduced synchronization method operates at the base station (BS) and,
therefore, introduces no additional complexity on the user devices. It could
lead to an extension of battery lifetime by reducing the preamble length or the
transmit power.
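The abstract does not give layer-level details of the NN. As an illustration only, the following is a minimal sketch, assuming PyTorch, of how a residual convolutional network could map received NPRACH baseband samples to a detection logit and ToA/CFO estimates. The class names (ResidualBlock, NprachNet), the two-channel real/imaginary input format, and all layer widths are assumptions, not taken from the paper.

```python
# Minimal sketch of a residual-CNN detector/estimator for NPRACH (illustrative,
# not the authors' architecture). Input: received preamble as a 2-channel
# (real/imag) 1D sequence. Outputs: a detection logit and (ToA, CFO) estimates.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        self.body = nn.Sequential(
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel_size, padding=pad),
            nn.BatchNorm1d(channels),
        )
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(x + self.body(x))  # residual (skip) connection

class NprachNet(nn.Module):
    """Joint device detection and ToA/CFO regression (hypothetical layout)."""
    def __init__(self, in_channels: int = 2, width: int = 64, num_blocks: int = 4):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, width, kernel_size=3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(num_blocks)])
        self.pool = nn.AdaptiveAvgPool1d(1)
        self.detect_head = nn.Linear(width, 1)  # logit: preamble present or not
        self.reg_head = nn.Linear(width, 2)     # normalized ToA and CFO estimates

    def forward(self, x):                # x: (batch, 2, num_samples)
        h = self.blocks(self.stem(x))
        h = self.pool(h).squeeze(-1)     # (batch, width)
        return self.detect_head(h), self.reg_head(h)

# Example: a batch of 8 candidate preambles, 256 complex samples each.
net = NprachNet()
iq = torch.randn(8, 2, 256)
detect_logit, toa_cfo = net(iq)
print(detect_logit.shape, toa_cfo.shape)  # torch.Size([8, 1]) torch.Size([8, 2])
```

In such a setup, the detection output would typically be trained with a binary cross-entropy loss and the ToA/CFO output with a regression loss computed only for devices that actually transmit; these training details are likewise assumptions. Since the method runs at the BS, all of this complexity stays on the network side, consistent with the abstract.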
Related papers
- Satellite Federated Edge Learning: Architecture Design and Convergence Analysis [47.057886812985984]
This paper introduces a novel FEEL algorithm, named FEDMEGA, tailored to mega-constellation networks.
By integrating inter-satellite links (ISL) for intra-orbit model aggregation, the proposed algorithm significantly reduces the usage of low-data-rate and intermittent ground-to-satellite links (GSL).
Our proposed method includes a ring all-reduce based intra-orbit aggregation mechanism, coupled with a network flow-based transmission scheme for global model aggregation (a minimal ring all-reduce sketch follows this list).
arXiv Detail & Related papers (2024-04-02T11:59:58Z)
- Power-Efficient Indoor Localization Using Adaptive Channel-aware Ultra-wideband DL-TDOA [7.306334571814026]
We propose and implement a novel low-power channel-aware dynamic frequency DL-TDOA ranging algorithm.
It comprises an NLOS probability predictor based on a convolutional neural network (CNN), a dynamic ranging frequency control module, and an IMU sensor-based ranging filter.
arXiv Detail & Related papers (2024-02-16T09:04:04Z)
- Properties and Potential Applications of Random Functional-Linked Types of Neural Networks [81.56822938033119]
Random functional-linked neural networks (RFLNNs) offer an alternative way of learning in deep structure.
This paper gives some insights into the properties of RFLNNs from the viewpoint of the frequency domain.
We propose a method to generate a BLS network with better performance, and design an efficient algorithm for solving Poisson's equation.
arXiv Detail & Related papers (2023-04-03T13:25:22Z)
- PRONTO: Preamble Overhead Reduction with Neural Networks for Coarse Synchronization [1.242591017155152]
In IEEE 802.11 WiFi-based waveforms, the receiver performs coarse time and frequency synchronization using the first field of the preamble, known as the legacy short training field (L-STF).
With the goal of reducing communication overhead, we propose a modified waveform, where the preamble length is reduced by eliminating the L-STF.
To decode this modified waveform, we propose a machine learning (ML)-based scheme called PRONTO that performs coarse time and frequency estimations.
arXiv Detail & Related papers (2021-12-20T22:18:28Z)
- Waveform Learning for Next-Generation Wireless Communication Systems [16.26230847183709]
We propose a learning-based method for the joint design of a transmit and receive filter, the constellation geometry and associated bit labeling, as well as a neural network (NN)-based detector.
The method maximizes an achievable information rate, while simultaneously satisfying constraints on the adjacent channel leakage ratio (ACLR) and peak-to-average power ratio (PAPR).
arXiv Detail & Related papers (2021-09-02T14:51:16Z)
- Learning to Estimate RIS-Aided mmWave Channels [50.15279409856091]
We focus on uplink cascaded channel estimation, where known and fixed base station combining and RIS phase control matrices are considered for collecting observations.
To boost the estimation performance and reduce the training overhead, the inherent channel sparsity of mmWave channels is leveraged in the deep unfolding method.
It is verified that the proposed deep unfolding network architecture can outperform the least squares (LS) method with a relatively smaller training overhead and online computational complexity.
arXiv Detail & Related papers (2021-07-27T06:57:56Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Energy-Efficient Model Compression and Splitting for Collaborative Inference Over Time-Varying Channels [52.60092598312894]
We propose a technique to reduce the total energy bill at the edge device by utilizing model compression and time-varying model split between the edge and remote nodes.
Our proposed solution results in minimal energy consumption and CO2 emission compared to the considered baselines.
arXiv Detail & Related papers (2021-06-02T07:36:27Z)
- Two-step Machine Learning Approach for Channel Estimation with Mixed Resolution RF Chains [19.0581196881206]
We propose an efficient uplink channel estimator by applying machine learning (ML) algorithms.
In a first step, a conditional generative adversarial network (cGAN) predicts the radio channels from a limited set of full-resolution RF chains to the remaining low-resolution RF chain antenna elements.
A long-short term memory (LSTM) neural network extracts further phase information from the low resolution RF chain antenna elements.
arXiv Detail & Related papers (2021-01-24T12:33:54Z)
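Regarding the ring all-reduce named in the Satellite Federated Edge Learning entry above (the sketch referenced there): the following is a generic NumPy simulation of the ring all-reduce pattern, not FEDMEGA's implementation, and the function name and chunking are illustrative assumptions. Each of the n satellites in an orbit splits its update into n chunks; partial sums circulate over the ring of intra-orbit ISLs for n-1 reduce-scatter steps, then the fully reduced chunks circulate for another n-1 all-gather steps.

```python
# Generic ring all-reduce simulation (illustrative; not FEDMEGA's code).
# Each node (satellite) holds one update vector; after the two phases below,
# every node holds the element-wise sum of all updates.
import numpy as np

def ring_all_reduce(updates):
    """updates: list of equal-length 1D arrays, one per node on the ring."""
    n = len(updates)
    # Each node splits its vector into n chunks.
    chunks = [list(np.array_split(u.astype(float), n)) for u in updates]

    # Phase 1: reduce-scatter. In step s, node r sends chunk (r - s) % n to its
    # ring neighbour (r + 1) % n, which adds it to its own copy of that chunk.
    for s in range(n - 1):
        sends = [(r, (r - s) % n, chunks[r][(r - s) % n].copy()) for r in range(n)]
        for r, idx, data in sends:
            chunks[(r + 1) % n][idx] += data

    # After phase 1, node r holds the fully reduced chunk (r + 1) % n.
    # Phase 2: all-gather. In step s, node r forwards its fully reduced chunk
    # (r + 1 - s) % n to neighbour (r + 1) % n, which overwrites its copy.
    for s in range(n - 1):
        sends = [(r, (r + 1 - s) % n, chunks[r][(r + 1 - s) % n].copy()) for r in range(n)]
        for r, idx, data in sends:
            chunks[(r + 1) % n][idx] = data

    return [np.concatenate(c) for c in chunks]

# Example: 4 satellites on one orbit, 8-dimensional updates.
ups = [np.arange(8.0) * (i + 1) for i in range(4)]
out = ring_all_reduce(ups)
assert all(np.allclose(o, sum(ups)) for o in out)
```

With this pattern each node transmits roughly 2(n-1)/n times the update size per aggregation round over the ISLs, rather than forwarding its full update to a single hub, which is what makes it attractive when ground links are slow or intermittent.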