PRONTO: Preamble Overhead Reduction with Neural Networks for Coarse
Synchronization
- URL: http://arxiv.org/abs/2112.10885v1
- Date: Mon, 20 Dec 2021 22:18:28 GMT
- Title: PRONTO: Preamble Overhead Reduction with Neural Networks for Coarse
Synchronization
- Authors: Nasim Soltani, Debashri Roy, and Kaushik Chowdhury
- Abstract summary: In IEEE 802.11 WiFi-based waveforms, the receiver performs coarse time and frequency synchronization using the first field of the preamble known as the legacy short training field (L-STF)
With the goal of reducing communication overhead, we propose a modified waveform, where the preamble length is reduced by eliminating the L-STF.
To decode this modified waveform, we propose a machine learning (ML)-based scheme called PRONTO that performs coarse time and frequency estimations.
- Score: 1.242591017155152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In IEEE 802.11 WiFi-based waveforms, the receiver performs coarse time and
frequency synchronization using the first field of the preamble known as the
legacy short training field (L-STF). The L-STF occupies up to 40% of the
preamble length and takes up to 32 µs of airtime. With the goal of reducing
communication overhead, we propose a modified waveform, where the preamble
length is reduced by eliminating the L-STF. To decode this modified waveform,
we propose a machine learning (ML)-based scheme called PRONTO that performs
coarse time and frequency estimations using other preamble fields, specifically
the legacy long training field (L-LTF). Our contributions are threefold: (i) We
present PRONTO featuring customized convolutional neural networks (CNNs) for
packet detection and coarse CFO estimation, along with data augmentation steps
for robust training. (ii) We propose a generalized decision flow that makes
PRONTO compatible with legacy waveforms that include the standard L-STF. (iii)
We validate the outcomes on an over-the-air WiFi dataset from a testbed of
software defined radios (SDRs). Our evaluations show that PRONTO can perform
packet detection with 100% accuracy, and coarse CFO estimation with errors as
small as 3%. We demonstrate that PRONTO provides up to 40% preamble length
reduction with no bit error rate (BER) degradation. Finally, we experimentally
show the speedup achieved by PRONTO through GPU parallelization over the
corresponding CPU-only implementations.
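For context, the coarse CFO estimate that the L-STF conventionally enables is a simple autocorrelation over the field's repeated short sequence. A minimal NumPy sketch of that classical estimator (illustrative only, not the paper's CNN; the 20 MHz sample rate and 16-sample repetition period are the standard WiFi values):

```python
import numpy as np

FS = 20e6   # WiFi baseband sample rate (Hz)
REP = 16    # L-STF short-sequence period in samples

def coarse_cfo_estimate(x, rep=REP, fs=FS):
    # Correlate the signal with a copy delayed by one repetition period;
    # a carrier frequency offset rotates each repetition by a fixed phase,
    # so the angle of the correlation sum encodes the offset.
    r = np.sum(np.conj(x[:-rep]) * x[rep:])
    return np.angle(r) * fs / (2 * np.pi * rep)

# Demo: a periodic training sequence with an injected 50 kHz CFO
rng = np.random.default_rng(0)
base = rng.standard_normal(REP) + 1j * rng.standard_normal(REP)
stf = np.tile(base, 10)                       # 10 repetitions, like the L-STF
n = np.arange(stf.size)
rx = stf * np.exp(2j * np.pi * 50e3 * n / FS)
print(round(coarse_cfo_estimate(rx)))         # → 50000
```

The largest offset such an estimator can resolve unambiguously is fs/(2·rep) = 625 kHz; PRONTO's CNN instead learns a coarse estimate directly from the L-LTF, removing the need for the L-STF.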
Related papers
- Gradient Sparsification for Efficient Wireless Federated Learning with
Differential Privacy [25.763777765222358]
Federated learning (FL) enables distributed clients to collaboratively train a machine learning model without sharing raw data with each other.
As the model size grows, training latency increases due to limited transmission bandwidth, and model utility degrades when differential privacy (DP) protection is applied.
We propose a sparsification-empowered FL framework over wireless channels to improve training efficiency without sacrificing convergence performance.
arXiv Detail & Related papers (2023-04-09T05:21:15Z)
- Decoder Tuning: Efficient Language Understanding as Decoding [84.68266271483022]
We present Decoder Tuning (DecT), which instead optimizes task-specific decoder networks on the output side.
Through gradient-based optimization, DecT can be trained within several seconds and requires only one pre-trained model (PTM) query per sample.
We conduct extensive natural language understanding experiments and show that DecT significantly outperforms state-of-the-art algorithms with a $200\times$ speed-up.
arXiv Detail & Related papers (2022-12-16T11:15:39Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
- NAF: Neural Attenuation Fields for Sparse-View CBCT Reconstruction [79.13750275141139]
This paper proposes a novel and fast self-supervised solution for sparse-view CBCT reconstruction.
The desired attenuation coefficients are represented as a continuous function of 3D spatial coordinates, parameterized by a fully-connected deep neural network.
A learning-based encoder entailing hash coding is adopted to help the network capture high-frequency details.
arXiv Detail & Related papers (2022-09-29T04:06:00Z)
- Extending GCC-PHAT using Shift Equivariant Neural Networks [17.70159660438739]
Methods based on the generalized cross correlation with phase transform (GCC-PHAT) have been widely adopted for speaker localization.
We propose a novel approach to extending the GCC-PHAT, where the received signals are filtered using a shift equivariant neural network.
We show that our model consistently reduces the error of the GCC-PHAT in adverse environments, with guarantees of exact time delay recovery.
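The classical GCC-PHAT estimator that this entry extends can be sketched in a few lines of NumPy (an illustrative sketch of the standard method, not the paper's shift-equivariant filtering):

```python
import numpy as np

def gcc_phat(a, b, fs=1.0):
    # Cross-power spectrum whitened by its magnitude (the PHAT weighting),
    # then inverse-transformed; the peak index is the delay of `a` vs `b`.
    n = len(a) + len(b)
    A = np.fft.rfft(a, n)
    B = np.fft.rfft(b, n)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12
    cc = np.fft.irfft(R, n)
    shift = int(np.argmax(np.abs(cc)))
    if shift > n // 2:
        shift -= n          # map large indices to negative lags
    return shift / fs

# Demo: y is x delayed by 5 samples
rng = np.random.default_rng(1)
x = rng.standard_normal(256)
y = np.concatenate([np.zeros(5), x])[:256]
print(gcc_phat(y, x))  # → 5.0
```

The PHAT weighting discards magnitude information and keeps only phase, which is what makes the estimator robust to reverberation; the paper's contribution is to learn a filter in front of this pipeline while preserving shift equivariance.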
arXiv Detail & Related papers (2022-08-09T10:31:10Z)
- Deep Learning-Based Synchronization for Uplink NB-IoT [72.86843435313048]
We propose a neural network (NN)-based algorithm for device detection and time of arrival (ToA) estimation for the narrowband physical random-access channel (NPRACH) of narrowband internet of things (NB-IoT).
The introduced NN architecture leverages residual convolutional networks as well as knowledge of the preamble structure of the 5G New Radio (5G NR) specifications.
arXiv Detail & Related papers (2022-05-22T12:16:43Z)
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have large numbers of parameters and incur heavy computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
- Adaptive Transmission Scheduling in Wireless Networks for Asynchronous Federated Learning [13.490583662839725]
We study asynchronous federated learning (FL) in a wireless distributed learning network (WDLN).
We formulate an Asynchronous Learning-aware transmission Scheduling (ALS) problem to maximize the effectivity score.
We show via simulations that the models trained by our ALS algorithms achieve performances close to that by an ideal benchmark.
arXiv Detail & Related papers (2021-03-02T02:28:20Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay computation for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
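The bisection idea behind this entry can be sketched generically. Here `feasible` is a hypothetical stand-in for the feasibility test the paper derives, shown only to illustrate the search; monotone feasibility in the bound is assumed:

```python
def bisect_min_threshold(feasible, lo, hi, tol=1e-6):
    # Shrink [lo, hi] around the smallest value at which `feasible` holds,
    # assuming feasibility is monotone: once a bound is achievable, every
    # larger bound is too.
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if feasible(mid):
            hi = mid   # achievable: tighten from above
        else:
            lo = mid   # not achievable: raise the floor
    return hi

# Hypothetical check: can training finish within a delay budget t?
print(round(bisect_min_threshold(lambda t: t >= 2.5, 0.0, 10.0), 3))  # → 2.5
```

Each iteration halves the interval, so the optimum is located to tolerance `tol` in O(log((hi-lo)/tol)) feasibility evaluations.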
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.