Model-Driven Deep Learning for Non-Coherent Massive Machine-Type Communications
- URL: http://arxiv.org/abs/2301.00516v1
- Date: Mon, 2 Jan 2023 04:02:32 GMT
- Title: Model-Driven Deep Learning for Non-Coherent Massive Machine-Type Communications
- Authors: Zhe Ma, Wen Wu, Feifei Gao, Xuemin (Sherman) Shen
- Abstract summary: We investigate the joint device activity and data detection in massive machine-type communications (mMTC) with a one-phase non-coherent scheme.
Due to the correlated sparsity pattern introduced by the non-coherent transmission scheme, the traditional approximate message passing (AMP) algorithm cannot achieve satisfactory performance.
We propose a deep learning modified AMP network (DL-mAMPnet) that enhances the detection performance by effectively exploiting the pilot activity correlation.
- Score: 37.35929546347294
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, we investigate the joint device activity and data detection in
massive machine-type communications (mMTC) with a one-phase non-coherent
scheme, where data bits are embedded in the pilot sequences and the base
station simultaneously detects active devices and their embedded data bits
without explicit channel estimation. Due to the correlated sparsity pattern
introduced by the non-coherent transmission scheme, the traditional approximate
message passing (AMP) algorithm cannot achieve satisfactory performance.
Therefore, we propose a deep learning (DL) modified AMP network (DL-mAMPnet)
that enhances the detection performance by effectively exploiting the pilot
activity correlation. The DL-mAMPnet is constructed by unfolding the AMP
algorithm into a feedforward neural network, which combines the principled
mathematical model of the AMP algorithm with the powerful learning capability
of deep neural networks, thereby benefiting from the advantages of both
techniques. Trainable parameters are introduced in the DL-mAMPnet to
approximate the correlated sparsity pattern and the large-scale fading
coefficient. Moreover, a refinement module is designed to further improve the
detection performance by exploiting the spatial feature induced by the
correlated sparsity pattern. Simulation results demonstrate that the proposed
DL-mAMPnet significantly outperforms traditional algorithms in terms of symbol
error rate.
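The central construction described above (unfolding the AMP iterations into trainable network layers) can be illustrated with a minimal sketch. The snippet below is a generic, hypothetical learned-AMP-style layer written for exposition only: the names UnfoldedAMPLayer, UnfoldedAMPNet, and soft_threshold, the real-valued signal model, the soft-thresholding denoiser, and the scalar learnable step size, threshold, and Onsager weight are all assumptions. The actual DL-mAMPnet additionally learns parameters approximating the pilot activity correlation and the large-scale fading coefficient, operates on the non-coherent mMTC signal model, and appends a spatial refinement module, none of which is reproduced here.

```python
# Hypothetical sketch of deep-unfolded AMP (a LAMP-style network), not the exact
# DL-mAMPnet: each network layer is one AMP-style iteration with its own
# trainable step size, threshold, and Onsager scaling.
import torch
import torch.nn as nn


def soft_threshold(x: torch.Tensor, theta: torch.Tensor) -> torch.Tensor:
    """Elementwise soft-thresholding denoiser eta(x; theta)."""
    return torch.sign(x) * torch.clamp(x.abs() - theta, min=0.0)


class UnfoldedAMPLayer(nn.Module):
    """One AMP-style iteration turned into a trainable network layer."""

    def __init__(self):
        super().__init__()
        self.beta = nn.Parameter(torch.tensor(1.0))     # step size
        self.theta = nn.Parameter(torch.tensor(0.1))    # denoiser threshold
        self.onsager = nn.Parameter(torch.tensor(0.0))  # Onsager correction weight

    def forward(self, x, z, y, A):
        # Residual with a learned Onsager-style correction term.
        z_new = y - A @ x + self.onsager * z
        # Matched-filter update followed by the thresholding denoiser.
        x_new = soft_threshold(x + self.beta * (A.t() @ z_new), self.theta)
        return x_new, z_new


class UnfoldedAMPNet(nn.Module):
    """Stack a fixed number of unfolded layers, each with its own parameters."""

    def __init__(self, num_layers: int = 8):
        super().__init__()
        self.layers = nn.ModuleList([UnfoldedAMPLayer() for _ in range(num_layers)])

    def forward(self, y, A):
        # y: (M, B) received signals, A: (M, N) pilot matrix, x: (N, B) estimate.
        x = torch.zeros(A.shape[1], y.shape[1], device=y.device)
        z = y.clone()
        for layer in self.layers:
            x, z = layer(x, z, y, A)
        return x
```

Training such a network would typically minimize the mean-squared error between the final estimate and the true sparse activity/data matrix, which mirrors the paper's idea of keeping the principled AMP structure while letting data choose the per-layer parameters.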
Related papers
- Adaptive Anomaly Detection in Network Flows with Low-Rank Tensor Decompositions and Deep Unrolling [9.20186865054847]
Anomaly detection (AD) is increasingly recognized as a key component for ensuring the resilience of future communication systems.
This work considers AD in network flows using incomplete measurements.
We propose a novel block-successive convex approximation algorithm based on a regularized model-fitting objective.
Inspired by Bayesian approaches, we extend the model architecture to perform online adaptation to per-flow and per-time-step statistics.
arXiv Detail & Related papers (2024-09-17T19:59:57Z)
- Online Variational Sequential Monte Carlo [49.97673761305336]
We build upon the variational sequential Monte Carlo (VSMC) method, which provides computationally efficient and accurate model parameter estimation and Bayesian latent-state inference.
Online VSMC performs both parameter estimation and particle proposal adaptation efficiently and entirely on-the-fly.
arXiv Detail & Related papers (2023-12-19T21:45:38Z)
- Model-based Deep Learning Receiver Design for Rate-Splitting Multiple Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS) and average training overhead.
Results reveal that the MBDL receiver outperforms the successive interference cancellation (SIC) receiver with imperfect channel state information at the receiver (CSIR) by a significant margin.
arXiv Detail & Related papers (2022-05-02T12:23:55Z)
- Accurate Discharge Coefficient Prediction of Streamlined Weirs by Coupling Linear Regression and Deep Convolutional Gated Recurrent Unit [2.4475596711637433]
The present study proposes data-driven modeling techniques, as an alternative to CFD simulation, to predict the discharge coefficient based on an experimental dataset.
It is found that the proposed three-layer hierarchical DL algorithm, which consists of a convolutional layer coupled with two subsequent GRU levels and is hybridized with the LR method, leads to lower error metrics.
arXiv Detail & Related papers (2022-04-12T01:59:36Z)
- Time-Correlated Sparsification for Efficient Over-the-Air Model Aggregation in Wireless Federated Learning [23.05003652536773]
Federated edge learning (FEEL) is a promising distributed machine learning (ML) framework to drive edge intelligence applications.
We propose time-correlated sparsification with hybrid aggregation (TCS-H) for communication-efficient FEEL.
arXiv Detail & Related papers (2022-02-17T02:48:07Z)
- Hybridization of Capsule and LSTM Networks for unsupervised anomaly detection on multivariate data [0.0]
This paper introduces a novel NN architecture which hybridises the Long-Short-Term-Memory (LSTM) and Capsule Networks into a single network.
The proposed method uses an unsupervised learning technique to overcome the issues with finding large volumes of labelled training data.
arXiv Detail & Related papers (2022-02-11T10:33:53Z)
- Learning to Perform Downlink Channel Estimation in Massive MIMO Systems [72.76968022465469]
We study downlink (DL) channel estimation in a massive multiple-input multiple-output (MIMO) system.
A common approach is to use the mean value as the estimate, motivated by channel hardening.
We propose two novel estimation methods.
arXiv Detail & Related papers (2021-09-06T13:42:32Z)
- SignalNet: A Low Resolution Sinusoid Decomposition and Estimation Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z)
- Data-Driven Symbol Detection via Model-Based Machine Learning [117.58188185409904]
We review a data-driven framework for symbol detection design that combines machine learning (ML) and model-based algorithms.
In this hybrid approach, well-known channel-model-based algorithms are augmented with ML-based algorithms to remove their channel-model dependence.
Our results demonstrate that these techniques can yield near-optimal performance of model-based algorithms without knowing the exact channel input-output statistical relationship.
arXiv Detail & Related papers (2020-02-14T06:58:27Z)
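To make the hybrid philosophy of the last entry above concrete, here is a heavily simplified, hypothetical sketch in which a small neural network supplies the branch metrics of a standard Viterbi trellis search in place of a closed-form channel model. The two-symbol channel memory, BPSK alphabet, network size, use of raw classifier scores as metrics, and the names BranchMetricNet and viterbi_detect are illustrative assumptions; the sketch does not reproduce the specific receivers reviewed in that paper.

```python
# Hypothetical sketch: a learned branch metric plugged into a classical Viterbi
# dynamic program (model-based structure + data-driven component).
import torch
import torch.nn as nn

ALPHABET = 2                     # BPSK symbols, indexed 0/1 (assumption)
MEMORY = 2                       # symbols remembered by a trellis state (assumption)
NUM_STATES = ALPHABET ** MEMORY  # state encodes (x_t, x_{t-1})


class BranchMetricNet(nn.Module):
    """Maps one received sample to an unnormalized log-score per trellis state."""

    def __init__(self, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, NUM_STATES),
        )

    def forward(self, y):          # y: (T, 1) received samples
        return self.net(y)         # (T, NUM_STATES) learned branch metrics


def viterbi_detect(log_scores: torch.Tensor) -> torch.Tensor:
    """Standard Viterbi dynamic program driven by the learned branch metrics."""
    T = log_scores.shape[0]
    path_metric = torch.full((NUM_STATES,), -1e9)
    path_metric[0] = 0.0                                # assume a known start state
    backptr = torch.zeros((T, NUM_STATES), dtype=torch.long)
    for t in range(T):
        new_metric = torch.full((NUM_STATES,), -1e9)
        for s in range(NUM_STATES):
            # Valid predecessors share the overlapping symbol x_{t-1}.
            preds = [(s // ALPHABET) + ALPHABET * j for j in range(ALPHABET)]
            cand = torch.stack([path_metric[p] for p in preds]) + log_scores[t, s]
            best = int(torch.argmax(cand))
            new_metric[s], backptr[t, s] = cand[best], preds[best]
        path_metric = new_metric
    # Trace back the best path and read off the newest symbol of each state.
    states = torch.zeros(T, dtype=torch.long)
    states[-1] = int(torch.argmax(path_metric))
    for t in range(T - 1, 0, -1):
        states[t - 1] = backptr[t, states[t]]
    return states % ALPHABET                            # detected symbol indices


# Usage: train BranchMetricNet as a per-sample state classifier on labeled data,
# then run the Viterbi search on its (detached) outputs, e.g.:
# y = torch.randn(100, 1)
# detected = viterbi_detect(BranchMetricNet()(y).detach())
```

The division of labor is the same one highlighted in that entry and in the main paper: the algorithmic backbone stays model-based, while the part that would otherwise require exact channel knowledge is learned from data.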
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.