Deep Receiver Design for Multi-carrier Waveforms Using CNNs
- URL: http://arxiv.org/abs/2006.02226v1
- Date: Tue, 2 Jun 2020 10:29:05 GMT
- Title: Deep Receiver Design for Multi-carrier Waveforms Using CNNs
- Authors: Yasin Yildirim, Sedat Ozer, Hakan Ali Cirpan
- Abstract summary: We propose to use a convolutional neural network (CNN) for joint detection and demodulation of the received signal at the receiver in wireless environments.
We compare our proposed architecture to the classical methods and demonstrate that our proposed CNN-based architecture can perform better on different multi-carrier waveforms.
- Score: 8.9379057739817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, a deep learning based receiver is proposed for a collection of multi-carrier waveforms including both current and next-generation wireless communication systems. In particular, we propose to use a convolutional neural network (CNN) for joint detection and demodulation of the received signal at the receiver in wireless environments. We compare our proposed architecture to the classical methods and demonstrate in various simulations that our proposed CNN-based architecture can perform better on different multi-carrier waveforms, including OFDM and GFDM. Furthermore, we compare the total number of required parameters of each network to assess its memory requirements.
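The abstract does not spell out the network details, so the following is only a minimal, hypothetical PyTorch sketch of the general idea: a CNN that takes the raw received samples (real and imaginary parts as two input channels) and directly outputs a constellation class per data symbol, folding detection and demodulation into one network. All layer sizes, the QPSK constellation, and the block length are illustrative assumptions, not the authors' architecture.

```python
# Minimal sketch (not the paper's exact architecture): a 1-D CNN that maps a
# block of received complex baseband samples to per-symbol constellation
# classes, i.e. joint detection and demodulation in a single network.
import torch
import torch.nn as nn

class CNNReceiver(nn.Module):
    def __init__(self, n_symbols=64, n_classes=4):      # e.g. 64 data symbols, QPSK
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=5, padding=2),  # 2 channels = real/imag
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.head = nn.Conv1d(64, n_classes, kernel_size=1)  # class logits per sample
        self.pool = nn.AdaptiveAvgPool1d(n_symbols)          # one logit vector per symbol

    def forward(self, y):                  # y: (batch, 2, n_samples) received block
        z = self.features(y)
        return self.pool(self.head(z))     # (batch, n_classes, n_symbols)

# Training would minimise cross-entropy between the logits and the transmitted
# constellation indices over simulated OFDM or GFDM blocks, e.g.:
rx = torch.randn(8, 2, 80)                 # 8 blocks of 80 samples (CP + 64)
labels = torch.randint(0, 4, (8, 64))      # QPSK symbol indices
loss = nn.CrossEntropyLoss()(CNNReceiver()(rx), labels)
```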
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
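As a rough illustration of the multi-level-spike idea mentioned above (the summary gives no details), the snippet below quantises a firing neuron's membrane value into one of 2^b graded spike levels; the threshold and level mapping are assumptions.

```python
# Illustrative sketch only: a spike that carries a b-bit payload modelled as the
# membrane value quantised to 2**b nonzero levels when the neuron fires,
# instead of a binary 0/1 event.
import numpy as np

def multilevel_spike(membrane, threshold=1.0, bits=2):
    """Quantise firing neurons' membrane values into 2**bits graded spike levels."""
    levels = 2 ** bits
    fired = membrane >= threshold
    # map the above-threshold part to [0, 1], then to one of `levels` nonzero values
    graded = np.clip((membrane - threshold) / threshold, 0.0, 1.0)
    q = (np.round(graded * (levels - 1)) + 1) / levels
    return np.where(fired, q, 0.0)         # 0.0 still means "no spike"

print(multilevel_spike(np.array([0.4, 1.0, 1.6, 2.3]), bits=2))
# -> [0.   0.25 0.75 1.  ]
```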
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
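As a small illustration of the "TC" stream described above, the snippet below uses PyWavelets to turn a 1-D signal into a 2-D time-frequency tensor with the Continuous Wavelet Transform; the test signal, scales, and wavelet choice are placeholders, not taken from the paper.

```python
# Sketch of representing a 1-D behavioural signal as a 2-D tensor via the CWT,
# so it can be fed to convolutional layers.
import numpy as np
import pywt

t = np.linspace(0, 4, 512)
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 11 * t)
coefs, freqs = pywt.cwt(signal, scales=np.arange(1, 65), wavelet="morl")
print(coefs.shape)   # (64, 512): a 2-D time-frequency tensor for the conv stream
```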
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- Deep Learning-Based Frequency Offset Estimation [7.143765507026541]
We show the use of deep learning for carrier frequency offset (CFO) estimation by employing a residual network (ResNet) to learn and extract signal features.
In comparison to the commonly used traditional CFO estimation methods, our proposed IQ-ResNet method exhibits superior performance across various scenarios.
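A hedged sketch of what an "IQ-ResNet"-style estimator could look like is given below: a small residual 1-D CNN regressing the normalised CFO directly from raw IQ samples. Depth, widths, and the input length are assumptions, not taken from the paper.

```python
# Sketch of a residual 1-D CNN for CFO regression from IQ samples.
import torch
import torch.nn as nn

class ResBlock1d(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(ch, ch, 3, padding=1), nn.ReLU(),
            nn.Conv1d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return torch.relu(x + self.body(x))          # skip connection

class CFOEstimator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 32, 7, padding=3), nn.ReLU(),  # 2 channels = I and Q
            ResBlock1d(32), ResBlock1d(32),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, 1),                           # normalised CFO estimate
        )
    def forward(self, iq):                              # iq: (batch, 2, n_samples)
        return self.net(iq).squeeze(-1)

cfo_hat = CFOEstimator()(torch.randn(4, 2, 1024))       # 4 example bursts
```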
arXiv Detail & Related papers (2023-11-08T13:56:22Z)
- On Neural Architectures for Deep Learning-based Source Separation of Co-Channel OFDM Signals [104.11663769306566]
We study the single-channel source separation problem involving orthogonal frequency-division multiplexing (OFDM) signals.
We propose critical domain-informed modifications to the network parameterization, based on insights from OFDM structures.
arXiv Detail & Related papers (2023-03-11T16:29:13Z)
- Streamable Neural Fields [5.404549859703572]
We propose streamable neural fields, a single model that consists of executable sub-networks of various widths.
The proposed architectural and training techniques enable a single network to be streamable over time and reconstruct different qualities and parts of signals.
Experimental results have shown the effectiveness of our method in various domains, such as 2D images, videos, and 3D signed distance functions.
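The summary above suggests sub-networks of various widths sharing one set of weights; below is a toy sketch of that idea, where a prefix of each layer's units forms a smaller, executable network. The layer count, widths, and 2-D coordinate/RGB setting are assumptions.

```python
# Toy "executable sub-networks of various widths": a coordinate MLP whose first
# `width` hidden units per layer form a self-contained smaller network, so a
# prefix of the weights can be streamed and run before the rest arrives.
import torch
import torch.nn as nn

class SliceableField(nn.Module):
    def __init__(self, full_width=256):
        super().__init__()
        self.fc1 = nn.Linear(2, full_width)        # 2-D coordinate input
        self.fc2 = nn.Linear(full_width, full_width)
        self.out = nn.Linear(full_width, 3)        # RGB output

    def forward(self, xy, width=None):
        w = width or self.fc1.out_features
        h = torch.relu(nn.functional.linear(xy, self.fc1.weight[:w], self.fc1.bias[:w]))
        h = torch.relu(nn.functional.linear(h, self.fc2.weight[:w, :w], self.fc2.bias[:w]))
        return nn.functional.linear(h, self.out.weight[:, :w], self.out.bias)

field = SliceableField()
coords = torch.rand(1024, 2)
low_quality = field(coords, width=64)   # usable with only a prefix of the weights
full_quality = field(coords)            # full-width reconstruction
```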
arXiv Detail & Related papers (2022-07-20T05:42:02Z)
- Neural Network-based OFDM Receiver for Resource Constrained IoT Devices [44.8697473676516]
We explore a novel, modular Machine Learning (ML)-based receiver chain design for the Internet of Things (IoT).
ML blocks replace the individual processing blocks of an OFDM receiver; we describe how the legacy channel estimation, symbol demapping, and decoding blocks are swapped for Neural Networks (NNs).
Our evaluations demonstrate that the proposed modular NN-based receiver improves the bit error rate of the traditional non-ML receiver by an average of 61% and 10% on the simulated and over-the-air datasets, respectively.
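As one concrete example of this modular swapping (not the paper's exact blocks), the sketch below replaces a classical symbol demapper with a small neural network that maps an equalised symbol to soft bit estimates; the 16-QAM assumption and layer sizes are illustrative.

```python
# Sketch of a neural symbol-demapping block: one equalised complex symbol in,
# one soft value per bit out.
import torch
import torch.nn as nn

class NeuralDemapper(nn.Module):
    def __init__(self, bits_per_symbol=4):            # e.g. 16-QAM
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 32), nn.ReLU(),               # input: (Re, Im) of one symbol
            nn.Linear(32, 32), nn.ReLU(),
            nn.Linear(32, bits_per_symbol),            # one logit per bit
        )
    def forward(self, sym):                            # sym: (batch, 2)
        return self.net(sym)                           # sigmoid -> bit probabilities

soft_bits = torch.sigmoid(NeuralDemapper()(torch.randn(8, 2)))
```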
arXiv Detail & Related papers (2022-05-12T15:32:35Z)
- Multi-task Learning Approach for Modulation and Wireless Signal Classification for 5G and Beyond: Edge Deployment via Model Compression [1.218340575383456]
Future communication networks must address the scarcity of spectrum to accommodate the growth of heterogeneous wireless devices.
We exploit the potential of a deep neural network based multi-task learning framework to simultaneously learn modulation and signal classification tasks.
We provide a comprehensive heterogeneous wireless signals dataset for public use.
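A minimal sketch of such a multi-task setup is shown below: a shared convolutional backbone over IQ samples with separate heads for modulation and signal-type classification. Class counts and layer sizes are assumptions, not taken from the paper.

```python
# Sketch of a shared backbone with two task heads trained jointly.
import torch
import torch.nn as nn

class MultiTaskClassifier(nn.Module):
    def __init__(self, n_modulations=8, n_signal_types=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(2, 32, 7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.mod_head = nn.Linear(64, n_modulations)
        self.sig_head = nn.Linear(64, n_signal_types)

    def forward(self, iq):                     # iq: (batch, 2, n_samples)
        z = self.backbone(iq)
        return self.mod_head(z), self.sig_head(z)

mod_logits, sig_logits = MultiTaskClassifier()(torch.randn(4, 2, 1024))
# training would sum the two cross-entropy losses (optionally weighted)
```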
arXiv Detail & Related papers (2022-02-26T14:51:02Z)
- Learning to Estimate RIS-Aided mmWave Channels [50.15279409856091]
We focus on uplink cascaded channel estimation, where known and fixed base station combining and RIS phase control matrices are considered for collecting observations.
To boost the estimation performance and reduce the training overhead, the inherent channel sparsity of mmWave channels is leveraged in the deep unfolding method.
It is verified that the proposed deep unfolding network architecture can outperform the least squares (LS) method with a relatively smaller training overhead and online computational complexity.
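Deep unfolding is not detailed in the summary; the generic sketch below unfolds a few ISTA-style iterations for sparse channel recovery and makes each layer's step size and threshold learnable. The real-valued measurement matrix, dimensions, and layer count are placeholders, not the paper's network.

```python
# Generic deep-unfolded ISTA sketch: classical iterations become network layers
# with trainable per-layer step sizes and soft thresholds.
import torch
import torch.nn as nn

class UnfoldedISTA(nn.Module):
    def __init__(self, A, n_layers=5):
        super().__init__()
        self.A = A                                                # measurement matrix (m, n)
        self.step = nn.Parameter(torch.full((n_layers,), 0.1))    # learnable step sizes
        self.thr = nn.Parameter(torch.full((n_layers,), 0.05))    # learnable thresholds

    def forward(self, y):                                         # y: (batch, m)
        h = torch.zeros(y.shape[0], self.A.shape[1])
        for t in range(len(self.step)):
            r = y - h @ self.A.T                                  # residual
            h = h + self.step[t] * (r @ self.A)                   # gradient step
            h = torch.sign(h) * torch.relu(h.abs() - self.thr[t])  # soft threshold
        return h

A = torch.randn(32, 64) / 8                                       # toy sensing matrix
h_hat = UnfoldedISTA(A)(torch.randn(16, 32))
```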
arXiv Detail & Related papers (2021-07-27T06:57:56Z)
- LoRD-Net: Unfolded Deep Detection Network with Low-Resolution Receivers [104.01415343139901]
We propose a deep detector entitled LoRD-Net for recovering information symbols from one-bit measurements.
LoRD-Net has a task-based architecture dedicated to recovering the underlying signal of interest.
We evaluate the proposed receiver architecture for one-bit signal recovery in wireless communications.
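To make the one-bit setting concrete (the toy model below is a placeholder, not LoRD-Net), the snippet shows measurements reduced to the signs of their real and imaginary parts, which is all such a receiver observes before detection.

```python
# Toy one-bit measurement model: the detector only sees sign(Re) + j*sign(Im).
import numpy as np

rng = np.random.default_rng(0)
H = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))      # 8 antennas, 4 users
s = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=4) / np.sqrt(2)  # QPSK symbols
y = H @ s + 0.1 * (rng.standard_normal(8) + 1j * rng.standard_normal(8))
one_bit = np.sign(y.real) + 1j * np.sign(y.imag)   # all the receiver gets to see
```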
arXiv Detail & Related papers (2021-02-05T04:26:05Z)
- Learning to Beamform in Heterogeneous Massive MIMO Networks [48.62625893368218]
Finding the optimal beamformers in massive multiple-input multiple-output (MIMO) networks is a well-known problem.
We propose a novel deep learning based algorithm to address this problem.
arXiv Detail & Related papers (2020-11-08T12:48:06Z)