Channel Estimation under Hardware Impairments: Bayesian Methods versus
Deep Learning
- URL: http://arxiv.org/abs/2208.04033v1
- Date: Mon, 8 Aug 2022 10:32:32 GMT
- Title: Channel Estimation under Hardware Impairments: Bayesian Methods versus
Deep Learning
- Authors: Özlem Tuğfe Demir and Emil Björnson
- Abstract summary: A deep feedforward neural network is designed and trained to estimate the effective channels.
Its performance is compared with state-of-the-art distortion-aware and unaware Bayesian linear minimum mean-squared error (LMMSE) estimators.
- Score: 2.055949720959582
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper considers the impact of general hardware impairments in a
multiple-antenna base station and user equipments on the uplink performance.
First, the effective channels are analytically derived for distortion-aware
receivers when using finite-sized signal constellations. Next, a deep
feedforward neural network is designed and trained to estimate the effective
channels. Its performance is compared with state-of-the-art distortion-aware
and unaware Bayesian linear minimum mean-squared error (LMMSE) estimators. The
proposed deep learning approach improves the estimation quality by exploiting
impairment characteristics, while LMMSE methods treat distortion as noise.
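As a rough illustration of the distortion-unaware LMMSE baseline mentioned above, the sketch below uses a toy single-antenna pilot model with made-up variance values (not the paper's system model): hardware distortion is simply lumped into the noise power. The paper's point is that real distortion is correlated with the transmitted signal, structure this lumping ignores and a trained network can exploit.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
sigma_h2 = 1.0   # channel variance (illustrative)
sigma_d2 = 0.1   # hardware-distortion power (illustrative)
sigma_n2 = 0.05  # thermal-noise power (illustrative)

def crandn(n, var):
    """Zero-mean circularly symmetric complex Gaussian samples."""
    return np.sqrt(var / 2) * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

h = crandn(N, sigma_h2)                             # true channel
y = h + crandn(N, sigma_d2) + crandn(N, sigma_n2)   # unit pilot: y = h + d + n

# Distortion-unaware LMMSE: lump the distortion power into the noise term
w = sigma_h2 / (sigma_h2 + sigma_d2 + sigma_n2)
h_hat = w * y

mse = np.mean(np.abs(h - h_hat) ** 2)
mse_theory = sigma_h2 * (sigma_d2 + sigma_n2) / (sigma_h2 + sigma_d2 + sigma_n2)
```

In this toy the distortion is drawn independently of h, so the lumped estimator is actually optimal here; the gain reported in the abstract comes precisely from the cases where that independence assumption fails.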
Related papers
- Joint Channel Estimation and Feedback with Masked Token Transformers in Massive MIMO Systems [74.52117784544758]
This paper proposes an encoder-decoder based network that unveils the intrinsic frequency-domain correlation within the CSI matrix.
The entire encoder-decoder network is utilized for channel compression.
Our method outperforms state-of-the-art channel estimation and feedback techniques in joint tasks.
arXiv Detail & Related papers (2023-06-08T06:15:17Z)
- An Efficient Machine Learning-based Channel Prediction Technique for OFDM Sub-Bands [0.0]
We propose an efficient machine learning (ML)-based technique for channel prediction in OFDM sub-bands.
The novelty of the proposed approach lies in the training of channel fading samples used to estimate future channel behaviour in selective fading.
arXiv Detail & Related papers (2023-05-31T09:41:27Z)
- Efficient Deep Unfolding for SISO-OFDM Channel Estimation [0.0]
It is possible to perform SISO-OFDM channel estimation using sparse recovery techniques.
In this paper, an unfolded neural network is used to lighten this constraint.
Its unsupervised online learning makes it possible to learn the system's imperfections and thereby enhance estimation performance.
arXiv Detail & Related papers (2022-10-11T11:29:54Z)
- Model-based Deep Learning Receiver Design for Rate-Splitting Multiple Access [65.21117658030235]
This work proposes a novel design for a practical RSMA receiver based on model-based deep learning (MBDL) methods.
The MBDL receiver is evaluated in terms of uncoded Symbol Error Rate (SER), throughput performance through Link-Level Simulations (LLS) and average training overhead.
Results reveal that the MBDL receiver outperforms the SIC receiver with imperfect CSIR by a significant margin.
arXiv Detail & Related papers (2022-05-02T12:23:55Z)
- Learning to Perform Downlink Channel Estimation in Massive MIMO Systems [72.76968022465469]
We study downlink (DL) channel estimation in a Massive multiple-input multiple-output (MIMO) system.
A common approach is to use the mean value as the estimate, motivated by channel hardening.
We propose two novel estimation methods.
arXiv Detail & Related papers (2021-09-06T13:42:32Z)
- Learning to Estimate RIS-Aided mmWave Channels [50.15279409856091]
We focus on uplink cascaded channel estimation, where known and fixed base station combining and RIS phase control matrices are considered for collecting observations.
To boost the estimation performance and reduce the training overhead, the inherent channel sparsity of mmWave channels is leveraged in the deep unfolding method.
It is verified that the proposed deep unfolding network architecture can outperform the least squares (LS) method with a relatively smaller training overhead and online computational complexity.
arXiv Detail & Related papers (2021-07-27T06:57:56Z)
- Channel Estimation via Successive Denoising in MIMO OFDM Systems: A Reinforcement Learning Approach [23.57305243608369]
We present a frequency-domain denoising method based on a reinforcement learning framework.
Our algorithm provides a significant improvement over the practical least squares (LS) estimation method.
arXiv Detail & Related papers (2021-01-25T18:33:54Z)
- Deep Denoising Neural Network Assisted Compressive Channel Estimation for mmWave Intelligent Reflecting Surfaces [99.34306447202546]
This paper proposes a deep denoising neural network assisted compressive channel estimation for mmWave IRS systems.
We first introduce a hybrid passive/active IRS architecture, where very few receive chains are employed to estimate the uplink user-to-IRS channels.
The complete channel matrix can be reconstructed from the limited measurements based on compressive sensing.
arXiv Detail & Related papers (2020-06-03T12:18:57Z)
- Data-Driven Symbol Detection via Model-Based Machine Learning [117.58188185409904]
We review a data-driven framework to symbol detection design which combines machine learning (ML) and model-based algorithms.
In this hybrid approach, well-known channel-model-based algorithms are augmented with ML-based algorithms to remove their channel-model-dependence.
Our results demonstrate that these techniques can yield near-optimal performance of model-based algorithms without knowing the exact channel input-output statistical relationship.
arXiv Detail & Related papers (2020-02-14T06:58:27Z)
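The hybrid idea in the last entry can be caricatured in a few lines: a purely model-based BPSK slicer, sign(y), assumes a known, unbiased channel, while a small learned decision rule removes that model dependence by fitting the threshold from labeled data. Everything below (channel gain, bias, noise level, the logistic rule itself) is an illustrative toy, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(2)
a, b = 0.8, 0.5  # unknown channel gain and bias; sign(y) wrongly assumes a>0, b=0

# Training data: transmitted BPSK symbols and received samples
s_train = rng.choice([-1.0, 1.0], 2000)
y_train = a * s_train + b + 0.3 * rng.standard_normal(2000)

# Data-driven augmentation: fit a scalar logistic decision rule by gradient descent
w, c = 0.0, 0.0
t = (s_train + 1) / 2  # labels in {0, 1}
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(w * y_train + c)))  # predicted P(s = +1 | y)
    w -= 1.0 * np.mean((p - t) * y_train)
    c -= 1.0 * np.mean(p - t)

# Test: the learned threshold compensates the bias the model-based slicer ignores
s_test = rng.choice([-1.0, 1.0], 2000)
y_test = a * s_test + b + 0.3 * rng.standard_normal(2000)
naive_err = np.mean(np.sign(y_test) != s_test)
learned_err = np.mean(np.where(w * y_test + c > 0, 1.0, -1.0) != s_test)
```

The same pattern scales up in the reviewed work: keep the model-based algorithm's structure and replace only the components that depend on unknown channel statistics with learned ones.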
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of using it.