Enhancing Deep Learning Performance of Massive MIMO CSI Feedback
- URL: http://arxiv.org/abs/2208.11333v1
- Date: Wed, 24 Aug 2022 07:08:31 GMT
- Title: Enhancing Deep Learning Performance of Massive MIMO CSI Feedback
- Authors: Sijie Ji, Mo Li
- Abstract summary: We propose a jigsaw puzzles aided training strategy (JPTS) to enhance the deep learning-based Massive multiple-input multiple-output (MIMO) CSI feedback approaches.
Experimental results show that by adopting this training strategy, the accuracy can be boosted by 12.07% and 7.01% on average in indoor and outdoor environments.
- Score: 7.63185216082836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: CSI feedback is an important problem of Massive multiple-input
multiple-output (MIMO) technology because the feedback overhead is proportional
to the number of sub-channels and the number of antennas, both of which scale
with the size of the Massive MIMO system. Deep learning-based CSI feedback
methods have been widely adopted recently owing to their superior performance.
Despite the success, current approaches have not fully exploited the
relationship between the characteristics of CSI data and the deep learning
framework. In this paper, we propose a jigsaw puzzles aided training strategy
(JPTS) to enhance the deep learning-based Massive MIMO CSI feedback approaches
by maximizing mutual information between the original CSI and the compressed
CSI. We apply JPTS on top of existing state-of-the-art methods. Experimental
results show that by adopting this training strategy, the accuracy can be
boosted by 12.07% and 7.01% on average in indoor and outdoor environments,
respectively. The proposed method can be readily adopted into existing deep learning
frameworks for Massive MIMO CSI feedback. Code for JPTS is available on GitHub
for reproducibility.
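The jigsaw-puzzle idea behind JPTS can be illustrated with a minimal sketch: partition the CSI matrix into patches and permute them, producing an auxiliary self-supervised signal (predicting the permutation) alongside the usual compression objective. The function name, patch size, and interface below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def jigsaw_shuffle(csi, patch=8, rng=None):
    """Split a 2-D CSI matrix into non-overlapping patches and permute them.

    Returns the shuffled matrix and the permutation, which a network could be
    trained to predict as a jigsaw-style auxiliary task.
    """
    rng = rng or np.random.default_rng(0)
    h, w = csi.shape
    assert h % patch == 0 and w % patch == 0, "patch must tile the CSI matrix"
    # Break the matrix into a (h/patch) x (w/patch) grid of patches.
    grid = csi.reshape(h // patch, patch, w // patch, patch).transpose(0, 2, 1, 3)
    flat = grid.reshape(-1, patch, patch)
    perm = rng.permutation(len(flat))
    # Reassemble the permuted patches into a full-size matrix.
    shuffled = flat[perm].reshape(h // patch, w // patch, patch, patch)
    out = shuffled.transpose(0, 2, 1, 3).reshape(h, w)
    return out, perm

csi = np.arange(32 * 32, dtype=float).reshape(32, 32)
shuffled, perm = jigsaw_shuffle(csi)
```

The shuffled matrix contains exactly the same entries as the original, only rearranged, so the pretext task changes spatial structure without changing the data distribution per entry.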
Related papers
- Physics-Inspired Deep Learning Anti-Aliasing Framework in Efficient Channel State Feedback [25.68689988641748]
This work introduces a new CSI upsampling framework at the gNB as a post-processing solution to address the gaps caused by undersampling.
We also develop a learning-based method that integrates the proposed algorithm with the Iterative Shrinkage-Thresholding Algorithm Net (ISTA-Net) architecture.
Our numerical results show that both our rule-based and deep learning methods significantly outperform traditional techniques and current state-of-the-art approaches in terms of performance.
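ISTA-Net, mentioned above, unrolls the classic Iterative Shrinkage-Thresholding Algorithm into network layers. As background, each ISTA iteration is a gradient step on the data-fit term followed by soft-thresholding; the sketch below shows the classic algorithm (problem sizes and parameters are illustrative), with learned transforms and thresholds replacing the fixed ones in ISTA-Net.

```python
import numpy as np

def soft_threshold(x, theta):
    """Element-wise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(A, y, lam=0.01, iters=500):
    """Classic ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1.

    Each unrolled ISTA-Net layer follows this same two-part update,
    with the transform and threshold learned from data.
    """
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, L = Lipschitz const of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

# Recover a sparse vector from noiseless compressed measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 50, 97]] = [1.0, -2.0, 1.5]
x_hat = ista(A, A @ x_true)
```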
arXiv Detail & Related papers (2024-03-12T23:40:51Z)
- A Low-Overhead Incorporation-Extrapolation based Few-Shot CSI Feedback Framework for Massive MIMO Systems [45.22132581755417]
Accurate channel state information (CSI) is essential for downlink precoding in frequency division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems.
However, obtaining CSI through feedback from the user equipment (UE) becomes challenging with the increasing scale of antennas and subcarriers.
Deep learning-based methods have emerged for compressing CSI but these methods require substantial collected samples.
Existing deep learning methods also suffer from dramatically growing feedback overhead owing to their focus on full-dimensional CSI feedback.
We propose a low-overhead Incorporation-Extrapolation based Few-Shot CSI feedback framework.
arXiv Detail & Related papers (2023-12-07T06:01:47Z)
- Joint Channel Estimation and Feedback with Masked Token Transformers in Massive MIMO Systems [74.52117784544758]
This paper proposes an encoder-decoder based network that unveils the intrinsic frequency-domain correlation within the CSI matrix.
The entire encoder-decoder network is utilized for channel compression.
Our method outperforms state-of-the-art channel estimation and feedback techniques in joint tasks.
arXiv Detail & Related papers (2023-06-08T06:15:17Z)
- Learning Representations for CSI Adaptive Quantization and Feedback [51.14360605938647]
We propose an efficient method for adaptive quantization and feedback in frequency division duplexing systems.
Existing works mainly focus on the implementation of autoencoder (AE) neural networks for CSI compression.
We recommend two different methods: one based on post-training quantization and a second in which the codebook is found during training of the AE.
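The first of these two options, post-training quantization of the latent code, can be sketched as follows. The function and parameter names are hypothetical; a uniform quantizer stands in for whatever codebook the paper actually uses.

```python
import numpy as np

def quantize_latent(z, bits=4):
    """Uniform post-training quantization of an autoencoder latent vector.

    Maps each entry to one of 2**bits levels spanning [z.min(), z.max()];
    the UE would feed back only the integer indices plus the two range scalars.
    """
    levels = 2 ** bits
    lo, hi = float(z.min()), float(z.max())
    scale = (hi - lo) / (levels - 1)
    idx = np.round((z - lo) / scale).astype(np.int64)  # indices fed back
    z_hat = lo + idx * scale                           # gNB-side dequantization
    return idx, z_hat

rng = np.random.default_rng(1)
z = rng.standard_normal(32)      # latent produced by some CSI encoder
idx, z_hat = quantize_latent(z, bits=4)
```

Rounding to the nearest level bounds the per-entry error by half a quantization step, which is the basic trade-off the paper's learned codebooks aim to beat.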
arXiv Detail & Related papers (2022-07-13T08:52:13Z)
- Overview of Deep Learning-based CSI Feedback in Massive MIMO Systems [77.0986534024972]
Deep learning (DL)-based CSI feedback refers to CSI compression and reconstruction by a DL-based autoencoder and can greatly reduce feedback overhead.
The focus is on novel neural network architectures and utilization of communication expert knowledge to improve CSI feedback accuracy.
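The compress-then-reconstruct pipeline these autoencoders implement can be illustrated with a linear stand-in: a truncated SVD plays the role of the encoder/decoder pair. This is only an analogy for intuition, not any surveyed architecture; a DL autoencoder learns a nonlinear version of the same encode/decode split.

```python
import numpy as np

def svd_codec(H, rank):
    """Linear stand-in for a CSI-feedback autoencoder.

    The UE 'encodes' the CSI matrix H into a rank-r factorization (the
    compressed feedback payload) and the gNB 'decodes' it by multiplying
    the factors back together.
    """
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    code = (U[:, :rank], s[:rank], Vt[:rank])   # feedback payload
    H_hat = (code[0] * code[1]) @ code[2]       # gNB-side reconstruction
    return code, H_hat

rng = np.random.default_rng(0)
H = rng.standard_normal((32, 32))               # toy real-valued CSI matrix
_, H_hat = svd_codec(H, rank=8)
# Normalized MSE: the usual CSI-feedback accuracy metric.
nmse = np.linalg.norm(H - H_hat) ** 2 / np.linalg.norm(H) ** 2
```

The feedback overhead scales with `rank`, and `nmse` falls as `rank` grows; learned autoencoders aim for a better overhead/NMSE trade-off than any linear codec.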
arXiv Detail & Related papers (2022-06-29T03:28:57Z)
- Deep Learning-based Implicit CSI Feedback in Massive MIMO [68.81204537021821]
We propose a DL-based implicit feedback architecture to inherit the low-overhead characteristic, which uses neural networks (NNs) to replace the precoding matrix indicator (PMI) encoding and decoding modules.
For a single resource block (RB), the proposed architecture can save 25.0% and 40.0% of overhead compared with Type I codebook under two antenna configurations.
arXiv Detail & Related papers (2021-05-21T02:43:02Z)
- CLNet: Complex Input Lightweight Neural Network designed for Massive MIMO CSI Feedback [7.63185216082836]
This paper presents CLNet, a novel neural network tailored to the CSI feedback problem based on the intrinsic properties of CSI.
Experimental results show that CLNet outperforms the state-of-the-art method by an average accuracy improvement of 5.41% across outdoor and indoor scenarios.
arXiv Detail & Related papers (2021-02-15T12:16:11Z)
- Deep Learning for Massive MIMO Channel State Acquisition and Feedback [7.111650988432555]
Massive multiple-input multiple-output (MIMO) systems are a key enabler of the demanding throughput requirements of 5G and future-generation wireless networks.
They require accurate and timely channel state information (CSI), which is acquired by a training process.
This paper provides an overview of how neural networks (NNs) can be used in the training process to improve the performance by reducing the CSI acquisition overhead and to reduce complexity.
arXiv Detail & Related papers (2020-02-17T13:16:34Z)
- Large Batch Training Does Not Need Warmup [111.07680619360528]
Training deep neural networks using a large batch size has shown promising results and benefits many real-world applications.
In this paper, we propose a novel Complete Layer-wise Adaptive Rate Scaling (CLARS) algorithm for large-batch training.
Based on our analysis, we bridge the gap and illustrate the theoretical insights for three popular large-batch training techniques.
arXiv Detail & Related papers (2020-02-04T23:03:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.