Learning Representations for CSI Adaptive Quantization and Feedback
- URL: http://arxiv.org/abs/2207.06924v1
- Date: Wed, 13 Jul 2022 08:52:13 GMT
- Title: Learning Representations for CSI Adaptive Quantization and Feedback
- Authors: Valentina Rizzello, Matteo Nerini, Michael Joham, Bruno Clerckx and
Wolfgang Utschick
- Abstract summary: We propose an efficient method for CSI adaptive quantization and feedback in frequency division duplexing (FDD) systems.
Existing works mainly focus on the implementation of autoencoder (AE) neural networks for CSI compression.
We recommend two different methods: one based on post-training quantization and one in which the codebook is learned during the training of the AE.
- Score: 51.14360605938647
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we propose an efficient method for channel state information
(CSI) adaptive quantization and feedback in frequency division duplexing (FDD)
systems. Existing works mainly focus on the implementation of autoencoder (AE)
neural networks (NNs) for CSI compression, and consider straightforward
quantization methods, e.g., uniform quantization, which are generally not
optimal. With this strategy, it is hard to achieve a low reconstruction error,
especially when the number of bits reserved for the latent space quantization
is small. To address this issue, we recommend two different methods: one based
on post-training quantization and one in which the codebook is learned during
the training of the AE. Both strategies achieve
better reconstruction accuracy compared to standard quantization techniques.
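As a minimal, hedged illustration of the first strategy (post-training quantization of the AE latent space), the sketch below fits a k-means codebook to latent vectors produced by an already trained encoder and feeds back only the codeword index. The random stand-in data, the bit budget B, and the use of scikit-learn's KMeans are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: post-training codebook quantization of an AE latent space.
# Assumes a trained CSI encoder whose latent vectors are available; random
# data stands in for them here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

N, d = 2048, 32                               # number of latent vectors, latent dimension
latents = rng.standard_normal((N, d)).astype(np.float32)

B = 6                                         # feedback budget: B bits per latent vector
codebook_size = 2 ** B

# Learn the codebook on training latents (post-training, AE weights untouched).
kmeans = KMeans(n_clusters=codebook_size, n_init=10, random_state=0).fit(latents)
codebook = kmeans.cluster_centers_            # shape: (2**B, d)

# At feedback time: the UE sends the B-bit index of the nearest codeword,
# the BS looks up the codeword and runs the decoder on it.
test = rng.standard_normal((16, d)).astype(np.float32)
indices = kmeans.predict(test)
reconstructed = codebook[indices]

print("latent-space MSE:", float(np.mean((test - reconstructed) ** 2)))
```

The second strategy described in the abstract would instead learn the codebook jointly with the AE weights during training rather than fitting it afterwards; the feedback step (transmit an index, look up a codeword, decode) stays the same.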
Related papers
- Enhancing the performance of Variational Quantum Classifiers with hybrid autoencoders [0.0]
We propose an alternative method which reduces the dimensionality of a given dataset by taking into account the specific quantum embedding that comes after.
This method aspires to make quantum machine learning with VQCs more versatile and effective on datasets of high dimension.
arXiv Detail & Related papers (2024-09-05T08:51:20Z)
- Quantification using Permutation-Invariant Networks based on Histograms [47.47360392729245]
Quantification is the supervised learning task in which a model is trained to predict the prevalence of each class in a given bag of examples.
This paper investigates the application of deep neural networks to quantification tasks in scenarios where a symmetric supervised approach can be applied.
We propose HistNetQ, a novel neural architecture that relies on a permutation-invariant representation based on histograms.
arXiv Detail & Related papers (2024-03-22T11:25:38Z)
- Quantization Adaptor for Bit-Level Deep Learning-Based Massive MIMO CSI Feedback [9.320559153486885]
In massive multiple-input multiple-output (MIMO) systems, the user equipment (UE) needs to feed the channel state information (CSI) back to the base station (BS) for the subsequent beamforming.
Deep learning (DL) based methods can compress the CSI at the UE and recover it at the BS, which reduces the feedback cost significantly.
In this paper, we propose an adaptor-assisted quantization strategy for bit-level DL-based CSI feedback.
arXiv Detail & Related papers (2022-11-05T16:30:59Z)
- A Comprehensive Survey on Model Quantization for Deep Neural Networks in Image Classification [0.0]
A promising approach is quantization, in which the full-precision values are stored in low bit-width precision.
We present a comprehensive survey of quantization concepts and methods, with a focus on image classification.
We explain the replacement of floating-point operations with low-cost bitwise operations in a quantized DNN and the sensitivity of different layers in quantization.
arXiv Detail & Related papers (2022-05-14T15:08:32Z)
- Cluster-Promoting Quantization with Bit-Drop for Minimizing Network Quantization Loss [61.26793005355441]
Cluster-Promoting Quantization (CPQ) finds the optimal quantization grids for neural networks.
DropBits is a new bit-drop technique that revises the standard dropout regularization to randomly drop bits instead of neurons.
We experimentally validate our method on various benchmark datasets and network architectures.
arXiv Detail & Related papers (2021-09-05T15:15:07Z)
- Post-Training Quantization for Vision Transformer [85.57953732941101]
We present an effective post-training quantization algorithm for reducing the memory storage and computational costs of vision transformers.
We can obtain an 81.29% top-1 accuracy using DeiT-B model on ImageNet dataset with about 8-bit quantization.
arXiv Detail & Related papers (2021-06-27T06:27:22Z)
- Training Multi-bit Quantized and Binarized Networks with A Learnable Symmetric Quantizer [1.9659095632676098]
Quantizing weights and activations of deep neural networks is essential for deploying them in resource-constrained devices or cloud platforms.
While binarization is a special case of quantization, this extreme case often leads to several training difficulties.
We develop a unified quantization framework, denoted as UniQ, to overcome binarization difficulties.
arXiv Detail & Related papers (2021-04-01T02:33:31Z)
- DAQ: Distribution-Aware Quantization for Deep Image Super-Resolution Networks [49.191062785007006]
Quantizing deep convolutional neural networks for image super-resolution substantially reduces their computational costs.
Existing works either suffer a severe performance drop at ultra-low precision (bit-widths of 4 or lower) or require a heavy fine-tuning process to recover performance.
We propose a novel distribution-aware quantization scheme (DAQ) which facilitates accurate training-free quantization in ultra-low precision.
arXiv Detail & Related papers (2020-12-21T10:19:42Z)
- Optimal Gradient Quantization Condition for Communication-Efficient Distributed Training [99.42912552638168]
Communication of gradients is costly for training deep neural networks with multiple devices in computer vision applications.
In this work, we deduce the optimal condition for both binary and multi-level gradient quantization for any gradient distribution.
Based on the optimal condition, we develop two novel quantization schemes: biased BinGrad and unbiased ORQ for binary and multi-level gradient quantization respectively.
arXiv Detail & Related papers (2020-02-25T18:28:39Z)
- Gradient $\ell_1$ Regularization for Quantization Robustness [70.39776106458858]
We derive a simple regularization scheme that improves robustness against post-training quantization.
By training quantization-ready networks, our approach enables storing a single set of weights that can be quantized on demand to different bit-widths (a minimal sketch of this kind of gradient-penalty regularizer follows the list below).
arXiv Detail & Related papers (2020-02-18T12:31:34Z)
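The gradient $\ell_1$ regularization entry above penalizes the sensitivity of the loss so that a single set of full-precision weights survives post-training quantization to different bit-widths. Below is a minimal PyTorch sketch of one way such a gradient penalty can be added to a training loop; the toy model, data, and the weight `lam` are placeholders, and the code is an interpretation of the idea rather than the paper's implementation.

```python
# Sketch only: L1 penalty on the gradient of the task loss w.r.t. the weights,
# encouraging a flat loss surface that is more robust to quantization noise.
import torch
import torch.nn as nn

torch.manual_seed(0)

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
params = list(model.parameters())
lam = 1e-2                                    # regularization strength (placeholder)

x = torch.randn(64, 16)                       # toy inputs
y = torch.randint(0, 4, (64,))                # toy labels

for step in range(10):
    optimizer.zero_grad()
    task_loss = criterion(model(x), y)

    # Build the gradient with create_graph=True so its L1 norm is differentiable.
    grads = torch.autograd.grad(task_loss, params, create_graph=True)
    grad_l1 = sum(g.abs().sum() for g in grads)

    (task_loss + lam * grad_l1).backward()
    optimizer.step()

print(f"task loss: {task_loss.item():.4f}, gradient L1 norm: {grad_l1.item():.2f}")
```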
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.