Tensor Learning-based Precoder Codebooks for FD-MIMO Systems
- URL: http://arxiv.org/abs/2106.11374v1
- Date: Mon, 21 Jun 2021 19:18:39 GMT
- Title: Tensor Learning-based Precoder Codebooks for FD-MIMO Systems
- Authors: Keerthana Bhogi, Chiranjib Saha, and Harpreet S. Dhillon
- Abstract summary: This paper develops an efficient procedure for designing low-complexity codebooks for precoding in a full-dimension (FD) multiple-input multiple-output (MIMO) system.
We utilize a model-free data-driven approach with foundations in machine learning to generate codebooks that adapt to the surrounding propagation conditions.
- Score: 47.562560779723334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper develops an efficient procedure for designing low-complexity
codebooks for precoding in a full-dimension (FD) multiple-input multiple-output
(MIMO) system with a uniform planar array (UPA) antenna at the transmitter (Tx)
using tensor learning. In particular, instead of using statistical channel
models, we utilize a model-free data-driven approach with foundations in
machine learning to generate codebooks that adapt to the surrounding
propagation conditions. We use a tensor representation of the FD-MIMO channel
and exploit its properties to design a quantized version of the channel
precoders. We find the best representation of the optimal precoder as the
Kronecker product (KP) of two low-dimensional precoders, corresponding
respectively to the horizontal and vertical dimensions of the
UPA, obtained from the tensor decomposition of the channel. We then quantize
this precoder to design product codebooks such that an average loss in mutual
information due to quantization of channel state information (CSI) is
minimized. The key technical contribution lies in exploiting the constraints on
the precoders to reduce the product codebook design problem to an unsupervised
clustering problem on a Cartesian Product Grassmann manifold (CPM), where the
cluster centroids form a finite-sized precoder codebook. This codebook can be
found efficiently by running a $K$-means clustering on the CPM. With a suitable
induced distance metric on the CPM, we show that the construction of product
codebooks is equivalent to finding the optimal set of centroids on the factor
manifolds corresponding to the horizontal and vertical dimensions. Simulation
results are presented to demonstrate the capability of the proposed design
criterion in learning the codebooks and the attractive performance of the
designed codebooks.
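As a rough illustration of the pipeline described above, the following Python sketch (not the authors' implementation; the array dimensions, rank-1 factor precoders, chordal distance, and i.i.d. Rayleigh training channels are all assumptions made for this example) extracts horizontal and vertical factor precoders from mode unfoldings of the channel tensor, runs a K-means-style clustering on each factor Grassmann manifold, and assembles the product codebook from Kronecker products of the factor centroids.

```python
# Illustrative sketch only: shows the flavor of the approach under simplified assumptions.
import numpy as np

rng = np.random.default_rng(0)
Nh, Nv, Nr = 4, 4, 2          # assumed UPA: Nh x Nv Tx elements, Nr Rx antennas


def factor_precoders(H):
    """Rank-1 horizontal/vertical precoders from the (Nr, Nh, Nv) channel tensor."""
    T = H.reshape(Nr, Nh, Nv)
    uh = np.linalg.svd(T.transpose(1, 0, 2).reshape(Nh, -1))[0][:, :1]  # horizontal mode
    uv = np.linalg.svd(T.transpose(2, 0, 1).reshape(Nv, -1))[0][:, :1]  # vertical mode
    return uh, uv               # the KP precoder is approximated by kron(uh, uv)


def chordal_dist(u, v):
    """Chordal distance between the 1-D subspaces spanned by unit vectors u and v."""
    return float(np.sqrt(max(1.0 - np.abs(np.vdot(u, v)) ** 2, 0.0)))


def grassmann_kmeans(points, K, iters=30):
    """K-means on the Grassmannian G(1, N): each centroid is the principal
    eigenvector of sum_i u_i u_i^H over its cluster (extrinsic/chordal mean)."""
    centroids = [points[i] for i in rng.choice(len(points), K, replace=False)]
    for _ in range(iters):
        labels = [int(np.argmin([chordal_dist(p, c) for c in centroids])) for p in points]
        for k in range(K):
            members = [p for p, lab in zip(points, labels) if lab == k]
            if members:
                _, V = np.linalg.eigh(sum(p @ p.conj().T for p in members))
                centroids[k] = V[:, [-1]]
    return centroids


# Toy training channels (i.i.d. Rayleigh purely for illustration; the paper's point is
# to learn the codebook from site-specific channel data rather than a statistical model).
channels = [(rng.standard_normal((Nr, Nh * Nv)) + 1j * rng.standard_normal((Nr, Nh * Nv)))
            / np.sqrt(2) for _ in range(200)]
uh_set, uv_set = zip(*(factor_precoders(H) for H in channels))

# Cluster each factor manifold separately, then form the product codebook.
Ch = grassmann_kmeans(list(uh_set), K=4)
Cv = grassmann_kmeans(list(uv_set), K=4)
codebook = [np.kron(ch, cv) for ch in Ch for cv in Cv]   # 16 unit-norm KP precoders
print(len(codebook), codebook[0].shape)                  # -> 16 (16, 1)
```

In the paper, the quantizer is designed to minimize the average mutual-information loss under a suitable induced distance metric on the Cartesian Product Grassmann manifold; the chordal distance and rank-1 factors above are simplifications chosen to keep the sketch short.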
Related papers
- Factor Graph Optimization of Error-Correcting Codes for Belief Propagation Decoding [62.25533750469467]
Low-Density Parity-Check (LDPC) codes possess several advantages over other families of codes.
The proposed approach is shown to outperform the decoding performance of existing popular codes by orders of magnitude.
arXiv Detail & Related papers (2024-06-09T12:08:56Z)
- Ultrafast jet classification on FPGAs for the HL-LHC [33.87493147633063]
Three machine learning models are used to perform jet origin classification.
These models are optimized for deployment on a field-programmable gate array device.
arXiv Detail & Related papers (2024-02-02T20:02:12Z)
- Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning [0.11805137592431453]
The paper introduces the application of information geometry to describe the ground states of Ising models.
The approach establishes a connection between machine learning and error-correcting coding.
arXiv Detail & Related papers (2023-07-28T19:38:13Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
- Fundamental Limits of Two-layer Autoencoders, and Achieving Them with Gradient Methods [91.54785981649228]
This paper focuses on non-linear two-layer autoencoders trained in the challenging proportional regime.
Our results characterize the minimizers of the population risk, and show that such minimizers are achieved by gradient methods.
For the special case of a sign activation function, our analysis establishes the fundamental limits for the lossy compression of Gaussian sources via (shallow) autoencoders.
arXiv Detail & Related papers (2022-12-27T12:37:34Z)
- Massive MIMO Beam Management in Sub-6 GHz 5G NR [46.71738320970658]
Beam codebooks are a new feature of massive multiple-input multiple-output (M-MIMO) in 5G new radio (NR).
We show that machine learning can be used to train site-specific codebooks for initial access.
arXiv Detail & Related papers (2022-04-12T19:51:43Z)
- Learning on a Grassmann Manifold: CSI Quantization for Massive MIMO Systems [37.499485219254545]
This paper focuses on the design of beamforming codebooks that maximize the average normalized beamforming gain for any underlying channel distribution.
We utilize a model-free data-driven approach with foundations in machine learning to generate beamforming codebooks that adapt to the surrounding propagation conditions.
arXiv Detail & Related papers (2020-05-18T01:01:36Z)