Variational Autoencoders for Studying the Manifold of Precoding Matrices
with High Spectral Efficiency
- URL: http://arxiv.org/abs/2111.15626v2
- Date: Wed, 1 Dec 2021 06:41:39 GMT
- Title: Variational Autoencoders for Studying the Manifold of Precoding Matrices
with High Spectral Efficiency
- Authors: Evgeny Bobrov (1 and 2), Alexander Markov (3), Dmitry Vetrov (3) ((1)
Moscow Research Center, Huawei Technologies, Russia, (2) M. V. Lomonosov
Moscow State University, Russia, (3) National Research University Higher
School of Economics, Russia)
- Abstract summary: We look at how to use a variational autoencoder to find a precoding matrix with a high Spectral Efficiency (SE).
Our objective is to create a less time-consuming algorithm with minimum quality degradation.
- Score: 47.187609203210705
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In multiple-input multiple-output (MIMO) wireless communications systems,
neural networks have been employed for channel decoding, detection, channel
estimation, and resource management. In this paper, we look at how to use a
variational autoencoder to find a precoding matrix with a high Spectral
Efficiency (SE). To collect optimal precoding matrices, an optimization
approach is used. Our objective is to create a less time-consuming algorithm
with minimum quality degradation. To build precoding matrices, we employed two
forms of variational autoencoders: conventional variational autoencoders (VAE)
and conditional variational autoencoders (CVAE). Both methods may be used to
study a wide range of optimal precoding matrices. To the best of our knowledge,
this is the first published work to construct precoding matrices for the
spectral efficiency (SE) objective function using VAE and CVAE methods.
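The pipeline the abstract describes, a generative model fit to optimizer-produced precoding matrices, can be sketched roughly as follows. This is a minimal illustration under stated assumptions, not the authors' code: the antenna and user counts, layer sizes, and the stacking of real and imaginary parts into one vector are all illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

N_TX, N_USERS = 8, 4                   # assumed antenna / user counts
D = 2 * N_TX * N_USERS                 # real + imaginary parts, flattened
LATENT = 16

class PrecoderVAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(D, 128), nn.ReLU())
        self.mu = nn.Linear(128, LATENT)
        self.logvar = nn.Linear(128, LATENT)
        self.dec = nn.Sequential(nn.Linear(LATENT, 128), nn.ReLU(),
                                 nn.Linear(128, D))

    def forward(self, w):
        h = self.enc(w)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        return self.dec(z), mu, logvar

def vae_loss(recon, w, mu, logvar):
    rec = F.mse_loss(recon, w)                                    # reconstruction term
    kld = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())  # KL to N(0, I)
    return rec + kld

# Toy training step on a batch of optimizer-produced precoders (random here).
model = PrecoderVAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
w_batch = torch.randn(32, D)
recon, mu, logvar = model(w_batch)
loss = vae_loss(recon, w_batch, mu, logvar)
opt.zero_grad(); loss.backward(); opt.step()
```

A CVAE variant would additionally condition the encoder and decoder, for instance by concatenating a channel feature vector to their inputs, so that sampled precoders match a given channel state.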
Related papers
- Learning Linear Block Error Correction Codes [62.25533750469467]
We propose for the first time a unified encoder-decoder training of binary linear block codes.
We also propose a novel Transformer model in which the self-attention masking is performed in a differentiable fashion for the efficient backpropagation of the code gradient.
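As a rough illustration of the masking idea above: a learnable mask can enter the attention scores in log-space so that gradients flow through the masking decision. The sigmoid relaxation and tensor sizes below are assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftMaskedAttention(nn.Module):
    def __init__(self, n_tokens, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.mask_logits = nn.Parameter(torch.zeros(n_tokens, n_tokens))  # learnable mask

    def forward(self, x):                              # x: (batch, n_tokens, dim)
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        scores = q @ k.transpose(-2, -1) / k.size(-1) ** 0.5
        gate = torch.sigmoid(self.mask_logits)         # soft mask in (0, 1)
        # log(gate) ~ -inf where gate -> 0 and ~ 0 where gate -> 1, so the
        # mask suppresses attention while staying differentiable.
        attn = F.softmax(scores + torch.log(gate + 1e-9), dim=-1)
        return attn @ v

x = torch.randn(2, 10, 32)
out = SoftMaskedAttention(10, 32)(x)                   # mask receives gradients
```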
arXiv Detail & Related papers (2024-05-07T06:47:12Z)
- Machine Learning-Aided Efficient Decoding of Reed-Muller Subcodes [59.55193427277134]
Reed-Muller (RM) codes achieve the capacity of general binary-input memoryless symmetric channels.
However, RM codes only admit limited sets of rates.
Efficient decoders are available for RM codes at finite lengths.
arXiv Detail & Related papers (2023-01-16T04:11:14Z)
- On the Use of Modality-Specific Large-Scale Pre-Trained Encoders for Multimodal Sentiment Analysis [27.497457891521538]
Methods with domain-specific pre-trained encoders attain better performance than those with conventional features in both unimodal and multimodal scenarios.
We also find it better to use the outputs of the intermediate layers of the encoders than those of the output layer.
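A minimal sketch of tapping an intermediate layer, assuming a HuggingFace-style pre-trained encoder; the model name, layer index, and mean-pooling are illustrative choices rather than the paper's protocol.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

batch = tok(["this movie was great"], return_tensors="pt")
with torch.no_grad():
    out = model(**batch, output_hidden_states=True)

# hidden_states is a tuple of (num_layers + 1) tensors of shape (batch, seq, dim):
# the embedding output followed by every encoder layer.
layer_k = out.hidden_states[8]        # an intermediate layer (index is illustrative)
feature = layer_k.mean(dim=1)         # mean-pool over tokens -> (batch, dim)
```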
arXiv Detail & Related papers (2022-10-28T06:48:35Z)
- Efficient Nearest Neighbor Search for Cross-Encoder Models using Matrix Factorization [60.91600465922932]
We present an approach that avoids the use of a dual-encoder for retrieval, relying solely on the cross-encoder.
Our approach provides test-time recall-vs-computational cost trade-offs superior to the current widely-used methods.
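One hedged reading of the approach: factorize a matrix of cross-encoder scores offline to obtain item embeddings, then fit a new query's embedding from a few cross-encoder calls at test time. In this sketch, random matrices stand in for real cross-encoder outputs, and the SVD and least-squares details are assumptions rather than the paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, n_train_queries, n_anchors, rank = 1000, 100, 50, 16

# Offline: factorize cross-encoder scores between training queries and all
# items (random numbers stand in for real CE scores).
S_train = rng.normal(size=(n_train_queries, n_items))
U, s, Vt = np.linalg.svd(S_train, full_matrices=False)
item_emb = (np.diag(s[:rank]) @ Vt[:rank]).T          # (n_items, rank)

# Online: call the cross-encoder only on a few anchor items, then fit the
# query embedding by least squares and score the whole corpus cheaply.
anchors = rng.choice(n_items, size=n_anchors, replace=False)
s_anchor = rng.normal(size=n_anchors)                 # stand-in for CE calls
q_emb, *_ = np.linalg.lstsq(item_emb[anchors], s_anchor, rcond=None)
approx_scores = item_emb @ q_emb
top_k = np.argsort(-approx_scores)[:10]               # candidate retrieval
```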
arXiv Detail & Related papers (2022-10-23T00:32:04Z)
- String-based Molecule Generation via Multi-decoder VAE [56.465033997245776]
We investigate the problem of string-based molecular generation via variational autoencoders (VAEs).
We propose a simple, yet effective idea to improve the performance of VAE for the task.
In our experiments, the proposed VAE model particularly performs well for generating a sample from out-of-domain distribution.
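A minimal sketch of the multi-decoder idea: several decoders share one latent code and their reconstruction losses are aggregated. The MLP layers and the simple average are assumptions, not the paper's exact objective.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D, LATENT, K = 64, 8, 3                        # input dim, latent dim, #decoders
enc = nn.Linear(D, 2 * LATENT)                 # outputs [mu, logvar]
decoders = nn.ModuleList(nn.Linear(LATENT, D) for _ in range(K))

x = torch.randn(16, D)
mu, logvar = enc(x).chunk(2, dim=-1)
z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
rec = torch.stack([F.mse_loss(dec(z), x) for dec in decoders]).mean()
kld = -0.5 * torch.mean(1 + logvar - mu ** 2 - logvar.exp())
loss = rec + kld                               # train all decoders jointly
```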
arXiv Detail & Related papers (2022-08-23T03:56:30Z)
- A new Sparse Auto-encoder based Framework using Grey Wolf Optimizer for Data Classification Problem [0.0]
Grey wolf optimization (GWO) is applied to train sparse auto-encoders.
The model is validated on several popular gene-expression databases.
Results reveal that the model trained with GWO outperforms both conventional models and models trained with the most popular metaheuristic algorithms.
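For reference, a minimal grey wolf optimizer loop of the standard form, applied here to a toy quadratic fitness rather than the paper's sparse auto-encoder weights; population size and iteration count are arbitrary.

```python
import numpy as np

def gwo(fitness, dim, n_wolves=20, iters=100, lo=-1.0, hi=1.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, (n_wolves, dim))           # wolf positions
    for t in range(iters):
        scores = np.apply_along_axis(fitness, 1, X)
        leaders = X[np.argsort(scores)[:3]].copy()     # alpha, beta, delta
        a = 2 - 2 * t / iters                          # decays from 2 to 0
        moves = []
        for L in leaders:
            A = a * (2 * rng.random((n_wolves, dim)) - 1)
            C = 2 * rng.random((n_wolves, dim))
            moves.append(L - A * np.abs(C * L - X))    # pull toward leader
        X = np.clip(np.mean(moves, axis=0), lo, hi)    # average of the pulls
    scores = np.apply_along_axis(fitness, 1, X)
    return X[np.argmin(scores)]

best = gwo(lambda w: float(np.sum(w ** 2)), dim=10)    # toy quadratic fitness
```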
arXiv Detail & Related papers (2022-01-29T04:28:30Z)
- Simple and Effective VAE Training with Calibrated Decoders [123.08908889310258]
Variational autoencoders (VAEs) provide an effective and simple method for modeling complex distributions.
We study the impact of calibrated decoders, which learn the uncertainty of the decoding distribution.
We propose a simple but novel modification to the commonly used Gaussian decoder, which computes the prediction variance analytically.
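A minimal sketch of an analytically calibrated Gaussian decoder: with a shared scalar variance, the maximum-likelihood value is the batch MSE, so it can be computed in closed form instead of being learned. The detach and clamp are implementation assumptions.

```python
import torch

def calibrated_gaussian_nll(recon, x):
    mse = torch.mean((recon - x) ** 2)
    sigma2 = mse.detach().clamp(min=1e-6)   # closed-form ML variance (batch MSE)
    # Gaussian NLL per dimension with a shared scalar variance sigma2
    return 0.5 * (mse / sigma2 + torch.log(sigma2))

x = torch.randn(16, 64)
recon = torch.randn(16, 64, requires_grad=True)
loss = calibrated_gaussian_nll(recon, x)
loss.backward()                             # gradients auto-scaled by 1/sigma2
```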
arXiv Detail & Related papers (2020-06-23T17:57:47Z)
- Encoder blind combinatorial compressed sensing [5.177947445379688]
We consider the problem of designing a decoder to recover a set of sparse codes from their linear measurements alone.
The contribution of this paper is a computationally efficient decoding algorithm, Decoder-Expander Based Factorisation.
arXiv Detail & Related papers (2020-04-10T16:26:11Z)
- Deterministic Decoding for Discrete Data in Variational Autoencoders [5.254093731341154]
We study a VAE model with a deterministic decoder (DD-VAE) for sequential data that selects the highest-scoring tokens instead of sampling.
We demonstrate the performance of DD-VAE on multiple datasets, including molecular generation and optimization problems.
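A minimal sketch of the deterministic-decoding idea: at each step the decoder keeps the single highest-scoring token instead of sampling from the token distribution. The GRU decoder, vocabulary size, and start token are illustrative assumptions.

```python
import torch
import torch.nn as nn

VOCAB, HID, STEPS = 100, 64, 20
emb = nn.Embedding(VOCAB, HID)
rnn = nn.GRUCell(HID, HID)
head = nn.Linear(HID, VOCAB)

z = torch.randn(1, HID)                 # latent code acts as the initial state
tok = torch.zeros(1, dtype=torch.long)  # assumed start-token id 0
h, out = z, []
for _ in range(STEPS):
    h = rnn(emb(tok), h)
    tok = head(h).argmax(dim=-1)        # deterministic: keep the best token
    out.append(tok.item())
print(out)
```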
arXiv Detail & Related papers (2020-03-04T16:36:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.