Machine Learning Methods for Spectral Efficiency Prediction in Massive
MIMO Systems
- URL: http://arxiv.org/abs/2112.14423v1
- Date: Wed, 29 Dec 2021 07:03:10 GMT
- Title: Machine Learning Methods for Spectral Efficiency Prediction in Massive
MIMO Systems
- Authors: Evgeny Bobrov (1, 3), Sergey Troshin (2), Nadezhda Chirkova (2),
Ekaterina Lobacheva (2), Sviatoslav Panchenko (3, 5), Dmitry Vetrov (2, 4),
Dmitry Kropotov (1, 2) ((1) Lomonosov MSU, Russia, (2) HSE University,
Russia, (3) MRC, Huawei Technologies, Russia, (4) AIRI, Russia, (5) MIPT,
Russia)
- Abstract summary: We study several machine learning approaches to solve the problem of estimating the spectral efficiency (SE) value for a certain precoding scheme, preferably in the shortest possible time.
The best results in terms of mean absolute percentage error (MAPE) are obtained with gradient boosting over sorted features, while linear models demonstrate worse prediction quality.
We investigate the practical applicability of the proposed algorithms in a wide range of scenarios generated by the Quadriga simulator.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Channel decoding, channel detection, channel assessment, and resource
management for wireless multiple-input multiple-output (MIMO) systems are all
examples of problems where machine learning (ML) can be successfully applied.
In this paper, we study several ML approaches to solve the problem of
estimating the spectral efficiency (SE) value for a certain precoding scheme,
preferably in the shortest possible time. The best results in terms of mean
absolute percentage error (MAPE) are obtained with gradient boosting over sorted
features, while linear models demonstrate worse prediction quality. Neural
networks perform similarly to gradient boosting, but they are more resource-
and time-consuming because of hyperparameter tuning and frequent retraining. We
investigate the practical applicability of the proposed algorithms in a wide
range of scenarios generated by the Quadriga simulator. In almost all
scenarios, the MAPE achieved using gradient boosting and neural networks is
less than 10%.
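The evaluation pipeline described above can be sketched as follows: fit a gradient-boosting regressor on sorted channel-derived features and score it with MAPE, defined as 100 · mean(|y − ŷ| / |y|). This is a minimal illustration, not the paper's implementation: the synthetic features and the toy SE-like target below are assumptions standing in for the Quadriga-generated data.

```python
# Hedged sketch: gradient boosting over sorted features, scored with MAPE.
# The data here is synthetic; the real features/targets come from Quadriga scenarios.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in features: per-sample channel magnitudes, sorted as in
# the "sorted features" setup (sorting makes the feature order canonical).
n_samples, n_features = 2000, 16
X = np.sort(rng.rayleigh(scale=1.0, size=(n_samples, n_features)), axis=1)

# Toy target loosely mimicking a spectral-efficiency-like quantity (bits/s/Hz).
y = np.log2(1.0 + X.sum(axis=1)) + 0.05 * rng.standard_normal(n_samples)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = GradientBoostingRegressor(n_estimators=200, max_depth=3, random_state=0)
model.fit(X_tr, y_tr)

# MAPE = 100 * mean(|y - y_hat| / |y|); y is strictly positive here.
pred = model.predict(X_te)
mape = 100.0 * np.mean(np.abs((y_te - pred) / y_te))
print(f"MAPE: {mape:.2f}%")
```

Gradient boosting needs no feature scaling and retrains quickly, which is what makes it attractive over neural networks when the channel statistics drift and frequent retraining is required.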
Related papers
- Advancing Machine Learning in Industry 4.0: Benchmark Framework for Rare-event Prediction in Chemical Processes [0.0]
We introduce a novel and comprehensive benchmark framework for rare-event prediction, comparing ML algorithms of varying complexity.
We identify optimal ML strategies for predicting abnormal rare events, enabling operators to obtain safer and more reliable plant operations.
arXiv Detail & Related papers (2024-08-31T15:41:10Z) - Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth
Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss with the increase in the number of learning epochs.
We show that the threshold on the number of training samples increases with the increase in the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- Scaling Forward Gradient With Local Losses [117.22685584919756]
Forward learning is a biologically plausible alternative to backprop for learning deep neural networks.
We show that it is possible to substantially reduce the variance of the forward gradient by applying perturbations to activations rather than weights.
Our approach matches backprop on MNIST and CIFAR-10 and significantly outperforms previously proposed backprop-free algorithms on ImageNet.
arXiv Detail & Related papers (2022-10-07T03:52:27Z)
- Mixed Precision Low-bit Quantization of Neural Network Language Models for Speech Recognition [67.95996816744251]
State-of-the-art language models (LMs) represented by long-short term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming increasingly complex and expensive for practical applications.
Current quantization methods are based on uniform precision and fail to account for the varying performance sensitivity at different parts of LMs to quantization errors.
Novel mixed precision neural network LM quantization methods are proposed in this paper.
arXiv Detail & Related papers (2021-11-29T12:24:02Z)
- Machine Learning-enhanced Receive Processing for MU-MIMO OFDM Systems [15.423422040627331]
Machine learning can be used to improve multi-user multiple-input multiple-output (MU-MIMO) receive processing.
We propose a new strategy which preserves the benefits of a conventional receiver, but enhances specific parts with ML components.
arXiv Detail & Related papers (2021-06-30T14:02:27Z)
- Behavioral Model Inference of Black-box Software using Deep Neural Networks [1.6593369275241105]
Many software engineering tasks, such as testing and anomaly detection, can benefit from the ability to infer a behavioral model of the software.
Most existing inference approaches assume access to code to collect execution sequences.
We show how this approach can be used to accurately detect state changes, and how the inferred models can be successfully applied to transfer-learning scenarios.
arXiv Detail & Related papers (2021-01-13T09:23:37Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- Machine Learning for MU-MIMO Receive Processing in OFDM Systems [14.118477167150143]
We propose an ML-enhanced MU-MIMO receiver that builds on top of a conventional linear minimum mean squared error (LMMSE) architecture.
CNNs are used to compute an approximation of the second-order statistics of the channel estimation error.
A CNN-based demapper jointly processes a large number of frequency-division multiplexing symbols and subcarriers.
arXiv Detail & Related papers (2020-12-15T09:55:37Z)
- Solving Sparse Linear Inverse Problems in Communication Systems: A Deep Learning Approach With Adaptive Depth [51.40441097625201]
We propose an end-to-end trainable deep learning architecture for sparse signal recovery problems.
The proposed method learns how many layers to execute to emit an output, and the network depth is dynamically adjusted for each task in the inference phase.
arXiv Detail & Related papers (2020-10-29T06:32:53Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization at large scale with a deep neural network as the predictive model.
Our method requires far fewer communication rounds than baseline approaches while retaining its theoretical convergence guarantees.
Experiments on several datasets demonstrate the effectiveness of the proposed method and corroborate our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.