Self-Supervised Learning of Linear Precoders under Non-Linear PA
Distortion for Energy-Efficient Massive MIMO Systems
- URL: http://arxiv.org/abs/2210.07037v1
- Date: Thu, 13 Oct 2022 13:48:50 GMT
- Title: Self-Supervised Learning of Linear Precoders under Non-Linear PA
Distortion for Energy-Efficient Massive MIMO Systems
- Authors: Thomas Feys, Xavier Mestre, François Rottenberg
- Abstract summary: A massive multiple input multiple output (MIMO) system is typically designed under the assumption of linear power amplifiers (PAs).
However, PAs are typically most energy-efficient when operating close to their saturation point, where they cause non-linear distortion.
In this work, we propose the use of a neural network (NN) to learn the mapping between the channel matrix and the precoding matrix, which maximizes the sum rate in the presence of this non-linear distortion.
- Score: 9.324642081509756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Massive multiple input multiple output (MIMO) systems are typically designed
under the assumption of linear power amplifiers (PAs). However, PAs are
typically most energy-efficient when operating close to their saturation point,
where they cause non-linear distortion. Moreover, when using conventional
precoders, this distortion coherently combines at the user locations, limiting
performance. As such, when designing an energy-efficient massive MIMO system,
this distortion has to be managed. In this work, we propose the use of a neural
network (NN) to learn the mapping between the channel matrix and the precoding
matrix, which maximizes the sum rate in the presence of this non-linear
distortion. This is done for a third-order polynomial PA model for both the
single and multi-user case. By learning this mapping a significant increase in
energy efficiency is achieved as compared to conventional precoders and even as
compared to perfect digital pre-distortion (DPD), in the saturation regime.
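As an illustration, the memoryless third-order polynomial PA model can be sketched applied to a precoded downlink signal. The coefficients, array sizes, and the matched-filter precoder below are illustrative stand-ins, not the paper's learned NN precoder:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coefficients for a memoryless third-order polynomial PA model:
# y = b1*x + b3*x*|x|^2, where b3 compresses the gain near saturation.
# The exact values used in the paper may differ.
b1, b3 = 1.0, -0.1 - 0.05j

def pa(x):
    """Apply the third-order polynomial PA model element-wise."""
    return b1 * x + b3 * x * np.abs(x) ** 2

# Toy downlink: M antennas, K single-antenna users, Rayleigh channel H.
M, K = 8, 2
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)
s = (rng.standard_normal(K) + 1j * rng.standard_normal(K)) / np.sqrt(2)

# Matched-filter precoder as a stand-in for the learned NN precoder.
W = H.conj().T / np.linalg.norm(H)
x = W @ s                            # per-antenna transmit signal
y = H @ pa(x)                        # received signal after non-linear PAs
distortion = H @ (pa(x) - b1 * x)    # distortion term combining at the users
print(np.abs(distortion))
```

With a conventional (channel-matched) precoder like this one, the distortion term inherits the channel's spatial structure and combines coherently at the user locations, which is exactly the effect the learned precoder is trained to avoid.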
Related papers
- Bellman Diffusion: Generative Modeling as Learning a Linear Operator in the Distribution Space [72.52365911990935]
We introduce Bellman Diffusion, a novel DGM framework that maintains linearity in MDPs through gradient and scalar field modeling.
Our results show that Bellman Diffusion achieves accurate field estimations and is a capable image generator, converging 1.5x faster than the traditional histogram-based baseline in distributional RL tasks.
arXiv Detail & Related papers (2024-10-02T17:53:23Z)
- Intrinsic Voltage Offsets in Memcapacitive Bio-Membranes Enable High-Performance Physical Reservoir Computing [0.0]
Reservoir computing is a brain-inspired machine learning framework for processing temporal data by mapping inputs into high-dimensional spaces.
Here, we introduce a novel memcapacitor-based PRC that exploits internal voltage offsets to enable both monotonic and non-monotonic input-state correlations.
Our approach and unprecedented performance are a major milestone towards high-performance full in-materia PRCs.
arXiv Detail & Related papers (2024-04-27T05:47:38Z)
- Function Approximation for Reinforcement Learning Controller for Energy from Spread Waves [69.9104427437916]
Multi-generator Wave Energy Converters (WECs) must handle multiple simultaneous waves coming from different directions, called spread waves.
These complex devices need controllers with multiple objectives of energy capture efficiency, reduction of structural stress to limit maintenance, and proactive protection against high waves.
In this paper, we explore different function approximations for the policy and critic networks in modeling the sequential nature of the system dynamics.
arXiv Detail & Related papers (2024-04-17T02:04:10Z)
- Toward Energy-Efficient Massive MIMO: Graph Neural Network Precoding for Mitigating Non-Linear PA Distortion [1.7495213911983414]
A graph neural network (GNN) learns a mapping between channel and precoding matrices, which maximizes the sum rate affected by non-linear distortion.
In the distortion-limited regime, this GNN-based precoder outperforms zero forcing (ZF), ZF plus digital pre-distortion (DPD) and distortion-aware beamforming (DAB) precoders.
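For reference, the zero-forcing (ZF) baseline that the GNN precoder is compared against can be sketched as follows. This is textbook ZF with illustrative sizes, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
M, K = 8, 2   # transmit antennas, users (illustrative sizes)
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

# Zero-forcing precoder: right pseudo-inverse of H, so that H @ W is a scaled
# identity and multi-user interference vanishes (the PA distortion does not).
W = H.conj().T @ np.linalg.inv(H @ H.conj().T)
W /= np.linalg.norm(W)          # normalize total transmit power to 1

E = H @ W                       # effective channel seen by the users
print(np.round(np.abs(E), 4))   # diagonal: equal gains; off-diagonal: ~0
```

ZF cancels linear interference only; the non-linear distortion introduced by the PAs is outside its model, which is why distortion-aware designs outperform it in the saturation regime.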
arXiv Detail & Related papers (2023-12-05T13:25:35Z)
- One-Dimensional Deep Image Prior for Curve Fitting of S-Parameters from Electromagnetic Solvers [57.441926088870325]
Deep Image Prior (DIP) is a technique that optimizes the weights of a randomly-initialized convolutional neural network to fit a signal from noisy or under-determined measurements.
Relative to publicly available implementations of Vector Fitting (VF), our method shows superior performance on nearly all test examples.
arXiv Detail & Related papers (2023-06-06T20:28:37Z)
- DBA: Efficient Transformer with Dynamic Bilinear Low-Rank Attention [53.02648818164273]
We present an efficient yet effective attention mechanism, namely the Dynamic Bilinear Low-Rank Attention (DBA).
DBA compresses the sequence length by input-sensitive dynamic projection matrices and achieves linear time and space complexity.
Experiments over tasks with diverse sequence length conditions show that DBA achieves state-of-the-art performance.
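A simplified sketch of length-compressing low-rank attention follows. It uses static random projections in the style of Linformer as stand-ins for DBA's input-sensitive dynamic projection matrices, to show where the linear complexity comes from:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, k = 64, 16, 8   # sequence length, model dim, projection rank (illustrative)

X = rng.standard_normal((n, d))
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))
E, F = rng.standard_normal((k, n)), rng.standard_normal((k, n))  # length projections

def softmax(a, axis=-1):
    a = a - a.max(axis=axis, keepdims=True)
    e = np.exp(a)
    return e / e.sum(axis=axis, keepdims=True)

Q, K_, V = X @ Wq, X @ Wk, X @ Wv
# Project keys and values from length n down to rank k before attention, so the
# score matrix is n x k instead of n x n: linear rather than quadratic cost.
scores = softmax(Q @ (E @ K_).T / np.sqrt(d))   # shape (n, k)
out = scores @ (F @ V)                          # shape (n, d)
print(out.shape)
```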
arXiv Detail & Related papers (2022-11-24T03:06:36Z)
- Efficient Autoprecoder-based deep learning for massive MU-MIMO Downlink under PA Non-Linearities [0.0]
We present AP-mMIMO, a new method that jointly eliminates multiuser interference and compensates for the severe non-linear (NL) PA distortions.
Unlike previous works, AP-mMIMO has a low computational complexity, making it suitable for a global energy-efficient system.
arXiv Detail & Related papers (2022-02-03T08:53:52Z)
- Adaptive Fourier Neural Operators: Efficient Token Mixers for Transformers [55.90468016961356]
We propose an efficient token mixer that learns to mix in the Fourier domain.
AFNO is based on a principled foundation of operator learning.
It can handle a sequence size of 65k and outperforms other efficient self-attention mechanisms.
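A minimal sketch of Fourier-domain token mixing follows. It uses a diagonal per-frequency multiply (a simplification of AFNO's block-wise frequency-domain operator), with random weights standing in for learned ones:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 128, 16    # tokens, channels (illustrative sizes)
X = rng.standard_normal((n, d))

# Per-frequency complex weights; in AFNO these would be learned.
W = rng.standard_normal((n // 2 + 1, d)) + 1j * rng.standard_normal((n // 2 + 1, d))

# Mix tokens in the Fourier domain: FFT over the token axis, per-frequency
# channel-wise multiply, inverse FFT. Cost is O(n log n) in the sequence
# length, versus O(n^2) for full self-attention.
Xf = np.fft.rfft(X, axis=0)
mixed = np.fft.irfft(Xf * W, n=n, axis=0)
print(mixed.shape)
```

The O(n log n) scaling of the FFT is what makes the 65k-token sequence length mentioned above tractable.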
arXiv Detail & Related papers (2021-11-24T05:44:31Z)
- Learning OFDM Waveforms with PAPR and ACLR Constraints [15.423422040627331]
We propose a learning-based method to design OFDM-based waveforms that satisfy selected constraints while maximizing an achievable information rate.
We show that the end-to-end system is able to satisfy target PAPR and ACLR constraints and allows significant throughput gains.
arXiv Detail & Related papers (2021-10-21T08:58:59Z)
- End-to-End Learning of OFDM Waveforms with PAPR and ACLR Constraints [15.423422040627331]
We propose to use a neural network (NN) at the transmitter to learn a high-dimensional modulation scheme allowing to control the PAPR and adjacent channel leakage ratio (ACLR).
The two NNs operate on top of OFDM, and are jointly optimized in an end-to-end manner using a training algorithm that enforces constraints on the PAPR and ACLR.
arXiv Detail & Related papers (2021-06-30T13:09:30Z)
- Massive MIMO As an Extreme Learning Machine [83.12538841141892]
A massive multiple-input multiple-output (MIMO) system with low-resolution analog-to-digital converters (ADCs) forms a natural extreme learning machine (ELM).
By adding random biases to the received signals and optimizing the ELM output weights, the system can effectively tackle hardware impairments.
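A generic extreme learning machine can be sketched as follows. This is a toy regression task, not the paper's MIMO receiver; the point it illustrates is that the hidden layer (weights and biases) stays random and fixed, and only the output weights are solved in closed form:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy ELM on a generic regression task (sizes, tanh activation, and the task
# itself are illustrative, not taken from the paper).
n, d_in, d_hid = 200, 4, 64
X = rng.standard_normal((n, d_in))
T = np.sin(X).sum(axis=1, keepdims=True)   # toy targets

W_in = rng.standard_normal((d_in, d_hid))  # fixed random input weights
b = rng.standard_normal(d_hid)             # random biases (cf. the paper's random
                                           # biases added to the received signals)
A = np.tanh(X @ W_in + b)                  # hidden-layer activations

# Only the output weights are trained, via regularized least squares.
lam = 1e-3
W_out = np.linalg.solve(A.T @ A + lam * np.eye(d_hid), A.T @ T)
print(np.mean((A @ W_out - T) ** 2))       # training MSE
```

Because training reduces to one linear solve, an ELM receiver avoids iterative backpropagation entirely, which is what makes it attractive for compensating hardware impairments online.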
arXiv Detail & Related papers (2020-07-01T04:15:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.