Toward Energy-Efficient Massive MIMO: Graph Neural Network Precoding for
Mitigating Non-Linear PA Distortion
- URL: http://arxiv.org/abs/2312.04591v1
- Date: Tue, 5 Dec 2023 13:25:35 GMT
- Title: Toward Energy-Efficient Massive MIMO: Graph Neural Network Precoding for
Mitigating Non-Linear PA Distortion
- Authors: Thomas Feys, Liesbet Van der Perre, François Rottenberg
- Abstract summary: A graph neural network (GNN) learns a mapping between channel and precoding matrices, which maximizes the sum rate affected by non-linear distortion.
In the distortion-limited regime, this GNN-based precoder outperforms zero forcing (ZF), ZF plus digital pre-distortion (DPD) and distortion-aware beamforming (DAB) precoders.
- Score: 1.7495213911983414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Massive MIMO systems are typically designed assuming linear power amplifiers
(PAs). However, PAs are most energy efficient close to saturation, where
non-linear distortion arises. For conventional precoders, this distortion can
coherently combine at user locations, limiting performance. We propose a graph
neural network (GNN) to learn a mapping between channel and precoding matrices,
which maximizes the sum rate affected by non-linear distortion, using a
high-order polynomial PA model. In the distortion-limited regime, this
GNN-based precoder outperforms zero forcing (ZF), ZF plus digital
pre-distortion (DPD) and the distortion-aware beamforming (DAB) precoder from
the state-of-the-art. At an input back-off of -3 dB, the proposed precoder
increases the sum rate over ZF by 8.60 and 8.84 bits/channel use for two and
four users, respectively. Radiation patterns show that these gains are achieved
by transmitting the non-linear distortion in non-user directions. In the
four-user case, for a fixed sum rate, the total consumed power (PA and
processing) of the GNN precoder is 3.24 and 1.44 times lower than that of ZF
and ZF plus DPD, respectively. A complexity analysis shows a
six-orders-of-magnitude reduction compared to DAB precoding. This opens
perspectives to operate PAs closer to saturation, which drastically increases
their energy efficiency.
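
To make the PA model concrete, below is a minimal sketch of a memoryless odd-order polynomial PA of the kind the abstract refers to, applied per antenna. The coefficients `betas`, the array size, and the back-off scaling are illustrative assumptions, not values from the paper.

```python
import numpy as np

def polynomial_pa(x, betas):
    """Apply y = sum_p betas[p] * x * |x|**(2p) elementwise (p = 0 is linear)."""
    y = np.zeros_like(x)
    for p, beta in enumerate(betas):
        y = y + beta * x * np.abs(x) ** (2 * p)
    return y

rng = np.random.default_rng(0)
M, N = 64, 1000                              # antennas, channel uses
ibo_db = -3.0                                # input back-off from the abstract
x = rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))
x *= np.sqrt(10 ** (ibo_db / 10) / np.mean(np.abs(x) ** 2))  # set average power

betas = [1.0, -0.1, 0.005]                   # placeholder 1st/3rd/5th-order terms
y = polynomial_pa(x, betas)
d = y - betas[0] * x                         # non-linear distortion component
print("distortion-to-signal power ratio:",
      np.mean(np.abs(d) ** 2) / np.mean(np.abs(betas[0] * x) ** 2))
```

With a conventional precoder such as ZF, the distortion component computed above can combine coherently at the user locations; the GNN precoder is trained to steer it toward non-user directions instead.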
Related papers
- MP-DPD: Low-Complexity Mixed-Precision Neural Networks for Energy-Efficient Digital Predistortion of Wideband Power Amplifiers [8.58564278168083]
Digital Pre-Distortion (DPD) enhances signal quality in wideband RF power amplifiers (PAs).
This paper introduces open-source mixed-precision (MP) neural networks that employ quantized low-precision fixed-point parameters for energy-efficient DPD.
arXiv Detail & Related papers (2024-04-18T21:04:39Z)
- GES: Generalized Exponential Splatting for Efficient Radiance Field Rendering [112.16239342037714]
GES (Generalized Exponential Splatting) is a novel representation that employs Generalized Exponential Function (GEF) to model 3D scenes.
With the aid of a frequency-modulated loss, GES achieves competitive performance in novel-view synthesis benchmarks.
arXiv Detail & Related papers (2024-02-15T17:32:50Z)
- Josephson parametric amplifier with Chebyshev gain profile and high saturation [0.0]
We demonstrate a Josephson parametric amplifier design with a band-pass impedance matching network based on a third-order Chebyshev prototype.
We measure eight amplifiers operating at 4.6 GHz that exhibit gains of 20 dB with less than 1 dB gain ripple and up to 500 MHz bandwidth.
We characterize the system readout efficiency and its signal-to-noise ratio near saturation using a Sycamore processor.
arXiv Detail & Related papers (2023-05-28T22:04:08Z)
- Self-Supervised Learning of Linear Precoders under Non-Linear PA Distortion for Energy-Efficient Massive MIMO Systems [9.324642081509756]
A massive multiple input multiple output (MIMO) system is typically designed under the assumption of linear power amplifiers (PAs).
However, PAs are typically most energy-efficient when operating close to their saturation point, where they cause non-linear distortion.
In this work, we propose the use of a neural network (NN) to learn the mapping between the channel matrix and the precoding matrix, which maximizes the sum rate in the presence of this non-linear distortion (a minimal training-loop sketch follows this entry).
arXiv Detail & Related papers (2022-10-13T13:48:50Z)
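
As a rough illustration of the self-supervised idea in the entry above: a small network maps the channel to a precoder and is trained by gradient ascent on a sum-rate surrogate, with no labeled precoders. The network shape, the linear-PA sum-rate expression, and the power normalization are assumptions for the sketch; the paper's loss additionally models non-linear PA distortion.

```python
import torch

def sum_rate(H, W, noise_var=1.0):
    """Sum rate of K users for channel H (K x M) and precoder W (M x K)."""
    G = H @ W                                  # effective user-to-stream gains
    sig = G.diagonal().abs() ** 2              # intended-signal power
    interf = (G.abs() ** 2).sum(dim=1) - sig   # inter-user interference
    return torch.log2(1 + sig / (interf + noise_var)).sum()

K, M = 2, 16                                   # users, antennas
net = torch.nn.Sequential(
    torch.nn.Linear(2 * K * M, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 2 * M * K))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(200):                           # self-supervised: no labels used
    H = (torch.randn(K, M) + 1j * torch.randn(K, M)) / 2 ** 0.5
    h = torch.view_as_real(H).reshape(-1)      # stack Re/Im as input features
    W = torch.view_as_complex(net(h).reshape(M, K, 2).contiguous())
    W = W / W.norm()                           # unit total transmit power
    loss = -sum_rate(H, W)                     # gradient ascent on the sum rate
    opt.zero_grad()
    loss.backward()
    opt.step()
```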
- Numerical analysis of a three-wave-mixing Josephson traveling-wave parametric amplifier with engineered dispersion loadings [62.997667081978825]
The recently proposed Josephson traveling-wave parametric amplifier (JTWPA) has great potential in achieving a gain of 20 dB and a flat bandwidth of at least 4 GHz.
We model the advanced JTWPA circuit with periodic modulation of the circuit parameters.
Engineered dispersion loadings allow achieving a sufficiently wide 3 dB bandwidth from 3 GHz to 9 GHz, combined with reasonably small ripple.
arXiv Detail & Related papers (2022-09-22T14:46:04Z)
- Readout of a quantum processor with high dynamic range Josephson parametric amplifiers [132.67289832617647]
The device is matched to the 50 $\Omega$ environment with a bandwidth of 250-300 MHz and input saturation powers up to -95 dBm at 20 dB gain.
A 54-qubit Sycamore processor was used to benchmark these devices.
The design has no adverse effect on system noise, readout fidelity, or qubit dephasing.
arXiv Detail & Related papers (2022-09-16T07:34:05Z)
- Inception Transformer [151.939077819196]
Inception Transformer, or iFormer, learns comprehensive features with both high- and low-frequency information in visual data.
We benchmark the iFormer on a series of vision tasks, and showcase that it achieves impressive performance on image classification, COCO detection and ADE20K segmentation.
arXiv Detail & Related papers (2022-05-25T17:59:54Z)
- Efficient Autoprecoder-based deep learning for massive MU-MIMO Downlink under PA Non-Linearities [0.0]
We present AP-mMIMO, a new method that jointly eliminates multiuser interference and compensates for severe nonlinear (NL) PA distortion.
Unlike previous works, AP-mMIMO has a low computational complexity, making it suitable for a global energy-efficient system.
arXiv Detail & Related papers (2022-02-03T08:53:52Z)
- Adaptive Fourier Neural Operators: Efficient Token Mixers for Transformers [55.90468016961356]
We propose an efficient token mixer that learns to mix in the Fourier domain.
AFNO is based on a principled foundation of operator learning.
It can handle a sequence size of 65k and outperforms other efficient self-attention mechanisms (a minimal sketch of the Fourier-mixing idea follows this entry).
arXiv Detail & Related papers (2021-11-24T05:44:31Z)
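
A stripped-down illustration of Fourier-domain token mixing in the spirit of the AFNO entry above: FFT along the token axis, a learned complex pointwise filter, then inverse FFT. The per-channel filter shared across frequency modes is a simplifying assumption; the actual AFNO uses block-diagonal channel mixing with an MLP and soft-thresholding on the modes.

```python
import torch

class FourierTokenMixer(torch.nn.Module):
    """FFT over tokens -> learned complex pointwise filter -> inverse FFT."""
    def __init__(self, dim):
        super().__init__()
        # One complex scale per channel, shared across all frequency modes.
        self.weight = torch.nn.Parameter(0.02 * torch.randn(dim, dtype=torch.cfloat))

    def forward(self, x):                      # x: (batch, tokens, dim), real
        X = torch.fft.rfft(x, dim=1)           # mix along the token axis
        X = X * self.weight                    # filter each channel in frequency
        return torch.fft.irfft(X, n=x.shape[1], dim=1)

tokens = torch.randn(2, 16, 8)                 # toy batch
print(FourierTokenMixer(8)(tokens).shape)      # -> torch.Size([2, 16, 8])
```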
- Quantum Gates Robust to Secular Amplitude Drifts [0.0]
We show that composite pulses that suppress all power-law drifts with $p \leq n$ are also high-pass filters of filter order $n+1$ (arXiv:1410.1624).
We find that there is a range of noise frequencies for which the $\text{PLA}(n)$ sequences provide more error suppression than the traditional sequences.
arXiv Detail & Related papers (2021-08-10T14:44:05Z)
- Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs (a minimal surrogate-gradient sketch follows this list).
Experiments on several benchmark datasets and neural architectures show that the binary network learned with our method achieves state-of-the-art accuracy.
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
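
The last entry's idea can be sketched as a surrogate gradient: keep sign(x) exact in the forward pass and back-propagate through a truncated Fourier (square-wave) series, sign(x) ≈ (4/π) Σ_{k=0}^{K-1} sin((2k+1)x)/(2k+1). The truncation K = 4 is an arbitrary illustrative choice, and this is not claimed to be the paper's exact estimator.

```python
import torch

class FourierSign(torch.autograd.Function):
    """Exact sign forward; truncated square-wave Fourier series gradient back."""
    @staticmethod
    def forward(ctx, x, K=4):
        ctx.save_for_backward(x)
        ctx.K = K
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # d/dx of (4/pi) * sum_{k<K} sin((2k+1)x)/(2k+1) = (4/pi) * sum cos((2k+1)x)
        grad = sum(torch.cos((2 * k + 1) * x) for k in range(ctx.K))
        return grad_out * (4.0 / torch.pi) * grad, None

x = torch.randn(8, requires_grad=True)
FourierSign.apply(x).sum().backward()          # uses the smooth surrogate
print(x.grad)
```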
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.