A Model-based GNN for Learning Precoding
- URL: http://arxiv.org/abs/2212.00860v1
- Date: Thu, 1 Dec 2022 20:40:38 GMT
- Title: A Model-based GNN for Learning Precoding
- Authors: Jia Guo and Chenyang Yang
- Abstract summary: Learning precoding policies with neural networks enables low complexity online implementation, robustness to channel impairments, and joint optimization with channel acquisition.
Existing neural networks suffer from high training complexity and poor generalization ability when they are used to learn to optimize precoding for mitigating multi-user interference.
We propose a graph neural network (GNN) to learn precoding policies by harnessing both the mathematical model and the property of the policies.
- Score: 37.060397377445504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning precoding policies with neural networks enables low complexity
online implementation, robustness to channel impairments, and joint
optimization with channel acquisition. However, existing neural networks suffer
from high training complexity and poor generalization ability when they are
used to learn to optimize precoding for mitigating multi-user interference.
This impedes their use in practical systems where the number of users is
time-varying. In this paper, we propose a graph neural network (GNN) to learn
precoding policies by harnessing both the mathematical model and the property
of the policies. We first show that a vanilla GNN cannot learn the
pseudo-inverse of the channel matrix well when the numbers of antennas and
users are large, and does not generalize to unseen numbers of users. Then, we
design a GNN by resorting to the Taylor expansion of the matrix pseudo-inverse,
which captures the importance of the neighboring edges being aggregated, a
property that is crucial for learning precoding policies efficiently.
Simulation results show that the proposed GNN can learn spectrally efficient
and energy-efficient precoding policies in single- and multi-cell multi-user
multi-antenna systems with low training complexity, and generalizes well to
unseen numbers of users.
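The Taylor-expansion idea the paper builds on can be sketched numerically. Below is a minimal NumPy illustration of the truncated Neumann (Taylor) series $A^{-1} = \alpha \sum_{k \ge 0} (I - \alpha A)^k$ applied to the Gram matrix $A = HH^H$ to approximate the right pseudo-inverse $H^+$; the function name, step size, and truncation length are illustrative choices, not the authors' code.

```python
import numpy as np

def neumann_pinv(H, num_terms=200, alpha=None):
    """Truncated Neumann (Taylor) series approximation of the right
    pseudo-inverse H^+ = H^H (H H^H)^{-1} of a fat channel matrix H."""
    K = H.shape[0]                          # number of users (rows)
    A = H @ H.conj().T                      # K x K Gram matrix
    if alpha is None:
        alpha = 1.0 / np.linalg.norm(A, 2)  # keeps the series convergent
    I = np.eye(K, dtype=A.dtype)
    # A^{-1} = alpha * sum_{k>=0} (I - alpha*A)^k, truncated here
    term, acc = I.copy(), I.copy()
    for _ in range(num_terms - 1):
        term = term @ (I - alpha * A)
        acc = acc + term
    return H.conj().T @ (alpha * acc)       # zero-forcing style precoder

# sanity check against the exact pseudo-inverse (4 users, 16 antennas)
H = (np.random.randn(4, 16) + 1j * np.random.randn(4, 16)) / np.sqrt(2)
print(np.linalg.norm(neumann_pinv(H) - np.linalg.pinv(H)))
```

Longer truncations trade computation for accuracy; the series converges whenever the spectral radius of $I - \alpha A$ is below one, which the choice of $\alpha$ above guarantees.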
Related papers
- The Power of Linear Combinations: Learning with Random Convolutions [2.0305676256390934]
Modern CNNs can achieve high test accuracies without ever updating randomly (spatial) convolution filters.
These combinations of random filters can implicitly regularize the resulting operations.
Although we only observe relatively small gains from learning $3\times 3$ convolutions, the learning gains increase proportionally with kernel size.
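As a rough illustration of this finding, the sketch below freezes random spatial filters and learns only a pointwise (1x1) convolution that linearly combines them; layer names and sizes are hypothetical, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class RandomFilterConv(nn.Module):
    """Frozen random spatial filters mixed by a learned 1x1 convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3, num_random=16):
        super().__init__()
        self.random_conv = nn.Conv2d(in_ch, num_random, kernel_size,
                                     padding=kernel_size // 2, bias=False)
        self.random_conv.weight.requires_grad_(False)  # never updated
        self.mix = nn.Conv2d(num_random, out_ch, kernel_size=1)  # learned

    def forward(self, x):
        return self.mix(self.random_conv(x))
```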
arXiv Detail & Related papers (2023-01-26T19:17:10Z) - Neural networks trained with SGD learn distributions of increasing complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics.
Higher-order statistics are exploited only later during training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
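For context, here is a minimal usage sketch of a single leaky integrate-and-fire layer unrolled over discrete time steps, following snnTorch's standard public API; this is not the IPU-optimized release itself, and the input sizes are hypothetical.

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)      # leaky integrate-and-fire, beta = decay rate
mem = lif.init_leaky()         # initial membrane potential

spikes = []
for _ in range(25):            # unroll over 25 discrete time steps
    cur = torch.rand(1, 8)     # hypothetical input current
    spk, mem = lif(cur, mem)   # spike output and updated membrane state
    spikes.append(spk)
```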
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
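A hypothetical sketch of this setup, using PyTorch Geometric's GCNConv as a stand-in for the paper's GNN, maps per-component node features of a circuit graph to a voltage prediction per node; dimensions and names are illustrative.

```python
import torch
from torch_geometric.nn import GCNConv

class CircuitGNN(torch.nn.Module):
    """Circuit-as-graph model: two rounds of message passing followed
    by a per-node voltage head."""
    def __init__(self, in_dim=8, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.head = torch.nn.Linear(hidden, 1)

    def forward(self, x, edge_index):
        h = torch.relu(self.conv1(x, edge_index))
        h = torch.relu(self.conv2(h, edge_index))
        return self.head(h).squeeze(-1)   # predicted voltage per node
```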
arXiv Detail & Related papers (2022-03-29T21:18:47Z) - DNN Training Acceleration via Exploring GPGPU Friendly Sparsity [16.406482603838157]
We propose Approximate Random Dropout, which replaces the conventional random dropout of neurons and synapses with regular, online-generated row-based or tile-based dropout patterns.
We then develop an SGD-based search algorithm that produces the distribution of row-based or tile-based dropout patterns to compensate for the potential accuracy loss.
We also propose the sensitivity-aware dropout method to dynamically drop the input feature maps based on their sensitivity so as to achieve greater forward and backward training acceleration.
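A simplified sketch of the structured-dropout idea (row-based only, without the search algorithm or sensitivity analysis described above) follows; the function name is our own.

```python
import torch

def row_based_dropout(x, drop_prob=0.5):
    """Zero entire rows of an activation matrix, producing a regular
    sparsity pattern that GPU kernels can skip wholesale; a
    simplification of the paper's pattern-generation scheme."""
    keep = (torch.rand(x.shape[0], 1, device=x.device) >= drop_prob).float()
    return x * keep / (1.0 - drop_prob)   # inverted-dropout rescaling
```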
arXiv Detail & Related papers (2022-03-11T01:32:03Z) - Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
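One common ingredient of such binarization, weight sign quantization with a straight-through estimator, can be sketched generically; this is a textbook construction, not the paper's specific strategy.

```python
import torch

class BinaryLinear(torch.nn.Module):
    """Sign-binarized weights with a straight-through estimator: the
    forward pass uses sign(w), while gradients flow to the real-valued
    weights as if the quantization were the identity."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = torch.nn.Parameter(0.01 * torch.randn(out_dim, in_dim))

    def forward(self, x):
        w = self.weight
        w_bin = w + (torch.sign(w) - w).detach()  # straight-through trick
        return x @ w_bin.t()
```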
arXiv Detail & Related papers (2020-12-31T18:48:58Z) - Learning Power Control for Cellular Systems with Heterogeneous Graph Neural Network [37.060397377445504]
We show that the power control policy has a combination of different permutation invariance (PI) and permutation equivariance (PE) properties, which existing HetGNNs do not satisfy.
We design a parameter sharing scheme for HetGNN such that the learned relationship satisfies the desired properties.
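A generic parameter-sharing construction that yields permutation equivariance over users, in the spirit of (but not identical to) the paper's scheme, is sketched below.

```python
import torch

class PESharedLayer(torch.nn.Module):
    """One shared weight for a user's own feature and one for the
    aggregate of all other users: permuting the users permutes the
    outputs identically (permutation equivariance)."""
    def __init__(self, dim):
        super().__init__()
        self.U = torch.nn.Linear(dim, dim)   # shared "self" transform
        self.V = torch.nn.Linear(dim, dim)   # shared "others" transform

    def forward(self, x):                    # x: (num_users, dim)
        n = x.shape[0]
        others_mean = (x.sum(dim=0, keepdim=True) - x) / max(n - 1, 1)
        return torch.relu(self.U(x) + self.V(others_mean))
```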
arXiv Detail & Related papers (2020-11-06T02:41:38Z) - Constructing Deep Neural Networks with a Priori Knowledge of Wireless Tasks [37.060397377445504]
Two kinds of permutation invariant properties that widely exist in wireless tasks can be harnessed to reduce the number of model parameters.
We find a special architecture of DNNs whose input-output relationships satisfy these properties, called the permutation invariant DNN (PINN).
We take predictive resource allocation and interference coordination as examples to show how the PINNs can be employed for learning the optimal policy with unsupervised and supervised learning.
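A minimal permutation-invariant network in the DeepSets style, sketching the kind of architecture the PINN label refers to (illustrative only; the paper's designs differ in detail):

```python
import torch

class PermutationInvariantDNN(torch.nn.Module):
    """Shared per-element encoder followed by sum pooling, so any
    reordering of the inputs yields the same output."""
    def __init__(self, in_dim, hidden, out_dim):
        super().__init__()
        self.phi = torch.nn.Sequential(torch.nn.Linear(in_dim, hidden),
                                       torch.nn.ReLU())
        self.rho = torch.nn.Linear(hidden, out_dim)

    def forward(self, x):                        # x: (num_items, in_dim)
        return self.rho(self.phi(x).sum(dim=0))  # pooling removes order
```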
arXiv Detail & Related papers (2020-01-29T08:54:42Z) - Lossless Compression of Deep Neural Networks [17.753357839478575]
Deep neural networks have been successful in many predictive modeling tasks, such as image and language recognition.
It is challenging to deploy these networks under limited computational resources, such as in mobile devices.
We introduce an algorithm that removes units and layers of a neural network while not changing the output that is produced.
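The easiest lossless case can be sketched directly: a hidden ReLU unit with all-zero incoming weights and non-positive bias never activates, so removing it cannot change the output. The helper below is a strong simplification of the paper's method, which covers far more general cases.

```python
import numpy as np

def prune_never_active_units(W1, b1, W2):
    """Remove hidden ReLU units that provably never activate: with
    all-zero incoming weights and non-positive bias, a unit outputs 0
    for every input, so deleting it (and its outgoing weights in W2)
    leaves the network's function unchanged."""
    active = ~(np.all(W1 == 0, axis=1) & (b1 <= 0))   # units worth keeping
    return W1[active], b1[active], W2[:, active]
```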
arXiv Detail & Related papers (2020-01-01T15:04:43Z) - Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)