Learning Power Control for Cellular Systems with Heterogeneous Graph Neural Network
- URL: http://arxiv.org/abs/2011.03164v1
- Date: Fri, 6 Nov 2020 02:41:38 GMT
- Title: Learning Power Control for Cellular Systems with Heterogeneous Graph Neural Network
- Authors: Jia Guo and Chenyang Yang
- Abstract summary: We show that the power control policy has a combination of different PI and PE properties, and that existing HetGNNs do not satisfy these properties.
We design a parameter sharing scheme for HetGNN such that the learned relationship satisfies the desired properties.
- Score: 37.060397377445504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optimizing power control in multi-cell cellular networks with deep learning
enables such a non-convex problem to be solved in real time. When channels
are time-varying, the deep neural networks (DNNs) need to be re-trained
frequently, which calls for low training complexity. To reduce the number of
training samples and the size of DNN required to achieve good performance, a
promising approach is to embed the DNNs with a priori knowledge. Since cellular
networks can be modelled as a graph, it is natural to employ graph neural
networks (GNNs) for learning, which exhibit permutation invariance (PI) and
equivalence (PE) properties. Unlike the homogeneous GNNs that have been used
for wireless problems, whose outputs are invariant or equivalent to arbitrary
permutations of vertices, heterogeneous GNNs (HetGNNs), which are more
appropriate for modelling cellular networks, are only invariant or equivalent to
some permutations. If the PI or PE properties of the HetGNN do not match the
property of the task to be learned, the performance degrades dramatically. In
this paper, we show that the power control policy has a combination of
different PI and PE properties, and that existing HetGNNs do not satisfy these
properties. We then design a parameter sharing scheme for the HetGNN such that the
learned relationship satisfies the desired properties. Simulation results show
that the sample complexity and the size of the designed GNN for learning the
optimal power control policy in multi-user multi-cell networks are much lower
than those of existing DNNs, for the same sum-rate loss relative to the
numerically obtained solutions.
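To make the parameter-sharing idea concrete, the sketch below implements one message-passing layer on a heterogeneous graph with two vertex types, base stations (BSs) and users (UEs). It is a minimal illustration, not the parameter-sharing scheme designed in the paper: all vertices of the same type share weights and neighbours are aggregated with a permutation-invariant mean, so the output is equivariant to permutations within each vertex type while BSs and UEs use different parameters. The function and weight names (`hetgnn_layer`, `W_ue2bs`, ...) are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def hetgnn_layer(h_bs, h_ue, dim_out=16, seed=0):
    """Illustrative message-passing layer on a bipartite BS-UE graph.

    h_bs: (n_bs, d) features of base-station vertices
    h_ue: (n_ue, d) features of user vertices
    All vertices of the same type share the same weights, so the layer is
    equivariant to permutations of BSs and of UEs separately, while the two
    vertex types are processed by different parameters (the heterogeneous part).
    """
    rng = np.random.default_rng(seed)  # fixed weights for this demo
    d = h_bs.shape[1]
    # One weight matrix per (source type, destination type) pair, shared by
    # every vertex of that type rather than being vertex-specific.
    W_self_bs = rng.standard_normal((d, dim_out)) * 0.1
    W_self_ue = rng.standard_normal((d, dim_out)) * 0.1
    W_ue2bs = rng.standard_normal((d, dim_out)) * 0.1
    W_bs2ue = rng.standard_normal((d, dim_out)) * 0.1

    # Permutation-invariant aggregation (mean) of the other type's vertices.
    agg_ue = h_ue.mean(axis=0, keepdims=True)
    agg_bs = h_bs.mean(axis=0, keepdims=True)

    h_bs_next = relu(h_bs @ W_self_bs + agg_ue @ W_ue2bs)
    h_ue_next = relu(h_ue @ W_self_ue + agg_bs @ W_bs2ue)
    return h_bs_next, h_ue_next

# Toy check of the equivariance property: permuting the UEs permutes their
# output embeddings the same way and leaves the BS outputs unchanged.
h_bs = np.random.default_rng(1).standard_normal((3, 8))
h_ue = np.random.default_rng(2).standard_normal((5, 8))
perm = np.array([4, 2, 0, 1, 3])
out_bs, out_ue = hetgnn_layer(h_bs, h_ue)
out_bs_p, out_ue_p = hetgnn_layer(h_bs, h_ue[perm])
assert np.allclose(out_bs, out_bs_p)
assert np.allclose(out_ue[perm], out_ue_p)
```

Because the weights are tied across vertices of a type rather than being vertex-specific, the parameter count does not grow with the number of cells or users, which is the kind of prior knowledge the abstract refers to embedding into the DNN.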
Related papers
- Learning Load Balancing with GNN in MPTCP-Enabled Heterogeneous Networks [13.178956651532213]
We propose a graph neural network (GNN)-based model to tackle the LB problem for MPTCP-enabled HetNets.
Compared to the conventional deep neural network (DNN), the proposed GNN-based model exhibits two key strengths.
arXiv Detail & Related papers (2024-10-22T15:49:53Z)
- GPT-PINN: Generative Pre-Trained Physics-Informed Neural Networks toward non-intrusive Meta-learning of parametric PDEs [0.0]
We propose the Generative Pre-Trained PINN (GPT-PINN) to mitigate both challenges in the setting of parametric PDEs.
As a network of networks, its outer-/meta-network is hyper-reduced, with only one hidden layer having a significantly reduced number of neurons.
The meta-network adaptively "learns" the parametric dependence of the system and "grows" this hidden layer one neuron at a time.
arXiv Detail & Related papers (2023-03-27T02:22:09Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we push the ensemble learning of GNNs one step forward, with improved accuracy and robustness against adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- A Model-based GNN for Learning Precoding [37.060397377445504]
Learning precoding policies with neural networks enables low complexity online implementation, robustness to channel impairments, and joint optimization with channel acquisition.
Existing neural networks suffer from high training complexity and poor generalization ability when they are used to learn to optimize precoding for mitigating multi-user interference.
We propose a graph neural network (GNN) to learn precoding policies by harnessing both the mathematical model and the property of the policies.
arXiv Detail & Related papers (2022-12-01T20:40:38Z)
- Relation Embedding based Graph Neural Networks for Handling Heterogeneous Graph [58.99478502486377]
We propose a simple yet efficient framework that equips homogeneous GNNs with the ability to handle heterogeneous graphs.
Specifically, we propose Relation Embedding based Graph Neural Networks (RE-GNNs), which employ only one parameter per relation to embed the importance of edge type relations and self-loop connections.
arXiv Detail & Related papers (2022-09-23T05:24:18Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
- Resource Allocation via Graph Neural Networks in Free Space Optical Fronthaul Networks [119.81868223344173]
This paper investigates the optimal resource allocation in free space optical (FSO) fronthaul networks.
We consider the graph neural network (GNN) for the policy parameterization to exploit the FSO network structure.
The primal-dual learning algorithm is developed to train the GNN in a model-free manner, where the knowledge of system models is not required.
arXiv Detail & Related papers (2020-06-26T14:20:48Z)
- Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails at its distributed task if the topological randomness is not properly accounted for.
arXiv Detail & Related papers (2020-06-04T08:00:00Z)
- Self-Organized Operational Neural Networks with Generative Neurons [87.32169414230822]
ONNs are heterogeneous networks with a generalized neuron model that can encapsulate any set of non-linear operators.
We propose Self-organized ONNs (Self-ONNs) with generative neurons that have the ability to adapt (optimize) the nodal operator of each connection.
arXiv Detail & Related papers (2020-04-24T14:37:56Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
- Constructing Deep Neural Networks with a Priori Knowledge of Wireless Tasks [37.060397377445504]
Two kinds of permutation invariant properties that widely exist in wireless tasks can be harnessed to reduce the number of model parameters.
We find special architectures of DNNs whose input-output relationships satisfy these properties, called permutation invariant DNNs (PINNs).
We take predictive resource allocation and interference coordination as examples to show how the PINNs can be employed for learning the optimal policy with unsupervised and supervised learning.
arXiv Detail & Related papers (2020-01-29T08:54:42Z)
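The last entry above relies on the same kind of prior: tying DNN weights so that the input-output relationship is permutation invariant or equivariant, which shrinks the parameter count. As a rough illustration only (not necessarily the exact PINN construction in that paper), a permutation-equivariant linear layer over K users can be parameterized by just two scalars instead of a full K x K weight matrix:

```python
import numpy as np

def pe_linear(x, a=0.7, b=0.1):
    """Permutation-equivariant linear layer with two free parameters.

    Weight tying: every diagonal entry of the implicit K x K weight matrix
    equals `a` and every off-diagonal entry equals `b`, i.e.
        y_k = a * x_k + b * sum_{j != k} x_j.
    Permuting the inputs permutes the outputs identically, and the parameter
    count no longer grows with the number of users K.
    """
    return a * x + b * (x.sum() - x)

# Equivariance check on toy data (the values of `a` and `b` are arbitrary).
x = np.array([1.0, 2.0, 3.0, 4.0])
perm = np.array([2, 0, 3, 1])
assert np.allclose(pe_linear(x)[perm], pe_linear(x[perm]))
```

Stacking such tied layers with elementwise nonlinearities preserves the property, which is the mechanism behind the reduced parameter and sample counts these works report.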