Graph Neural Networks-Based User Pairing in Wireless Communication
Systems
- URL: http://arxiv.org/abs/2306.00717v1
- Date: Sun, 14 May 2023 11:57:42 GMT
- Title: Graph Neural Networks-Based User Pairing in Wireless Communication
Systems
- Authors: Sharan Mourya, Pavan Reddy, SaiDhiraj Amuru, Kiran Kumar Kuchi
- Abstract summary: We propose an unsupervised graph neural network (GNN) approach to efficiently solve the user pairing problem.
At 20 dB SNR, our proposed approach achieves a 49% better sum rate than k-means and a staggering 95% better sum rate than SUS.
- Score: 0.34410212782758043
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, deep neural networks have emerged as a solution to solve NP-hard
wireless resource allocation problems in real-time. However, multi-layer
perceptron (MLP) and convolutional neural network (CNN) structures, which are
inherited from image processing tasks, are not optimized for wireless network
problems. As network size increases, these methods get harder to train and
generalize. User pairing is one such essential NP-hard optimization problem in
wireless communication systems that entails selecting users to be scheduled
together while minimizing interference and maximizing throughput. In this
paper, we propose an unsupervised graph neural network (GNN) approach to
efficiently solve the user pairing problem. Our proposed method utilizes the
Erdos goes neural pipeline to significantly outperform other scheduling methods
such as k-means and semi-orthogonal user scheduling (SUS). At 20 dB SNR, our
proposed approach achieves a 49% better sum rate than k-means and a staggering
95% better sum rate than SUS while consuming minimal time and resources. The
scalability of the proposed method is also explored as our model can handle
dynamic changes in network size without experiencing a substantial decrease in
performance. Moreover, our model can accomplish this without being explicitly
trained for larger or smaller networks, facilitating a dynamic functionality
that cannot be achieved using CNNs or MLPs.
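The objective described in the abstract can be made concrete with a small sketch. The following is an illustrative stand-in, not the paper's Erdős-goes-neural GNN pipeline: it builds the same kind of interference graph a GNN would operate on (pairwise channel correlations) and then pairs users greedily by lowest mutual correlation. All function names and the greedy rule are assumptions for illustration only.

```python
import numpy as np

def pairwise_correlation(H):
    """Normalized channel correlation |<h_i, h_j>| / (||h_i|| ||h_j||) between all users.

    H is an (n_users, n_antennas) channel matrix. The resulting matrix is
    the adjacency of the interference graph: high correlation means high
    mutual interference if the two users are scheduled together.
    """
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)
    C = np.abs(Hn @ Hn.conj().T)
    np.fill_diagonal(C, 0.0)
    return C

def greedy_pairing(H):
    """Greedily pair users whose channels are least correlated.

    This heuristic is a stand-in for the learned policy: the paper instead
    trains an unsupervised GNN to score candidate pairings over this same
    interference graph.
    """
    C = pairwise_correlation(H)
    users = list(range(H.shape[0]))
    pairs = []
    while len(users) >= 2:
        sub = C[np.ix_(users, users)].copy()
        np.fill_diagonal(sub, np.inf)        # never pair a user with itself
        i, j = np.unravel_index(np.argmin(sub), sub.shape)
        pairs.append((users[i], users[j]))
        for k in sorted((i, j), reverse=True):
            users.pop(k)
    return pairs
```

With near-orthogonal channel pairs, the heuristic recovers the obvious grouping; the paper's reported gains (e.g. the 49% and 95% sum-rate improvements at 20 dB SNR) are measured against stronger baselines such as k-means and SUS, which this toy does not reproduce.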
Related papers
- Deploying Graph Neural Networks in Wireless Networks: A Link Stability Viewpoint [13.686715722390149]
Graph neural networks (GNNs) have exhibited promising performance across a wide range of graph applications.
In wireless systems, communication among nodes is usually corrupted by wireless fading and receiver noise, resulting in degraded GNN performance.
arXiv Detail & Related papers (2024-05-09T14:37:08Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs)
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Graph Neural Networks for Power Allocation in Wireless Networks with Full Duplex Nodes [10.150768420975155]
Due to mutual interference between users, power allocation problems in wireless networks are often non-trivial.
Graph neural networks (GNNs) have recently emerged as a promising approach to tackling these problems by exploiting the underlying topology of wireless networks.
arXiv Detail & Related papers (2023-03-27T10:59:09Z)
- Learning k-Level Structured Sparse Neural Networks Using Group Envelope Regularization [4.0554893636822]
We introduce a novel approach to deploy large-scale Deep Neural Networks on constrained resources.
The method speeds up inference time and aims to reduce memory demand and power consumption.
arXiv Detail & Related papers (2022-12-25T15:40:05Z)
- Learning Cooperative Beamforming with Edge-Update Empowered Graph Neural Networks [29.23937571816269]
We propose an edge-graph-neural-network (Edge-GNN) to learn the cooperative beamforming on the graph edges.
The proposed Edge-GNN achieves higher sum rate with much shorter computation time than state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-23T02:05:06Z)
- Neural network relief: a pruning algorithm based on neural activity [47.57448823030151]
We propose a simple importance-score metric that deactivates unimportant connections.
We achieve comparable performance for LeNet architectures on MNIST.
The algorithm is not designed to minimize FLOPs when considering current hardware and software implementations.
arXiv Detail & Related papers (2021-09-22T15:33:49Z)
- Learning from Images: Proactive Caching with Parallel Convolutional Neural Networks [94.85780721466816]
A novel framework for proactive caching is proposed in this paper.
It combines model-based optimization with data-driven techniques by transforming an optimization problem into a grayscale image.
Numerical results show that the proposed scheme can reduce 71.6% computation time with only 0.8% additional performance cost.
arXiv Detail & Related papers (2021-08-15T21:32:47Z)
- Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN) with forward and backward computations independent of the network topology.
arXiv Detail & Related papers (2021-06-15T09:03:28Z)
- Resource Allocation via Graph Neural Networks in Free Space Optical Fronthaul Networks [119.81868223344173]
This paper investigates the optimal resource allocation in free space optical (FSO) fronthaul networks.
We consider the graph neural network (GNN) for the policy parameterization to exploit the FSO network structure.
The primal-dual learning algorithm is developed to train the GNN in a model-free manner, where the knowledge of system models is not required.
arXiv Detail & Related papers (2020-06-26T14:20:48Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale data with a deep neural network.
Our method requires far fewer communication rounds than baseline approaches, with theoretical guarantees on the number of rounds.
Our experiments on several datasets demonstrate its effectiveness and corroborate our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.