A Bipartite Graph Neural Network Approach for Scalable Beamforming
Optimization
- URL: http://arxiv.org/abs/2207.05364v1
- Date: Tue, 12 Jul 2022 07:59:21 GMT
- Title: A Bipartite Graph Neural Network Approach for Scalable Beamforming
Optimization
- Authors: Junbeom Kim, Hoon Lee, Seung-Eun Hong, Seok-Hwan Park
- Abstract summary: Deep learning (DL) techniques have been intensively studied for the optimization of multi-user multiple-input single-output (MU-MISO) systems.
This paper develops a bipartite graph neural network (BGNN) framework for beamforming that is flexible with respect to the system size, i.e., the number of antennas and users.
- Score: 19.747638780327257
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning (DL) techniques have been intensively studied for the
optimization of multi-user multiple-input single-output (MU-MISO) downlink
systems owing to the capability of handling nonconvex formulations. However,
the fixed computation structure of existing deep neural networks (DNNs) lacks
flexibility with respect to the system size, i.e., the number of antennas or
users. This paper develops a bipartite graph neural network (BGNN) framework, a
scalable DL solution designed for multi-antenna beamforming optimization. The
MU-MISO system is first characterized by a bipartite graph in which two
disjoint vertex sets, consisting of the transmit antennas and the users
respectively, are connected via pairwise edges. The vertex interconnection
states are modeled by channel fading coefficients. Thus, a generic beamforming
optimization process is interpreted as a computation task over a weighted
bipartite graph.
This approach partitions the beamforming optimization procedure into multiple
suboperations dedicated to individual antenna vertices and user vertices.
Separated vertex operations lead to scalable beamforming calculations that are
invariant to the system size. The vertex operations are realized by a group of
DNN modules that collectively form the BGNN architecture. Identical DNNs are
reused at all antennas and users so that the resultant learning structure
becomes flexible to the network size. Component DNNs of the BGNN are trained
jointly over numerous MU-MISO configurations with randomly varying network
sizes. As a result, the trained BGNN can be universally applied to arbitrary
MU-MISO systems. Numerical results validate the advantages of the BGNN
framework over conventional methods.
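The abstract does not give the exact vertex update rules, so the following PyTorch sketch only illustrates the described structure: antenna and user vertices exchange messages along channel-weighted edges, identical DNN modules are shared across all vertices, and mean aggregation keeps the computation valid for any number of antennas or users. All module names, hidden dimensions, and the power-normalized readout are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class BGNNLayer(nn.Module):
    """One round of bipartite message passing between antenna and user vertices."""

    def __init__(self, dim: int):
        super().__init__()
        # Edge features: the (Re, Im) parts of each channel fading coefficient.
        self.to_user = nn.Sequential(
            nn.Linear(dim + 2, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.to_antenna = nn.Sequential(
            nn.Linear(dim + 2, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, ant, usr, edge):
        # ant: (N, dim) antenna states; usr: (K, dim) user states;
        # edge: (N, K, 2) channel coefficients. Mean aggregation makes the
        # update invariant to N and K.
        N, K, _ = edge.shape
        usr = usr + self.to_user(
            torch.cat([ant.unsqueeze(1).expand(N, K, -1), edge], -1)).mean(0)
        ant = ant + self.to_antenna(
            torch.cat([usr.unsqueeze(0).expand(N, K, -1), edge], -1)).mean(1)
        return ant, usr

class BGNN(nn.Module):
    """Size-invariant beamforming network: the same weights serve any (N, K)."""

    def __init__(self, dim: int = 64, num_layers: int = 3):
        super().__init__()
        self.dim = dim
        self.layers = nn.ModuleList([BGNNLayer(dim) for _ in range(num_layers)])
        # Per-edge readout: one complex beamforming coefficient per
        # antenna-user pair, emitted as (Re, Im).
        self.readout = nn.Sequential(
            nn.Linear(2 * dim + 2, dim), nn.ReLU(), nn.Linear(dim, 2))

    def forward(self, h: torch.Tensor, power: float = 1.0):
        # h: (N, K) complex channel matrix of an arbitrary-size MU-MISO system.
        N, K = h.shape
        edge = torch.stack([h.real, h.imag], dim=-1)          # (N, K, 2)
        ant = torch.zeros(N, self.dim)
        usr = torch.zeros(K, self.dim)
        for layer in self.layers:
            ant, usr = layer(ant, usr, edge)
        pair = torch.cat([ant.unsqueeze(1).expand(N, K, -1),
                          usr.unsqueeze(0).expand(N, K, -1), edge], dim=-1)
        v = self.readout(pair)
        v = torch.complex(v[..., 0], v[..., 1])               # (N, K)
        # Rescale so the total transmit power meets the budget ||V||_F^2 = power.
        return v * (power ** 0.5) / v.abs().pow(2).sum().sqrt()

# Because all DNN modules are shared across vertices, one trained model
# handles arbitrary system sizes:
model = BGNN()
v_small = model(torch.randn(4, 2, dtype=torch.cfloat))   # 4 antennas, 2 users
v_large = model(torch.randn(16, 8, dtype=torch.cfloat))  # 16 antennas, 8 users
```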
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression [55.992528247880685]
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs).
We show that our system and method can achieve a 1.45-9.39x speedup compared to baseline methods while ensuring convergence.
arXiv Detail & Related papers (2024-10-16T16:13:19Z)
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs).
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z)
- MG-GNN: Multigrid Graph Neural Networks for Learning Multilevel Domain Decomposition Methods [0.0]
We propose multigrid graph neural networks (MG-GNN) for learning optimized parameters in two-level domain decomposition methods.
We show that MG-GNN outperforms popular hierarchical graph network architectures for this optimization.
arXiv Detail & Related papers (2023-01-26T19:44:45Z)
- Learning Cooperative Beamforming with Edge-Update Empowered Graph Neural Networks [29.23937571816269]
We propose an edge-graph-neural-network (Edge-GNN) to learn the cooperative beamforming on the graph edges.
The proposed Edge-GNN achieves a higher sum rate with much shorter computation time than state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-23T02:05:06Z)
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
- Neural Calibration for Scalable Beamforming in FDD Massive MIMO with Implicit Channel Estimation [10.775558382613077]
Channel estimation and beamforming play critical roles in frequency-division duplexing (FDD) massive multiple-input multiple-output (MIMO) systems.
We propose a deep learning-based approach that directly optimizes the beamformers at the base station according to the received uplink pilots.
A neural calibration method is proposed to improve the scalability of the end-to-end design.
arXiv Detail & Related papers (2021-08-03T14:26:14Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
- An Uncoupled Training Architecture for Large Graph Learning [20.784230322205232]
We present Node2Grids, a flexible uncoupled training framework for embedding graph data into grid-like data.
By ranking each node's influence by degree, Node2Grids selects the most influential first-order and second-order neighbors and fuses their information with the central node.
To further improve the efficiency of downstream tasks, a simple CNN-based neural network is employed to capture the significant information from the mapped grid-like data (see the sketch after this list).
arXiv Detail & Related papers (2020-03-21T11:49:16Z)
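As a rough illustration of the Node2Grids idea referenced above, the sketch below ranks a node's first- and second-order neighbors by degree and stacks their features into a fixed-size grid that a plain CNN can consume. The selection rule, grid shape, and zero-padding are guesses; the paper's actual mapping may differ.

```python
import numpy as np

def node2grid(adj: np.ndarray, feats: np.ndarray, node: int, k: int = 5):
    """Map one node to a (2k + 1, feat_dim) grid: central node row first,
    then the k highest-degree first-order and second-order neighbors."""
    deg = adj.sum(axis=1)

    def top_k_rows(candidates):
        ranked = sorted(candidates, key=lambda n: deg[n], reverse=True)[:k]
        rows = [feats[n] for n in ranked]
        # Pad small neighborhoods with zero rows so every grid has one shape.
        rows += [np.zeros(feats.shape[1])] * (k - len(rows))
        return rows

    first = {n for n in range(len(adj)) if adj[node, n]} - {node}
    second = {m for n in first for m in range(len(adj)) if adj[n, m]}
    second -= first | {node}
    return np.stack([feats[node]] + top_k_rows(first) + top_k_rows(second))

# Example: a random undirected graph with 8-dimensional node features.
rng = np.random.default_rng(0)
adj = (rng.random((30, 30)) > 0.85).astype(float)
adj = np.maximum(adj, adj.T)                      # symmetrize
feats = rng.standard_normal((30, 8))
grid = node2grid(adj, feats, node=0)              # shape (11, 8), CNN-ready
```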