Interference-Limited Ultra-Reliable and Low-Latency Communications:
Graph Neural Networks or Stochastic Geometry?
- URL: http://arxiv.org/abs/2207.06918v1
- Date: Mon, 11 Jul 2022 05:49:41 GMT
- Title: Interference-Limited Ultra-Reliable and Low-Latency Communications:
Graph Neural Networks or Stochastic Geometry?
- Authors: Yuhong Liu, Changyang She, Yi Zhong, Wibowo Hardjawana, Fu-Chun Zheng,
and Branka Vucetic
- Abstract summary: We build a cascaded Random Edge Graph Neural Network (REGNN) to represent the repetition scheme and train it.
We analyze the QoS violation probability using stochastic geometry in a symmetric scenario and apply a model-based Exhaustive Search (ES) method to find the optimal solution.
- Score: 45.776476161876204
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we aim to improve the Quality-of-Service (QoS) of
Ultra-Reliable and Low-Latency Communications (URLLC) in
interference-limited wireless networks. To obtain time diversity within the
channel coherence time, we first put forward a random repetition scheme that
randomizes the interference power. Then, we optimize the number of reserved
slots and the number of repetitions for each packet to minimize the QoS
violation probability, defined as the percentage of users that cannot achieve
URLLC. We build a cascaded Random Edge Graph Neural Network (REGNN) to
represent the repetition scheme and develop a model-free unsupervised learning
method to train it. We analyze the QoS violation probability using stochastic
geometry in a symmetric scenario and apply a model-based Exhaustive Search (ES)
method to find the optimal solution. Simulation results show that in the
symmetric scenario, the QoS violation probabilities achieved by the model-free
learning method and the model-based ES method are nearly the same. In more
general scenarios, the cascaded REGNN generalizes very well in wireless
networks with different scales, network topologies, cell densities, and
frequency reuse factors. It outperforms the model-based ES method in the
presence of model mismatch.
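The cascaded REGNN is not published as code with this digest, but its core building block can be sketched: a graph filter whose edge weights are resampled random fading gains. Below is a minimal sketch of one such layer; all names, dimensions, and the fading model are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def regnn_layer(x, A, taps):
    """One random-edge graph-filter layer: y = ReLU(sum_k w_k A^k x),
    where the adjacency A carries randomly sampled fading gains."""
    y, z = np.zeros_like(x), x.copy()
    for w_k in taps:
        y += w_k * z
        z = A @ z          # diffuse one more hop over the graph
    return np.maximum(y, 0.0)

# Toy interference graph: 8 users, path loss perturbed by random fading.
n = 8
pathloss = rng.uniform(0.1, 1.0, size=(n, n))
np.fill_diagonal(pathloss, 0.0)
A = pathloss * rng.exponential(1.0, size=(n, n))   # random edge weights

x = rng.uniform(size=n)              # per-user input feature
taps = [0.5, 0.3, 0.2]               # filter coefficients (order 2)
hidden = regnn_layer(x, A, taps)     # cascading further layers is analogous

# A sigmoid readout could be read as per-user repetition probabilities
# (illustrative only, not the authors' parameterization).
print(1.0 / (1.0 + np.exp(-regnn_layer(hidden, A, taps))))
```

Model-free training would then adjust the filter taps to minimize an empirical estimate of the QoS violation probability, without assuming a channel model.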
Related papers
- Approximation with Random Shallow ReLU Networks with Applications to Model Reference Adaptive Control [0.0]
We show that ReLU networks with randomly generated weights and biases achieve $L_\infty$ error of $O(m^{-1/2})$ with high probability.
We show how this result can be used to obtain approximations of the required accuracy in a model reference adaptive control application.
arXiv Detail & Related papers (2024-03-25T19:39:17Z)
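A quick empirical illustration of the $O(m^{-1/2})$ rate from the entry above: fit only the output layer of a randomly initialized shallow ReLU network by least squares. The target function and sampling distributions below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 512)
target = np.sin(2.0 * np.pi * x)                 # toy target on [0, 1]

for m in (16, 64, 256, 1024):
    # Hidden weights and biases are randomly sampled and never trained.
    w = rng.normal(size=m)
    b = rng.uniform(-1.0, 1.0, size=m)
    features = np.maximum(np.outer(x, w) + b, 0.0)   # (512, m) ReLU features
    # Only the linear output layer is fit, by least squares.
    coef, *_ = np.linalg.lstsq(features, target, rcond=None)
    err = np.max(np.abs(features @ coef - target))
    print(f"m={m:5d}  sup error={err:.5f}")   # shrinks roughly like m**-0.5
```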
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the increase in the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
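A minimal sketch of the unfolding idea from the entry above: ISTA iterations rewritten as a fixed stack of layers. For simplicity this uses the standard (non-smooth) soft-thresholding rather than the smooth variant the paper studies, and the dictionary, sparsity level, and depth are illustrative.

```python
import numpy as np

def soft_threshold(v, theta):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def unfolded_ista(y, A, theta, n_layers=10):
    """ISTA unrolled into a fixed number of layers; in a learned
    (LISTA-style) network, W1, W2, theta would be trainable per layer."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L with L the Lipschitz constant
    W1 = step * A.T
    W2 = np.eye(A.shape[1]) - step * (A.T @ A)
    x = np.zeros(A.shape[1])
    for _ in range(n_layers):
        x = soft_threshold(W1 @ y + W2 @ x, step * theta)
    return x

rng = np.random.default_rng(2)
A = rng.normal(size=(30, 100)) / np.sqrt(30)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
x_hat = unfolded_ista(A @ x_true, A, theta=0.1)
print("reconstruction error:", np.linalg.norm(x_hat - x_true))
```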
- Likelihood-Free Inference with Generative Neural Networks via Scoring Rule Minimization [0.0]
Inference methods yield posterior approximations for simulator models with intractable likelihood.
Many works trained neural networks to approximate either the intractable likelihood or the posterior directly.
Here, we propose to approximate the posterior with generative networks trained by Scoring Rule minimization.
arXiv Detail & Related papers (2022-05-31T13:32:55Z)
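One strictly proper scoring rule commonly used for this kind of training is the energy score; a sample estimator is sketched below with placeholder "generator" draws (all data here is synthetic and illustrative).

```python
import numpy as np

def energy_score(samples, y):
    """Sample estimate of ES(P, y) = E||X - y|| - 0.5 E||X - X'||,
    lower is better. samples: (n, d) draws from the generative
    network; y: (d,) observation."""
    term1 = np.mean(np.linalg.norm(samples - y, axis=1))
    diffs = samples[:, None, :] - samples[None, :, :]
    term2 = np.mean(np.linalg.norm(diffs, axis=2))
    return term1 - 0.5 * term2

rng = np.random.default_rng(3)
y_obs = np.array([0.5, -1.0])
good = rng.normal(loc=y_obs, scale=0.1, size=(200, 2))   # concentrated near y
bad = rng.normal(loc=0.0, scale=2.0, size=(200, 2))      # diffuse, off-target
print(energy_score(good, y_obs), "<", energy_score(bad, y_obs))
```

In actual training, the samples would be reparameterized draws from the generative network so the expected score can be minimized by gradient descent.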
- Verification of Neural-Network Control Systems by Integrating Taylor Models and Zonotopes [0.0]
We study the verification problem for closed-loop dynamical systems with neural-network controllers (NNCS).
We present an algorithm to chain approaches based on Taylor models and zonotopes, yielding a precise reachability algorithm for NNCS.
arXiv Detail & Related papers (2021-12-16T20:46:39Z)
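For a flavor of the set propagation in the entry above: affine layers map zonotopes exactly, while nonlinearities require abstraction. The sketch below falls back to a crude interval box at the ReLU, which is far weaker than the paper's Taylor-model/zonotope chaining; sizes and weights are illustrative.

```python
import numpy as np

def affine_zonotope(center, generators, W, b):
    """Affine maps are exact on zonotopes x = c + G @ e, |e_i| <= 1."""
    return W @ center + b, W @ generators

def relu_via_box(center, generators):
    """Crude ReLU handling: concretize to an interval box, clip at zero,
    and rebuild an axis-aligned zonotope (loses dependency information)."""
    radius = np.sum(np.abs(generators), axis=1)
    lo = np.maximum(center - radius, 0.0)
    hi = np.maximum(center + radius, 0.0)
    return (lo + hi) / 2.0, np.diag((hi - lo) / 2.0)

rng = np.random.default_rng(4)
W, b = rng.normal(size=(3, 2)), rng.normal(size=3)
c, G = np.array([1.0, -0.5]), 0.1 * np.eye(2)   # small input set
c, G = affine_zonotope(c, G, W, b)              # exact through the layer
c, G = relu_via_box(c, G)                       # over-approximate the ReLU
print("output box:", c - np.sum(np.abs(G), 1), c + np.sum(np.abs(G), 1))
```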
- Revisit Geophysical Imaging in A New View of Physics-informed Generative Adversarial Learning [2.12121796606941]
Full waveform inversion produces high-resolution subsurface models.
FWI with a least-squares loss function suffers from drawbacks such as the local-minima problem.
Recent works relying on partial differential equations and neural networks show promising performance for two-dimensional FWI.
We propose an unsupervised learning paradigm that integrates the wave equation with a discriminative network to accurately estimate physically consistent models.
arXiv Detail & Related papers (2021-09-23T15:54:40Z)
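A sketch of the loss structure suggested by the FWI entry above, with a toy linear operator standing in for the wave-equation solver and a fixed function standing in for the trained discriminator; none of this is the authors' formulation.

```python
import numpy as np

rng = np.random.default_rng(5)

F = rng.normal(size=(40, 20))    # toy stand-in for the wave-equation solver
m_true = rng.normal(size=20)     # "true" subsurface model
d_obs = F @ m_true               # observed seismic data

def critic(residual):
    """Stand-in for a trained discriminator scoring residual realism."""
    return np.mean(np.log1p(residual ** 2))

def unsupervised_loss(m, lam=0.5):
    residual = F @ m - d_obs
    data_misfit = 0.5 * np.mean(residual ** 2)   # physics consistency
    return data_misfit + lam * critic(residual)  # plus adversarial term

# The physically consistent model scores strictly better than a blank one.
print(unsupervised_loss(np.zeros(20)), ">", unsupervised_loss(m_true))
```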
- Rate Distortion Characteristic Modeling for Neural Image Compression [59.25700168404325]
End-to-end optimization capability offers neural image compression (NIC) superior lossy compression performance.
Distinct models must be trained to reach different points in the rate-distortion (R-D) space.
We formulate the essential mathematical functions to describe the R-D behavior of NIC using deep networks and statistical modeling.
arXiv Detail & Related papers (2021-06-24T12:23:05Z)
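One simple statistical model in the spirit of the R-D entry above fits a parametric distortion-rate curve $D(R) = c\,e^{-kR}$ to a handful of trained operating points; the sample points below are made up.

```python
import numpy as np

# Hypothetical (rate, distortion) points from separately trained NIC models.
R = np.array([0.25, 0.5, 1.0, 2.0, 4.0])             # bits per pixel (made up)
D = np.array([0.080, 0.052, 0.024, 0.0065, 0.0011])  # MSE (made up)

# Fit log D = log c - k * R by least squares.
slope, intercept = np.polyfit(R, np.log(D), 1)
c, k = np.exp(intercept), -slope

def d_of_r(r):
    """Continuous R-D characteristic interpolating the trained models."""
    return c * np.exp(-k * r)

print(d_of_r(1.5))   # predicted distortion at a rate no model was trained for
```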
- A Distributed Optimisation Framework Combining Natural Gradient with Hessian-Free for Discriminative Sequence Training [16.83036203524611]
This paper presents a novel natural gradient and Hessian-free (NGHF) optimisation framework for neural network training.
It relies on the linear conjugate gradient (CG) algorithm to combine the natural gradient (NG) method with local curvature information from Hessian-free (HF) or other second-order methods.
Experiments are reported on the multi-genre broadcast data set for a range of different acoustic model types.
arXiv Detail & Related papers (2021-03-12T22:18:34Z)
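The workhorse in frameworks like the one above is a linear CG solve driven purely by curvature-vector products; a generic sketch follows, with an SPD matrix standing in for the NG or HF curvature.

```python
import numpy as np

def conjugate_gradient(curvature_vec, grad, n_iters=50, tol=1e-10):
    """Solve curvature @ d = -grad by linear CG, using only
    curvature-vector products (the matrix itself is never formed)."""
    d = np.zeros_like(grad)
    r = -grad - curvature_vec(d)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iters):
        Cp = curvature_vec(p)
        alpha = rs / (p @ Cp)
        d += alpha * p
        r -= alpha * Cp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return d

rng = np.random.default_rng(7)
M = rng.normal(size=(20, 20))
C = M @ M.T + 20 * np.eye(20)          # SPD stand-in for NG/HF curvature
g = rng.normal(size=20)
d = conjugate_gradient(lambda v: C @ v, g)
print(np.linalg.norm(C @ d + g))       # ~0: d is the second-order direction
```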
- Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity, and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive models with eXogenous inputs and arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Experts concept developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z)
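A minimal mixture-of-experts ARX predictor in the spirit of the entry above: linear experts combined by a softmax gate over the regressor. Orders, sizes, and the gate form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_arx_predict(phi, experts, gate_W):
    """phi: ARX regressor [y_{t-1}, y_{t-2}, u_{t-1}, u_{t-2}]; each
    expert is a linear ARX model; the gate softly assigns phi to a region."""
    gates = softmax(gate_W @ phi)                 # (n_experts,) mixture weights
    preds = np.array([w @ phi for w in experts])  # per-expert prediction
    return gates @ preds

n_experts, dim = 3, 4
experts = [rng.normal(size=dim) for _ in range(n_experts)]
gate_W = rng.normal(size=(n_experts, dim))
phi = rng.normal(size=dim)
print(moe_arx_predict(phi, experts, gate_W))
```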
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization for large-scale data with a deep neural network as the predictive model.
Our method requires far fewer communication rounds while retaining the same order of iteration complexity in theory.
Experiments on several benchmark datasets demonstrate the effectiveness of our method and confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
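The distributed primal-dual algorithm itself is not reproduced here; the sketch below only shows a standard pairwise squared-hinge surrogate of 1 - AUC, the kind of objective such methods optimize, on synthetic scores.

```python
import numpy as np

def pairwise_auc_surrogate(scores, labels, margin=1.0):
    """Squared-hinge pairwise surrogate for 1 - AUC: penalize positive
    examples that fail to outscore negatives by the margin."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diffs = pos[:, None] - neg[None, :]       # all positive-negative pairs
    return np.mean(np.maximum(margin - diffs, 0.0) ** 2)

rng = np.random.default_rng(9)
labels = rng.integers(0, 2, size=200)
good_scores = labels + 0.1 * rng.normal(size=200)   # nearly separates classes
bad_scores = rng.normal(size=200)                   # uninformative scores
print(pairwise_auc_surrogate(good_scores, labels),
      "<", pairwise_auc_surrogate(bad_scores, labels))
```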
- Model Fusion via Optimal Transport [64.13185244219353]
We present a layer-wise model fusion algorithm for neural networks.
We show that this can successfully yield "one-shot" knowledge transfer between neural networks trained on heterogeneous non-i.i.d. data.
arXiv Detail & Related papers (2019-10-12T22:07:15Z)
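The flavor of layer-wise fusion above can be shown with the hard-assignment special case of optimal transport: match the second model's neurons to the first's by weight distance, permute, then average. A real implementation must also propagate the permutation to the next layer's inputs; everything below is an illustrative sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def fuse_layers(W_a, W_b):
    """Align model B's neurons (rows of W_b) to model A's by solving an
    assignment problem on pairwise weight distances, then average.
    Hard assignment is the special case of OT with uniform marginals."""
    cost = np.linalg.norm(W_a[:, None, :] - W_b[None, :, :], axis=2)
    row, col = linear_sum_assignment(cost)   # minimal total matching cost
    return 0.5 * (W_a + W_b[col])            # permute B, then average

rng = np.random.default_rng(10)
W_a = rng.normal(size=(8, 16))                      # layer weights of model A
perm = rng.permutation(8)
W_b = W_a[perm] + 0.01 * rng.normal(size=(8, 16))   # B = permuted, noisy A
fused = fuse_layers(W_a, W_b)
print(np.linalg.norm(fused - W_a))   # small: alignment undid the permutation
```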
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.