Learnable Digital Twin for Efficient Wireless Network Evaluation
- URL: http://arxiv.org/abs/2306.06574v1
- Date: Sun, 11 Jun 2023 03:43:39 GMT
- Title: Learnable Digital Twin for Efficient Wireless Network Evaluation
- Authors: Boning Li, Timofey Efimov, Abhishek Kumar, Jose Cortes, Gunjan Verma,
Ananthram Swami, Santiago Segarra
- Abstract summary: Network digital twins (NDTs) facilitate the estimation of key performance indicators (KPIs) before physically implementing a network.
In this paper, we propose a learning-based NDT for network simulators.
- Score: 40.829275623191656
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Network digital twins (NDTs) facilitate the estimation of key performance
indicators (KPIs) before physically implementing a network, thereby enabling
efficient optimization of the network configuration. In this paper, we propose
a learning-based NDT for network simulators. The proposed method offers a
holistic representation of information flow in a wireless network by
integrating node, edge, and path embeddings. Through this approach, the model
is trained to map the network configuration to KPIs in a single forward pass.
Hence, it offers a more efficient alternative to traditional simulation-based
methods, thus allowing for rapid experimentation and optimization. Our proposed
method has been extensively tested through comprehensive experimentation in
various scenarios, including wired and wireless networks. Results show that it
outperforms baseline learning models in terms of accuracy and robustness.
Moreover, our approach achieves comparable performance to simulators but with
significantly higher computational efficiency.
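The abstract does not spell out the architecture, but the stated idea admits a minimal sketch: embed nodes and edges, aggregate the embeddings along each traffic path, and read out per-path KPIs in one forward pass. All layer sizes, the GRU path aggregator, and the KPI head below are illustrative assumptions, not the paper's exact design.
```python
import torch
import torch.nn as nn


class NDTSketch(nn.Module):
    """Toy learnable NDT: node, edge, and path embeddings feed one KPI head."""

    def __init__(self, node_dim, edge_dim, hidden=64):
        super().__init__()
        self.node_enc = nn.Linear(node_dim, hidden)   # node embeddings
        self.edge_enc = nn.Linear(edge_dim, hidden)   # edge (link) embeddings
        self.path_rnn = nn.GRU(2 * hidden, hidden, batch_first=True)
        self.kpi_head = nn.Linear(hidden, 1)          # e.g., per-path delay

    def forward(self, node_x, edge_x, edge_index, paths):
        # paths: one sequence of traversed edge indices per traffic flow.
        h_node = torch.relu(self.node_enc(node_x))
        h_edge = torch.relu(self.edge_enc(edge_x))
        kpis = []
        for path in paths:
            # Pair each traversed edge's embedding with its source node's
            # embedding, then summarize the sequence with a GRU.
            src_nodes = edge_index[0, path]
            seq = torch.cat([h_edge[path], h_node[src_nodes]], dim=-1)
            _, h_path = self.path_rnn(seq.unsqueeze(0))
            kpis.append(self.kpi_head(h_path.squeeze(0)))
        return torch.cat(kpis)  # one KPI estimate per path, single pass


# Toy usage: 4 nodes, 3 directed links, 2 flows.
node_x = torch.randn(4, 5)                        # per-node configuration
edge_x = torch.randn(3, 3)                        # per-link configuration
edge_index = torch.tensor([[0, 1, 2], [1, 2, 3]])
paths = [torch.tensor([0, 1]), torch.tensor([1, 2])]
model = NDTSketch(node_dim=5, edge_dim=3)
print(model(node_x, edge_x, edge_index, paths).shape)  # torch.Size([2, 1])
```
Because the model replaces repeated simulator runs with a single forward pass, candidate configurations can be scored in batches, which is what enables the rapid experimentation the abstract claims.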
Related papers
- Rapid Network Adaptation: Learning to Adapt Neural Networks Using Test-Time Feedback [12.946419909506883]
We create a closed-loop system that makes use of a test-time feedback signal to adapt a network on the fly.
We show that this loop can be effectively implemented using a learning-based function, which realizes an amortized optimizer for the network.
This leads to an adaptation method, named Rapid Network Adaptation (RNA), that is notably more flexible and orders of magnitude faster than the baselines.
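As a rough illustration of the closed loop described above, the sketch below lets a small learned controller consume a test-time feedback signal and produce a feature-level adaptation in a single pass, standing in for the amortized optimizer; the backbone, the controller, and all shapes are assumptions.
```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
controller = nn.Linear(10, 32)  # feedback -> additive feature adaptation


def adapted_forward(x, feedback):
    h = torch.relu(backbone[0](x))
    h = h + controller(feedback)   # single forward pass, no inner SGD loop
    return backbone[2](h)


x = torch.randn(4, 16)
feedback = torch.randn(4, 10)      # test-time feedback signal
print(adapted_forward(x, feedback).shape)  # torch.Size([4, 10])
```
The speedup over gradient-based test-time adaptation comes from exactly this substitution: one controller forward pass replaces an inner optimization loop.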
arXiv Detail & Related papers (2023-09-27T16:20:39Z)
- Online Network Source Optimization with Graph-Kernel MAB [62.6067511147939]
We propose Grab-UCB, a graph-kernel multi-armed bandit algorithm to learn online the optimal source placement in large-scale networks.
We describe the network processes with an adaptive graph dictionary model, which typically leads to sparse spectral representations.
We derive the performance guarantees that depend on network parameters, which further influence the learning curve of the sequential decision strategy.
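The graph-dictionary machinery is specific to the paper and omitted here; the sketch below shows only the standard UCB skeleton over candidate source nodes, with a placeholder reward model as an assumption.
```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, horizon, c = 10, 500, 2.0
true_reward = rng.uniform(0, 1, n_nodes)   # unknown quality of each source
counts = np.zeros(n_nodes)
means = np.zeros(n_nodes)

for t in range(1, horizon + 1):
    # Optimism: unplayed arms get infinite score, otherwise mean + bonus.
    bonus = np.sqrt(c * np.log(t) / np.maximum(counts, 1))
    ucb = np.where(counts > 0, means + bonus, np.inf)
    arm = int(np.argmax(ucb))
    r = true_reward[arm] + 0.1 * rng.standard_normal()
    counts[arm] += 1
    means[arm] += (r - means[arm]) / counts[arm]   # running mean update

print("estimated best source:", int(np.argmax(means)))
```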
arXiv Detail & Related papers (2023-07-07T15:03:42Z)
- Rewarded meta-pruning: Meta Learning with Rewards for Channel Pruning [19.978542231976636]
This paper proposes a novel method to reduce the parameters and FLOPs for computational efficiency in deep learning models.
We introduce accuracy and efficiency coefficients to control the trade-off between the accuracy of the network and its computing efficiency.
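A minimal sketch of such a trade-off, assuming a simple linear form for the reward; the paper's exact formula and coefficient names may differ.
```python
def pruning_reward(accuracy, flops, base_flops, acc_coef=1.0, eff_coef=0.5):
    """Reward accuracy, penalize FLOPs relative to the unpruned baseline.
    Tuning acc_coef against eff_coef moves the trade-off point."""
    return acc_coef * accuracy - eff_coef * (flops / base_flops)


# A candidate keeping 40% of FLOPs at 71% accuracy vs. 100% FLOPs at 75%:
print(pruning_reward(0.71, 0.4e9, 1.0e9))  # 0.51
print(pruning_reward(0.75, 1.0e9, 1.0e9))  # 0.25
```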
arXiv Detail & Related papers (2023-01-26T12:32:01Z)
- Human Activity Recognition from Wi-Fi CSI Data Using Principal Component-Based Wavelet CNN [3.9533044769534444]
Human Activity Recognition (HAR) is an emerging technology with several applications in surveillance, security, and healthcare sectors.
We propose Principal Component-based Wavelet Convolutional Neural Network (or PCWCNN) -- a novel approach that offers robustness and efficiency for practical real-time applications.
We empirically show that our proposed PCWCNN model performs very well on a real dataset, outperforming existing approaches.
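The name implies a three-stage pipeline; here is a hedged sketch with assumed CSI dimensions, an assumed one-level Haar wavelet, and a toy CNN head, none of which are the paper's exact choices.
```python
import numpy as np
import torch
import torch.nn as nn

csi = np.random.randn(1024, 30)            # (time samples, subcarriers)

# PCA: keep the top principal components of the subcarrier streams.
csi -= csi.mean(axis=0)
_, _, vt = np.linalg.svd(csi, full_matrices=False)
pcs = csi @ vt[:3].T                       # (1024, 3) principal components

# One-level Haar wavelet: approximation and detail coefficients.
approx = (pcs[0::2] + pcs[1::2]) / np.sqrt(2)   # (512, 3)
detail = (pcs[0::2] - pcs[1::2]) / np.sqrt(2)
feats = np.stack([approx, detail], axis=0)      # (2, 512, 3)

cnn = nn.Sequential(
    nn.Conv2d(2, 8, kernel_size=(5, 3), padding=(2, 1)), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 6),  # 6 activities
)
logits = cnn(torch.tensor(feats, dtype=torch.float32).unsqueeze(0))
print(logits.shape)  # torch.Size([1, 6])
```
The PCA step is what keeps the method cheap enough for real-time use: the CNN sees a handful of denoised components rather than all raw subcarriers.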
arXiv Detail & Related papers (2022-12-26T13:45:19Z)
- Simple initialization and parametrization of sinusoidal networks via their kernel bandwidth [92.25666446274188]
Neural networks with sinusoidal activations have been proposed as an alternative to networks with traditional activation functions.
We first propose a simplified version of such sinusoidal neural networks, which allows both for easier practical implementation and simpler theoretical analysis.
We then analyze the behavior of these networks from the neural tangent kernel perspective and demonstrate that their kernel approximates a low-pass filter with an adjustable bandwidth.
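A minimal sinusoidal network in this spirit, where an assumed frequency scale omega plays the role of the adjustable kernel bandwidth; the paper's precise initialization and parametrization are not reproduced.
```python
import torch
import torch.nn as nn


class SinusoidalNet(nn.Module):
    def __init__(self, in_dim=1, hidden=64, omega=10.0):
        super().__init__()
        self.omega = omega                 # larger omega -> wider bandwidth
        self.l1 = nn.Linear(in_dim, hidden)
        self.l2 = nn.Linear(hidden, 1)

    def forward(self, x):
        return self.l2(torch.sin(self.omega * self.l1(x)))


x = torch.linspace(-1, 1, 100).unsqueeze(-1)
low_pass = SinusoidalNet(omega=3.0)(x)     # smoother function class
high_freq = SinusoidalNet(omega=30.0)(x)   # can fit finer detail
print(low_pass.shape, high_freq.shape)     # torch.Size([100, 1]) twice
```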
arXiv Detail & Related papers (2022-11-26T07:41:48Z)
- CONetV2: Efficient Auto-Channel Size Optimization for CNNs [35.951376988552695]
This work introduces a method that is efficient in computationally constrained environments by examining the micro-search space of channel size.
In tackling channel-size optimization, we design an automated algorithm to extract the dependencies within different connected layers of the network.
We also introduce a novel metric that highly correlates with test accuracy and enables analysis of individual network layers.
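The paper's dependency-extraction algorithm and accuracy-correlated metric are not reproduced here; the sketch below only illustrates the search pattern, with a placeholder proxy metric and one hand-written width dependency, all assumptions.
```python
import random

random.seed(0)
layers = ["conv1", "conv2", "conv3"]
choices = [16, 32, 64, 128]
# Layers tied by a residual connection must keep equal widths (dependency).
tied = {"conv1": "conv3", "conv3": "conv1"}


def proxy_metric(config):
    # Placeholder for a cheap metric assumed to track test accuracy.
    return sum(config.values()) / 384 - 0.001 * random.random()


config = {name: 32 for name in layers}
for name in layers:
    # Score each candidate width with the proxy, honoring tied layers.
    best = max(choices, key=lambda c: proxy_metric(
        {**config, name: c, **({tied[name]: c} if name in tied else {})}))
    config[name] = best
    if name in tied:
        config[tied[name]] = best   # propagate to the dependent layer

print(config)
```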
arXiv Detail & Related papers (2021-10-13T16:17:19Z)
- Learning Robust Beamforming for MISO Downlink Systems [14.429561340880074]
A base station must identify efficient multi-antenna transmission strategies using only imperfect channel state information (CSI) and its features.
We propose a robust training algorithm in which a deep neural network (DNN) is optimized to fit the real-world propagation environment.
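A hedged sketch of this setup: a DNN maps noisy CSI to power-constrained beam weights, with CSI error injected the way robust training would inject it; the sizes and the noise model are assumptions.
```python
import torch
import torch.nn as nn

n_antennas, n_users = 8, 4
dnn = nn.Sequential(
    nn.Linear(2 * n_antennas * n_users, 128), nn.ReLU(),
    nn.Linear(128, 2 * n_antennas * n_users),  # real+imag beam weights
)

h_true = torch.randn(2 * n_antennas * n_users)
h_noisy = h_true + 0.1 * torch.randn_like(h_true)  # imperfect CSI, as the
w = dnn(h_noisy)                                   # base station sees it
w = w / w.norm()                                   # unit power budget
print(w.shape)  # torch.Size([64])
```
Training on (h_noisy, h_true) pairs is what makes the learned policy robust: the network is rewarded for strategies that hold up under CSI error, not just for matching a perfect-CSI optimum.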
arXiv Detail & Related papers (2021-03-02T09:56:35Z)
- Data-Driven Random Access Optimization in Multi-Cell IoT Networks with NOMA [78.60275748518589]
Non-orthogonal multiple access (NOMA) is a key technology to enable massive machine type communications (mMTC) in 5G networks and beyond.
In this paper, NOMA is applied to improve the random access efficiency in high-density spatially-distributed multi-cell wireless IoT networks.
A novel formulation of random channel access management is proposed, in which the transmission probability of each IoT device is tuned to maximize the geometric mean of users' expected capacity.
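To make that objective concrete, the sketch below evaluates the geometric mean of expected capacities under a deliberately crude access and capacity model (an assumption, not the paper's formulation) and tunes the probabilities by random search.
```python
import numpy as np

rng = np.random.default_rng(1)
n_dev = 6
gains = rng.uniform(0.5, 2.0, n_dev)      # toy channel gains


def geo_mean_capacity(p):
    # Crude model: a device's expected capacity requires it to transmit
    # (prob p[i]) while every other device stays idle.
    idle = np.array([np.prod(np.delete(1 - p, i)) for i in range(n_dev)])
    cap = p * idle * np.log2(1 + gains)
    return np.exp(np.mean(np.log(cap + 1e-12)))   # geometric mean


# Random search over probability vectors as a stand-in optimizer.
best_p, best_val = None, -np.inf
for _ in range(2000):
    p = rng.uniform(0.01, 0.99, n_dev)
    v = geo_mean_capacity(p)
    if v > best_val:
        best_p, best_val = p, v
print(np.round(best_p, 2), round(float(best_val), 4))
```
The geometric mean is the interesting design choice: unlike the sum, it collapses to zero if any single device is starved, so maximizing it enforces fairness across devices.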
arXiv Detail & Related papers (2021-01-02T15:21:08Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
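The paper's precise definition of the FLOPs utilization ratio is not reproduced here; the toy rule below merely illustrates the idea of steering per-layer channel counts by comparing each layer's FLOPs share against a budget, with every constant an assumption.
```python
def conv_flops(c_in, c_out, k, hw):
    return c_in * c_out * k * k * hw * hw


channels = {"conv1": 32, "conv2": 64, "conv3": 64}
spatial = {"conv1": 32, "conv2": 16, "conv3": 8}
budget = 40e6  # total FLOPs budget (assumed)

flops = {n: conv_flops(c, c, 3, spatial[n]) for n, c in channels.items()}
total = sum(flops.values())
for name in channels:
    utilization = flops[name] / total        # this layer's FLOPs share
    target = budget / len(channels) / total  # even per-layer budget share
    # Widen under-utilized layers, narrow over-utilized ones (toy rule),
    # with the step damped and clipped to keep adjustments gradual.
    scale = (target / utilization) ** 0.5
    channels[name] = max(8, int(channels[name] * min(max(scale, 0.5), 2.0)))

print(channels, f"total FLOPs before: {total:.2e}")
```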
arXiv Detail & Related papers (2020-04-06T15:51:00Z)
- Large Batch Training Does Not Need Warmup [111.07680619360528]
Training deep neural networks using a large batch size has shown promising results and benefits many real-world applications.
In this paper, we propose a novel Complete Layer-wise Adaptive Rate Scaling (CLARS) algorithm for large-batch training.
Based on our analysis, we bridge the gap and illustrate the theoretical insights for three popular large-batch training techniques.
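CLARS itself is not reproduced here, but it belongs to the LARS family of layer-wise adaptive rate scaling, where each parameter group's step is rescaled by its weight-to-gradient norm ratio so that no hand-tuned warmup schedule is needed. A minimal manual step in that style, with the trust coefficient an assumed value:
```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
x, y = torch.randn(256, 10), torch.randn(256, 1)   # a "large" batch
base_lr, trust = 0.1, 1e-3

loss = nn.functional.mse_loss(model(x), y)
loss.backward()
with torch.no_grad():
    for p in model.parameters():
        if p.grad is None:
            continue
        # Layer-wise scaling: step size tracks the weight/gradient ratio.
        local_lr = trust * p.norm() / (p.grad.norm() + 1e-9)
        p -= base_lr * local_lr * p.grad
print(f"loss before step: {loss.item():.4f}")
```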
arXiv Detail & Related papers (2020-02-04T23:03:12Z)