Adaptive Target-Condition Neural Network: DNN-Aided Load Balancing for
Hybrid LiFi and WiFi Networks
- URL: http://arxiv.org/abs/2208.05035v1
- Date: Tue, 9 Aug 2022 20:46:13 GMT
- Title: Adaptive Target-Condition Neural Network: DNN-Aided Load Balancing for
Hybrid LiFi and WiFi Networks
- Authors: Han Ji, Qiang Wang, Stephen J. Redmond, Iman Tavakkolnia, Xiping Wu
- Abstract summary: Machine learning has the potential to provide a complexity-friendly load balancing solution.
The state-of-the-art (SOTA) learning-aided LB methods need retraining when the network environment changes.
A novel deep neural network (DNN) structure named adaptive target-condition neural network (A-TCNN) is proposed.
- Score: 19.483289519348315
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Load balancing (LB) is a challenging issue in the hybrid light fidelity
(LiFi) and wireless fidelity (WiFi) networks (HLWNets), due to the nature of
heterogeneous access points (APs). Machine learning has the potential to
provide a complexity-friendly LB solution with near-optimal network
performance, at the cost of a training process. The state-of-the-art (SOTA)
learning-aided LB methods, however, need retraining when the network
environment (especially the number of users) changes, significantly limiting
their practicability. In this paper, a novel deep neural network (DNN) structure
named adaptive target-condition neural network (A-TCNN) is proposed, which
conducts AP selection for one target user upon the condition of other users.
Also, an adaptive mechanism is developed to map a smaller number of users to a
larger number through splitting their data rate requirements, without affecting
the AP selection result for the target user. This enables the proposed method
to handle different numbers of users without the need for retraining. Results
show that A-TCNN achieves a network throughput very close to that of the
testing dataset, with a gap less than 3%. It is also proven that A-TCNN can
obtain a network throughput comparable to two SOTA benchmarks, while reducing
the runtime by up to three orders of magnitude.
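As a rough sketch of the adaptive mechanism described above, the snippet below maps a smaller set of condition users onto a fixed, larger number of input slots by splitting their data-rate requirements, then queries a fixed-size model for the target user's AP selection. The function names, feature layout, splitting rule, and the stand-in model are illustrative assumptions, not the authors' A-TCNN implementation.

```python
# Minimal sketch of the adaptive user-mapping idea from the abstract.
# Names, feature layout, and the splitting rule are assumptions, not the
# authors' A-TCNN implementation.
import numpy as np

def expand_condition_users(rate_demands, n_slots):
    """Map K condition users onto n_slots >= K virtual users by splitting
    data-rate requirements, so a DNN trained for a fixed input size can
    handle a varying number of real users without retraining."""
    demands = list(rate_demands)
    while len(demands) < n_slots:
        i = int(np.argmax(demands))      # split the largest remaining demand
        demands[i] /= 2.0
        demands.append(demands[i])
    return np.asarray(demands)

def select_ap(model, target_features, condition_demands, n_slots=8):
    """AP selection for one target user, conditioned on the other users."""
    cond = expand_condition_users(condition_demands, n_slots)
    x = np.concatenate([target_features, cond])
    scores = model(x)                     # one score per candidate AP
    return int(np.argmax(scores))         # index of the chosen (LiFi or WiFi) AP

# Toy usage with a random linear map standing in for the trained DNN.
rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3 + 8))           # 5 candidate APs; 3 target features + 8 slots
ap = select_ap(lambda x: W @ x,
               target_features=np.array([50.0, 0.7, 0.2]),
               condition_demands=[30.0, 80.0, 20.0])
print("selected AP index:", ap)
```

The point of the splitting step is that the network only ever sees its trained-for number of condition users, so the same weights can serve scenarios with fewer real users without retraining.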
Related papers
- Learning Load Balancing with GNN in MPTCP-Enabled Heterogeneous Networks [13.178956651532213]
We propose a graph neural network (GNN)-based model to tackle the LB problem for MPTCP-enabled HetNets.
Compared to the conventional deep neural network (DNN), the proposed GNN-based model exhibits two key strengths.
arXiv Detail & Related papers (2024-10-22T15:49:53Z)
- Resource and Mobility Management in Hybrid LiFi and WiFi Networks: A User-Centric Learning Approach [10.262324160476586]
Hybrid light fidelity (LiFi) and wireless fidelity (WiFi) networks (HLWNets) are an emerging indoor wireless communication paradigm.
The existing load balancing (LB) methods are mostly network-centric, relying on a central unit to compute a solution for all users at once.
Motivated by this, we investigate user-centric LB which allows users to update their solutions at different paces.
arXiv Detail & Related papers (2024-03-25T14:48:00Z)
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss with the increase in the number of learning epochs.
We show that the threshold on the number of training samples increases with the increase in the network width.
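For reference, unfolded ISTA layers apply soft-thresholding elementwise as their activation; the standard (non-smooth) operator with threshold $\lambda > 0$ is

$$\sigma_{\lambda}(x) = \operatorname{sign}(x)\,\max\bigl(|x| - \lambda,\ 0\bigr),$$

while the cited paper analyzes a smooth variant of this nonlinearity (its exact form is not reproduced here).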
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
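A bare-bones sketch of the shared-backbone, multi-head ensemble pattern mentioned above is given below; the layer sizes, head count, and the simple averaging rule are illustrative assumptions rather than the MEMTL architecture itself.

```python
# Minimal sketch of a shared backbone with multiple prediction heads whose
# outputs are ensembled. Sizes and the averaging rule are illustrative
# assumptions, not the MEMTL design.
import numpy as np

rng = np.random.default_rng(1)

def relu(x):
    return np.maximum(x, 0.0)

class MultiHeadEnsemble:
    def __init__(self, in_dim, hidden_dim, out_dim, n_heads):
        # Shared backbone: one hidden layer; each head: one linear layer.
        self.W_backbone = rng.normal(scale=0.1, size=(hidden_dim, in_dim))
        self.heads = [rng.normal(scale=0.1, size=(out_dim, hidden_dim))
                      for _ in range(n_heads)]

    def forward(self, x):
        h = relu(self.W_backbone @ x)                  # shared representation
        preds = [W_head @ h for W_head in self.heads]  # one prediction per head
        return np.mean(preds, axis=0)                  # simple ensemble average

model = MultiHeadEnsemble(in_dim=10, hidden_dim=32, out_dim=4, n_heads=3)
print(model.forward(rng.normal(size=10)))
```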
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Graph Neural Networks-Based User Pairing in Wireless Communication Systems [0.34410212782758043]
We propose an unsupervised graph neural network (GNN) approach to efficiently solve the user pairing problem.
At 20 dB SNR, our proposed approach achieves a 49% better sum rate than k-means and a staggering 95% better sum rate than SUS.
arXiv Detail & Related papers (2023-05-14T11:57:42Z)
- An Adaptive Device-Edge Co-Inference Framework Based on Soft Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new deep reinforcement learning (DRL) method, Soft Actor-Critic for discrete (SAC-d), which generates the exit point and the compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a framework can adapt well to complex environments such as dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
arXiv Detail & Related papers (2022-01-09T09:31:50Z)
- Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a sensitivity-informed deep neural network (SIDNN) to solve the AC optimal power flow (AC-OPF) problem.
The proposed SIDNN is compatible with a broad range of OPF schemes.
It can be seamlessly integrated into other learning-to-OPF schemes.
arXiv Detail & Related papers (2021-03-27T00:45:23Z)
- Encoding the latent posterior of Bayesian Neural Networks for uncertainty quantification [10.727102755903616]
We aim for efficient deep BNNs amenable to complex computer vision architectures.
We achieve this by leveraging variational autoencoders (VAEs) to learn the interaction and the latent distribution of the parameters at each network layer.
Our approach, Latent-Posterior BNN (LP-BNN), is compatible with the recent BatchEnsemble method, leading to highly efficient (in terms of computation and memory during both training and testing) ensembles.
arXiv Detail & Related papers (2020-12-04T19:50:09Z)
- Self-Organized Operational Neural Networks with Generative Neurons [87.32169414230822]
Operational neural networks (ONNs) are heterogeneous networks with a generalized neuron model that can encapsulate any set of non-linear operators.
We propose Self-organized ONNs (Self-ONNs) with generative neurons that have the ability to adapt (optimize) the nodal operator of each connection.
arXiv Detail & Related papers (2020-04-24T14:37:56Z)
- Network Adjustment: Channel Search Guided by FLOPs Utilization Ratio [101.84651388520584]
This paper presents a new framework named network adjustment, which considers network accuracy as a function of FLOPs.
Experiments on standard image classification datasets and a wide range of base networks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-06T15:51:00Z)