Distributed Learning for Wi-Fi AP Load Prediction
- URL: http://arxiv.org/abs/2405.05140v1
- Date: Mon, 22 Apr 2024 16:37:35 GMT
- Title: Distributed Learning for Wi-Fi AP Load Prediction
- Authors: Dariush Salami, Francesc Wilhelmi, Lorenzo Galati-Giordano, Mika Kasslin
- Abstract summary: We study the application of the two cornerstones of distributed learning, namely Federated Learning (FL) and Knowledge Distillation (KD), on the Wi-Fi Access Point (AP) load prediction use case.
We prove that distributed learning can improve the predictive accuracy of centralized ML solutions by up to 93% while reducing the communication overheads and the energy cost by 80%.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The increasing cloudification and softwarization of networks foster the interplay among multiple independently managed deployments. An appealing reason for such an interplay lies in distributed Machine Learning (ML), which allows the creation of robust ML models by leveraging collective intelligence and computational power. In this paper, we study the application of the two cornerstones of distributed learning, namely Federated Learning (FL) and Knowledge Distillation (KD), on the Wi-Fi Access Point (AP) load prediction use case. The analysis conducted in this paper is done on a dataset that contains real measurements from a large Wi-Fi campus network, which we use to train the ML model under study based on different strategies. The performance evaluation covers aspects relevant to the suitability of distributed learning in real use cases, including the predictive performance, the associated communication overheads, and the energy consumption. In particular, we prove that distributed learning can improve the predictive accuracy of centralized ML solutions by up to 93% while reducing the communication overheads and the energy cost by 80%.
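To make the two techniques concrete, below is a minimal, self-contained sketch of FedAvg-style training across independently managed deployments, followed by a distillation step in which a compact student fits the aggregated teacher's predictions. The linear lag model, synthetic load traces, and hyperparameters are illustrative assumptions and do not reproduce the paper's actual model or dataset.

```python
# A minimal sketch of the two techniques the paper studies, applied to
# AP load prediction: Federated Averaging (FL) and Knowledge Distillation (KD).
# The linear model, synthetic traces, and hyperparameters are illustrative
# assumptions; the paper's actual model and dataset are not reproduced here.
import numpy as np

rng = np.random.default_rng(42)
LAGS = 4  # predict the next load value from the previous LAGS measurements

def make_ap_traces(n=200):
    """Synthetic per-deployment load trace with a daily-like periodic pattern."""
    t = np.linspace(0, 8 * np.pi, n + LAGS)
    load = np.abs(np.sin(t) + 0.1 * rng.standard_normal(t.size))
    X = np.stack([load[i:i + LAGS] for i in range(n)])
    return X, load[LAGS:]

def local_sgd(w, X, y, lr=0.01, epochs=5):
    """Local training on one deployment's private data (never shared)."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            w = w - lr * 2 * (xi @ w - yi) * xi
    return w

deployments = [make_ap_traces() for _ in range(5)]
w_global = np.zeros(LAGS)

# --- Federated Learning: only model weights travel, never raw measurements.
for _ in range(20):
    local_weights = [local_sgd(w_global.copy(), X, y) for X, y in deployments]
    w_global = np.mean(local_weights, axis=0)  # FedAvg aggregation

# --- Knowledge Distillation: a student fits the teacher's *predictions*,
# so only model outputs (not weights or data) need to be exchanged.
X_pub = np.vstack([X for X, _ in deployments])  # shared unlabeled inputs
soft_targets = X_pub @ w_global                 # teacher outputs
w_student = np.linalg.lstsq(X_pub, soft_targets, rcond=None)[0]

mse = np.mean([np.mean((X @ w_student - y) ** 2) for X, y in deployments])
print(f"distilled student MSE across deployments: {mse:.4f}")
```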
Related papers
- FedMSE: Federated learning for IoT network intrusion detection
The rise of IoT has expanded the cyber attack surface, making traditional centralized machine learning methods insufficient due to concerns about data availability, computational resources, transfer costs, and especially privacy preservation.
A semi-supervised federated learning model was developed to overcome these issues, combining the Shrink Autoencoder and Centroid one-class classifier (SAE-CEN).
This approach enhances the performance of intrusion detection by effectively representing normal network data and accurately identifying anomalies in the decentralized strategy.
arXiv Detail & Related papers (2024-10-18T02:23:57Z)
- Device Sampling and Resource Optimization for Federated Learning in Cooperative Edge Networks
Federated learning (FedL) distributes machine learning (ML) across worker devices by having them train local models that are periodically aggregated by a server.
FedL ignores two important characteristics of contemporary wireless networks: (i) the network may contain heterogeneous communication/computation resources, and (ii) there may be significant overlaps in devices' local data distributions.
We develop a novel optimization methodology that jointly accounts for these factors via intelligent device sampling complemented by device-to-device (D2D) offloading.
arXiv Detail & Related papers (2023-11-07T21:17:59Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve the resulting intractable problem, in which we provide closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Enhanced Decentralized Federated Learning based on Consensus in Connected Vehicles
Federated learning (FL) is emerging as a new paradigm to train machine learning (ML) models in distributed systems.
We introduce C-DFL (Consensus based Decentralized Federated Learning) to tackle federated learning on connected vehicles.
arXiv Detail & Related papers (2022-09-22T01:21:23Z)
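The consensus mechanism underlying serverless approaches like C-DFL can be illustrated with a short sketch: nodes repeatedly average their model weights with neighbors over a connectivity graph, with no central aggregator. The ring topology and mixing weights below are illustrative assumptions, not the paper's setup.

```python
# A minimal sketch of consensus-based decentralized FL in the spirit of
# C-DFL: vehicles hold local models and repeatedly average with their
# neighbors over the connectivity graph, with no central server.
import numpy as np

rng = np.random.default_rng(7)
n_vehicles, dim = 6, 8

# Ring topology: each vehicle talks to its two neighbors.
W = np.zeros((n_vehicles, n_vehicles))
for i in range(n_vehicles):
    W[i, i] = 0.5
    W[i, (i - 1) % n_vehicles] = 0.25
    W[i, (i + 1) % n_vehicles] = 0.25  # doubly stochastic mixing matrix

models = rng.normal(size=(n_vehicles, dim))  # locally trained weights

for step in range(50):
    # In a full system, a local SGD step would happen here before mixing.
    models = W @ models  # one consensus (gossip) averaging round

spread = np.max(np.linalg.norm(models - models.mean(axis=0), axis=1))
print(f"max distance to consensus after 50 rounds: {spread:.2e}")
```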
- Mobility-Aware Cluster Federated Learning in Hierarchical Wireless Networks
We develop a theoretical model to characterize the hierarchical federated learning (HFL) algorithm in wireless networks.
Our analysis proves that the learning performance of HFL deteriorates drastically with highly-mobile users.
To circumvent these issues, we propose a mobility-aware cluster federated learning (MACFL) algorithm.
arXiv Detail & Related papers (2021-08-20T10:46:58Z)
- Distributed Learning in Wireless Networks: Recent Progress and Future Challenges
Next-generation wireless networks will enable many machine learning (ML) tools and applications to analyze various types of data collected by edge devices.
Distributed learning and inference techniques have been proposed as a means to enable edge devices to collaboratively train ML models without raw data exchanges.
This paper provides a comprehensive study of how distributed learning can be efficiently and effectively deployed over wireless edge networks.
arXiv Detail & Related papers (2021-04-05T20:57:56Z)
- Edge-assisted Democratized Learning Towards Federated Analytics
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
- Ternary Compression for Communication-Efficient Federated Learning
Federated learning provides a potential solution to privacy-preserving and secure machine learning.
We propose a ternary federated averaging protocol (T-FedAvg) to reduce the upstream and downstream communication of federated learning systems.
Our results show that the proposed T-FedAvg is effective in reducing communication costs and can even achieve slightly better performance on non-IID data.
arXiv Detail & Related papers (2020-03-07T11:55:34Z)
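The communication saving behind ternary schemes such as T-FedAvg can be sketched as follows: each client maps its dense float update to {-1, 0, +1} codes plus one scale factor before transmission. The thresholding rule and scale choice here are illustrative assumptions, not the exact protocol from the paper.

```python
# A minimal sketch of ternary weight-update compression in the spirit of
# T-FedAvg: each client sends only {-1, 0, +1} codes plus one scale factor,
# shrinking upstream and downstream traffic.
import numpy as np

def ternarize(update, sparsity=0.7):
    """Map a dense float update to {-delta, 0, +delta} codes."""
    threshold = np.quantile(np.abs(update), sparsity)  # keep largest values
    codes = np.sign(update) * (np.abs(update) >= threshold)
    delta = np.abs(update[codes != 0]).mean() if codes.any() else 0.0
    return codes.astype(np.int8), float(delta)

def dequantize(codes, delta):
    return codes.astype(np.float64) * delta

rng = np.random.default_rng(0)
update = rng.normal(scale=0.05, size=10_000)  # a client's model delta

codes, delta = ternarize(update)
restored = dequantize(codes, delta)

# int8 codes (1 byte each) vs float64 (8 bytes): ~8x smaller even before
# entropy coding of the mostly-zero tensor.
err = np.linalg.norm(update - restored) / np.linalg.norm(update)
print(f"relative reconstruction error: {err:.3f}")
```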
- Private and Communication-Efficient Edge Learning: A Sparse Differential Gaussian-Masking Distributed SGD Approach
We propose a new decentralized gradient method for distributed edge learning.
We show that SDM-DSGD improves the fundamental training-privacy trade-off by two orders of magnitude.
arXiv Detail & Related papers (2020-01-12T03:04:45Z)
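The gradient-masking idea can likewise be sketched: each node clips its gradient, keeps only the top-k entries by magnitude, and perturbs them with Gaussian noise before sharing. The sparsity fraction and noise scale below are illustrative assumptions, not the paper's calibrated privacy parameters.

```python
# A minimal sketch of sparse, Gaussian-masked gradient sharing in the spirit
# of SDM-DSGD: each node sparsifies its gradient and adds Gaussian noise
# before exchanging it with neighbors.
import numpy as np

rng = np.random.default_rng(1)

def sparse_gaussian_mask(grad, k_frac=0.1, sigma=0.01, clip=1.0):
    """Clip, keep top-k entries by magnitude, add Gaussian noise to them."""
    grad = grad * min(1.0, clip / (np.linalg.norm(grad) + 1e-12))  # L2 clipping
    k = max(1, int(k_frac * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]  # indices of top-k entries
    masked = np.zeros_like(grad)
    masked[idx] = grad[idx] + sigma * rng.standard_normal(k)
    return masked

grad = rng.normal(size=1000)
shared = sparse_gaussian_mask(grad)
print(f"nonzero entries shared: {np.count_nonzero(shared)} / {grad.size}")
```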
This list is automatically generated from the titles and abstracts of the papers on this site.