FedFog: Network-Aware Optimization of Federated Learning over Wireless
Fog-Cloud Systems
- URL: http://arxiv.org/abs/2107.02755v1
- Date: Sun, 4 Jul 2021 08:03:15 GMT
- Title: FedFog: Network-Aware Optimization of Federated Learning over Wireless
Fog-Cloud Systems
- Authors: Van-Dinh Nguyen, Symeon Chatzinotas, Bjorn Ottersten, and Trung Q.
Duong
- Abstract summary: Federated learning (FL) is capable of performing large distributed machine learning tasks across multiple edge users by periodically aggregating trained local parameters.
We first propose an efficient FL algorithm (called FedFog) to perform the local aggregation of gradient parameters at fog servers and global training update at the cloud.
- Score: 40.421253127588244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is capable of performing large distributed machine
learning tasks across multiple edge users by periodically aggregating trained
local parameters. To address key challenges of enabling FL over a wireless
fog-cloud system (e.g., non-i.i.d. data, users' heterogeneity), we first
propose an efficient FL algorithm (called FedFog) to perform the local
aggregation of gradient parameters at fog servers and global training update at
the cloud. Next, we employ FedFog in wireless fog-cloud systems by
investigating a novel network-aware FL optimization problem that strikes the
balance between the global loss and completion time. An iterative algorithm is
then developed to obtain a precise measurement of the system performance, which
helps design an efficient stopping criterion that outputs an appropriate number of
global rounds. To mitigate the straggler effect, we propose a flexible user
aggregation strategy that trains fast users first to obtain a certain level of
accuracy before allowing slow users to join the global training updates.
Extensive numerical results using several real-world FL tasks are provided to
verify the theoretical convergence of FedFog. We also show that the proposed
co-design of FL and communication is essential to substantially improve
resource utilization while achieving comparable accuracy of the learning model.
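To make the pipeline concrete, below is a minimal Python sketch of the two mechanisms the abstract describes: fog servers aggregating their users' gradients before a single cloud-side global update, and the flexible aggregation that admits slow users only after fast users reach a given accuracy level. The toy quadratic loss, the latency model, the accuracy proxy, and all constants are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of FedFog-style hierarchical training, assuming a toy
# quadratic loss per user; names, constants, the latency model, and the
# accuracy proxy are illustrative, not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
DIM, N_FOGS, USERS_PER_FOG = 5, 3, 4
LR = 0.1
ACC_GATE = 0.5  # hypothetical accuracy level at which slow users join

# Non-i.i.d. toy data: users share a common signal plus user-specific offsets.
base = 3.0 * rng.normal(size=DIM)
targets = base + 0.3 * rng.normal(size=(N_FOGS, USERS_PER_FOG, DIM))
latency = rng.uniform(0.1, 2.0, size=(N_FOGS, USERS_PER_FOG))  # seconds
w = np.zeros(DIM)  # global model held at the cloud

def local_gradient(w, target):
    # Gradient of the toy local loss 0.5 * ||w - target||^2.
    return w - target

def accuracy_proxy(w):
    # Stand-in "accuracy": approaches 1 as w nears the user optima.
    return 1.0 / (1.0 + np.mean(np.linalg.norm(w - targets, axis=-1)))

for rnd in range(200):
    # Flexible user aggregation: until the gate is reached, only users
    # faster than the median latency participate (fast users train first).
    fast_only = accuracy_proxy(w) < ACC_GATE
    cutoff = np.median(latency)
    fog_aggregates = []
    for f in range(N_FOGS):  # each fog server aggregates its own users
        grads = [local_gradient(w, targets[f, u])
                 for u in range(USERS_PER_FOG)
                 if not fast_only or latency[f, u] <= cutoff]
        if grads:
            fog_aggregates.append(np.mean(grads, axis=0))
    # Cloud performs the global training update from the fog aggregates.
    w -= LR * np.mean(fog_aggregates, axis=0)

print("toy accuracy after training:", round(accuracy_proxy(w), 3))
```

Note that in this structure only the per-fog aggregates cross the fog-to-cloud link rather than every user's gradient, which is the kind of resource saving the proposed co-design of FL and communication targets.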
Related papers
- FLrce: Resource-Efficient Federated Learning with Early-Stopping Strategy [7.963276533979389]
Federated Learning (FL) has achieved great popularity in the Internet of Things (IoT).
We present FLrce, an efficient FL framework with a relationship-based client selection and early-stopping strategy.
Experiment results show that, compared with existing efficient FL frameworks, FLrce improves computation and communication efficiency by at least 30% and 43%, respectively.
arXiv Detail & Related papers (2023-10-15T10:13:44Z) - FedNAR: Federated Optimization with Normalized Annealing Regularization [54.42032094044368]
We explore the choices of weight decay and identify that the weight decay value appreciably influences the convergence of existing FL algorithms.
We develop Federated Optimization with Normalized Annealing Regularization (FedNAR), a plug-in that can be seamlessly integrated into any existing FL algorithm.
arXiv Detail & Related papers (2023-10-04T21:11:40Z) - Wirelessly Powered Federated Learning Networks: Joint Power Transfer,
Data Sensing, Model Training, and Resource Allocation [24.077525032187893]
Federated learning (FL) has found many successes in wireless networks.
However, the implementation of FL has been hindered by the energy limitation of mobile devices (MDs) and the limited availability of training data at MDs.
The paper studies how to integrate wireless power transfer to build sustainable FL networks.
arXiv Detail & Related papers (2023-08-09T13:38:58Z) - FLCC: Efficient Distributed Federated Learning on IoMT over CSMA/CA [0.0]
Federated Learning (FL) has emerged as a promising approach for privacy preservation.
This article investigates the performance of FL on an application that might be used to improve a remote healthcare system over ad hoc networks.
We present two metrics to evaluate the network performance: 1) the probability of successful transmission while minimizing interference, and 2) the performance of the distributed FL model in terms of accuracy and loss.
arXiv Detail & Related papers (2023-03-29T16:36:42Z) - Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation
and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z) - FedGPO: Heterogeneity-Aware Global Parameter Optimization for Efficient
Federated Learning [11.093360539563657]
Federated learning (FL) has emerged as a solution to deal with the risk of privacy leaks in machine learning training.
We propose FedGPO to optimize the energy-efficiency of FL use cases while guaranteeing model convergence.
In our experiments, FedGPO improves the model convergence time by a factor of 2.4 and achieves 3.6 times higher energy efficiency than the baseline settings.
arXiv Detail & Related papers (2022-11-30T01:22:57Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Federated Dynamic Sparse Training: Computing Less, Communicating Less,
Yet Learning Better [88.28293442298015]
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
We develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST).
FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network; see the sketch after this list.
arXiv Detail & Related papers (2021-12-18T02:26:38Z) - EdgeML: Towards Network-Accelerated Federated Learning over Wireless
Edge [11.49608766562657]
Federated learning (FL) is a distributed machine learning technology for next-generation AI systems.
This paper aims to accelerate FL convergence over wireless edge by optimizing the multi-hop federated networking performance.
arXiv Detail & Related papers (2021-10-14T14:06:57Z) - Convergence Time Optimization for Federated Learning over Wireless
Networks [160.82696473996566]
A wireless network is considered in which wireless users transmit their local FL models (trained using their locally collected data) to a base station (BS).
The BS, acting as a central controller, generates a global FL model using the received local FL models and broadcasts it back to all users.
Due to the limited number of resource blocks (RBs) in a wireless network, only a subset of users can be selected to transmit their local FL model parameters to the BS.
Since each user has unique training data samples, the BS prefers to include all local user FL models to generate a converged global FL model.
arXiv Detail & Related papers (2020-01-22T01:55:12Z)
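The last entry above describes a concrete scheduling constraint: with a limited number of resource blocks (RBs), the BS can only collect a subset of the local models each round. A hedged sketch of that uplink step follows; picking the fastest uploaders and weighting by local data size are my own simplifications, not necessarily the paper's joint optimization.

```python
# Hedged sketch of the RB-limited uplink round: at most N_RBS users are
# scheduled, their local models are aggregated FedAvg-style, and the result
# is broadcast back. The fastest-first selection rule is an assumption.
import numpy as np

rng = np.random.default_rng(1)
N_USERS, N_RBS, DIM = 10, 3, 4

local_models = rng.normal(size=(N_USERS, DIM))      # users' trained local models
n_samples = rng.integers(50, 500, size=N_USERS)     # local dataset sizes
upload_time = rng.uniform(0.5, 5.0, size=N_USERS)   # per-user upload latency

# Only a subset fitting into the RBs is scheduled (here: the fastest users).
selected = np.argsort(upload_time)[:N_RBS]

# Aggregation weighted by local data size, then broadcast as the global model.
weights = n_samples[selected] / n_samples[selected].sum()
global_model = weights @ local_models[selected]
print("scheduled users:", selected, "-> global model:", global_model.round(2))
```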
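For the FedDST entry referenced earlier ("see the sketch after this list"), here is a toy illustration of extracting and training a sparse sub-network via a magnitude mask. The mask rule, the single layer, and the random stand-in gradients are assumptions; the paper's actual mask-adjustment schedule is more elaborate.

```python
# Toy illustration of the FedDST idea: extract a sparse sub-network from the
# full network and train only its active weights. Magnitude-based masking
# and random stand-in gradients are assumptions, not the paper's method.
import numpy as np

rng = np.random.default_rng(2)
full_weights = rng.normal(size=(8, 8))  # the target full network (one layer)
SPARSITY = 0.8                          # keep only the top 20% of weights

def extract_mask(w, sparsity):
    # Keep the largest-magnitude entries; zero out the rest.
    k = int(w.size * (1.0 - sparsity))
    thresh = np.sort(np.abs(w), axis=None)[-k]
    return (np.abs(w) >= thresh).astype(w.dtype)

mask = extract_mask(full_weights, SPARSITY)
for step in range(10):
    grad = rng.normal(size=full_weights.shape)  # stand-in local gradient
    full_weights -= 0.01 * grad * mask          # train only the sub-network

print("active weights:", int(mask.sum()), "of", mask.size)
```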