AutoFL: Enabling Heterogeneity-Aware Energy Efficient Federated Learning
- URL: http://arxiv.org/abs/2107.08147v1
- Date: Fri, 16 Jul 2021 23:41:26 GMT
- Title: AutoFL: Enabling Heterogeneity-Aware Energy Efficient Federated Learning
- Authors: Young Geun Kim and Carole-Jean Wu
- Abstract summary: Federated learning enables a cluster of decentralized mobile devices at the edge to collaboratively train a shared machine learning model.
This decentralized training approach is demonstrated as a practical solution to mitigate the risk of privacy leakage.
This paper jointly optimizes time-to-convergence and energy efficiency of state-of-the-art FL use cases.
- Score: 7.802192899666384
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning enables a cluster of decentralized mobile devices at the
edge to collaboratively train a shared machine learning model, while keeping
all the raw training samples on device. This decentralized training approach is
demonstrated as a practical solution to mitigate the risk of privacy leakage.
However, enabling efficient FL deployment at the edge is challenging because of
non-IID training data distribution, wide system heterogeneity and
stochastic-varying runtime effects in the field. This paper jointly optimizes
time-to-convergence and energy efficiency of state-of-the-art FL use cases by
taking into account the stochastic nature of edge execution. We propose AutoFL,
which tailor-designs a reinforcement learning algorithm that learns to select
the K participant devices and the per-device execution targets for each FL
model aggregation round in the presence of stochastic runtime variance, system
and data heterogeneity. By judiciously considering the unique characteristics
of FL edge deployment, AutoFL achieves 3.6 times faster model convergence and
4.7 and 5.2 times higher energy efficiency for local clients and for the
cluster of K participants globally, respectively.
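
To make the selection loop concrete, below is a minimal sketch of an RL-driven participant and execution-target selector in the spirit of AutoFL. The state features, the little-CPU/big-CPU/GPU action space, the Q-learning agent, and the joint time-plus-energy reward are illustrative assumptions for exposition, not the paper's exact formulation.

```python
# A minimal sketch of an RL-driven participant/execution-target selector in
# the spirit of AutoFL. State features, action space, reward shaping, and the
# Q-learning agent are illustrative assumptions, not the paper's exact design.
import random
from collections import defaultdict

ACTIONS = ["little-cpu", "big-cpu", "gpu"]  # assumed per-device execution targets

class Device:
    """Toy device with stand-in heterogeneity and stochastic runtime variance."""
    def __init__(self, dev_id):
        self.id = dev_id
        self.speed = random.uniform(0.5, 2.0)   # system-heterogeneity stand-in

    def state(self):
        # Coarse runtime-state bins (e.g. network quality, thermal headroom) -- assumed.
        return (random.randint(0, 2), random.randint(0, 2))

    def train_local(self, target):
        base = {"little-cpu": 3.0, "big-cpu": 1.5, "gpu": 1.0}[target]
        latency = base / self.speed * random.uniform(0.8, 1.5)  # stochastic variance
        energy = base * {"little-cpu": 1.0, "big-cpu": 2.0, "gpu": 2.5}[target]
        return latency, energy

class Agent:
    """Per-device epsilon-greedy Q-learning over (state, target) pairs."""
    def __init__(self, eps=0.1, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)
        self.eps, self.alpha, self.gamma = eps, alpha, gamma

    def act(self, s):
        if random.random() < self.eps:          # keep exploring runtime variance
            return random.randrange(len(ACTIONS))
        return max(range(len(ACTIONS)), key=lambda a: self.q[(s, a)])

    def update(self, s, a, r, s2):
        best = max(self.q[(s2, b)] for b in range(len(ACTIONS)))
        self.q[(s, a)] += self.alpha * (r + self.gamma * best - self.q[(s, a)])

devices = [Device(i) for i in range(20)]
agents = {d.id: Agent() for d in devices}
K = 5
for rnd in range(100):
    # Pick the K devices whose best-case Q-value currently looks most favorable.
    ranked = sorted(devices, key=lambda d: max(
        agents[d.id].q[(d.state(), a)] for a in range(len(ACTIONS))), reverse=True)
    for d in ranked[:K]:
        s = d.state()
        a = agents[d.id].act(s)
        latency, energy = d.train_local(ACTIONS[a])
        agents[d.id].update(s, a, -(latency + energy), d.state())  # joint time+energy penalty
```

A per-device agent lets the controller adapt to each device's own runtime variance rather than assuming a homogeneous fleet.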
Related papers
- Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT) assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z)
- Efficient Data Distribution Estimation for Accelerated Federated Learning [5.085889377571319]
Federated Learning (FL) is a privacy-preserving machine learning paradigm where a global model is trained in-situ across a large number of distributed edge devices.
Devices are highly heterogeneous in both their system resources and training data.
Various client selection algorithms have been developed, showing promising performance improvement in terms of model coverage and accuracy.
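
As a rough illustration of distribution-aware client selection (not this paper's estimator), the sketch below greedily picks clients whose estimated label histograms keep the pooled training distribution balanced; how the histograms are estimated is outside the sketch.

```python
# Hypothetical distribution-aware client selection: greedily maximize the
# entropy of the pooled label histogram. The histograms are assumed given.
import numpy as np

def select_clients(label_hists, k):
    """label_hists: (num_clients, num_classes) estimated per-client label counts."""
    pooled = np.zeros(label_hists.shape[1])
    chosen = []
    for _ in range(k):
        best, best_score = None, -np.inf
        for c in range(len(label_hists)):
            if c in chosen:
                continue
            p = (pooled + label_hists[c]) / (pooled + label_hists[c]).sum()
            score = -(p * np.log(p + 1e-12)).sum()  # entropy of pooled distribution
            if score > best_score:
                best, best_score = c, score
        chosen.append(best)
        pooled += label_hists[best]
    return chosen

# Example: 10 clients with skewed non-IID shards over 5 classes
rng = np.random.default_rng(0)
hists = np.stack([rng.multinomial(100, p)
                  for p in rng.dirichlet(np.ones(5) * 0.3, size=10)])
print(select_clients(hists, k=4))
```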
arXiv Detail & Related papers (2024-06-03T20:33:17Z)
- Adaptive Hybrid Model Pruning in Federated Learning through Loss Exploration [17.589308358508863]
We introduce AutoFLIP, an innovative approach that utilizes a federated loss exploration phase to drive adaptive hybrid pruning.
We show that AutoFLIP not only efficiently accelerates global convergence, but also achieves superior accuracy and robustness compared to traditional methods.
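
A hedged sketch of the loss-exploration idea: clients report a simple first-order saliency signal, and the server aggregates it into a global pruning mask. The saliency proxy and sparsity threshold below are assumptions, not AutoFLIP's exact procedure.

```python
# Loss-exploration-guided pruning, sketched: clients probe the loss landscape
# locally (here a |weight x gradient| saliency proxy -- an assumption), and the
# server averages those signals to mask out low-saliency parameters.
import numpy as np

def client_saliency(weights, grads):
    return np.abs(weights * grads)              # first-order sensitivity proxy

def build_global_mask(saliencies, sparsity=0.5):
    avg = np.mean(saliencies, axis=0)           # aggregate exploration signal
    cutoff = np.quantile(avg, sparsity)         # prune the bottom `sparsity` fraction
    return (avg > cutoff).astype(np.float32)

# Toy usage: 4 clients, 1000-parameter model
rng = np.random.default_rng(1)
w = rng.normal(size=1000)
sals = np.stack([client_saliency(w, rng.normal(size=1000)) for _ in range(4)])
w_pruned = w * build_global_mask(sals, sparsity=0.5)  # keep high-saliency weights
```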
arXiv Detail & Related papers (2024-05-16T17:27:41Z)
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
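
The layer-wise trick can be sketched as follows: because backpropagation produces gradients from the output layer backwards, a straggler cut off mid-pass still holds valid gradients for the deepest layers, and the server averages each layer over whichever clients delivered it. The fixed learning rate and dense gradients below are assumptions.

```python
# Straggler-aware layer-wise aggregation, sketched: each layer of the global
# model is updated from only the clients that finished backprop through it.
import numpy as np

def aggregate_layerwise(global_model, client_updates, lr=0.1):
    """global_model: list of per-layer arrays (input -> output order).
    client_updates: list of dicts {layer_idx: gradient}, possibly partial."""
    new_model = []
    for idx, layer in enumerate(global_model):
        grads = [u[idx] for u in client_updates if idx in u]
        if grads:  # average over the clients that reached this layer
            layer = layer - lr * np.mean(grads, axis=0)
        new_model.append(layer)  # left untouched if no client computed it
    return new_model

# Toy usage: 3-layer model; the straggler only finished the deepest layer
model = [np.ones((4, 4)) for _ in range(3)]
full = {0: np.ones((4, 4)), 1: np.ones((4, 4)), 2: np.ones((4, 4))}
partial = {2: np.ones((4, 4))}     # backprop was cut off after the output layer
model = aggregate_layerwise(model, [full, partial])
```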
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
- AEDFL: Efficient Asynchronous Decentralized Federated Learning with Heterogeneous Devices [61.66943750584406]
We propose AEDFL, an Asynchronous Efficient Decentralized FL framework for heterogeneous environments.
First, we propose an asynchronous FL system model with an efficient model aggregation method for improving the FL convergence.
Second, we propose a dynamic staleness-aware model update approach to achieve superior accuracy.
Third, we propose an adaptive sparse training method to reduce communication and computation costs without significant accuracy degradation.
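
The staleness-aware ingredient, in isolation, might look like the sketch below: asynchronous contributions are down-weighted the longer they lag behind the current model version. The exponential discount is an assumed choice, not necessarily AEDFL's.

```python
# Staleness-aware asynchronous update, sketched with an assumed exponential
# discount: the staler the contribution, the less it moves the model.
import numpy as np

def staleness_weight(current_version, update_version, decay=0.5):
    return decay ** (current_version - update_version)

def apply_async_update(model, update, current_version, update_version):
    return model + staleness_weight(current_version, update_version) * update

model = np.zeros(10)
model = apply_async_update(model, np.ones(10), current_version=7, update_version=5)
# staleness 2 -> weight 0.25: the lagging update nudges rather than overwrites
```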
arXiv Detail & Related papers (2023-12-18T05:18:17Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to address the limited computation and communication resources of heterogeneous edge devices.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
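
A minimal sketch of that split, with assumed layer names and a fixed split point: the shared body is pruned and averaged across devices, while the personalized head never leaves the device.

```python
# Global/personal model split, sketched. Layer names, the split point, and the
# pruning mask handling are assumptions for illustration.
import numpy as np

GLOBAL_LAYERS = ["conv1", "conv2"]   # representation layers: pruned + averaged
PERSONAL_LAYERS = ["head"]           # fine-tuned locally, never aggregated

def aggregate_global_part(client_models, mask):
    agg = {}
    for name in GLOBAL_LAYERS:
        stacked = np.stack([m[name] for m in client_models])
        agg[name] = stacked.mean(axis=0) * mask[name]  # keep only unpruned weights
    return agg

def merge_for_device(device_model, agg):
    merged = dict(device_model)
    merged.update(agg)               # refresh shared body, keep personal head
    return merged

rng = np.random.default_rng(2)
make = lambda: {"conv1": rng.normal(size=(3, 3)), "conv2": rng.normal(size=(3, 3)),
                "head": rng.normal(size=(3,))}
mask = {"conv1": np.ones((3, 3)), "conv2": np.ones((3, 3))}
clients = [make(), make()]
clients[0] = merge_for_device(clients[0], aggregate_global_part(clients, mask))
```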
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
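
Only the fronthaul-quantization ingredient is easy to sketch in isolation; the joint transceiver design and successive-convex-approximation optimizer are beyond a toy example. A plain uniform quantizer (an assumed stand-in) shows the rate-versus-accuracy trade-off being optimized:

```python
# Fronthaul quantization in isolation: a uniform quantizer for model-update
# payloads. The paper's joint transceiver/quantizer optimization is not modeled.
import numpy as np

def uniform_quantize(x, bits):
    lo, hi = x.min(), x.max()
    levels = 2 ** bits - 1
    q = np.round((x - lo) / (hi - lo + 1e-12) * levels)
    return q * (hi - lo) / levels + lo          # dequantized payload

payload = np.random.default_rng(3).normal(size=1024)
for bits in (2, 4, 8):
    err = np.abs(uniform_quantize(payload, bits) - payload).mean()
    print(f"{bits}-bit fronthaul: mean abs error {err:.4f}")  # rate vs. accuracy
```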
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Towards Fairer and More Efficient Federated Learning via Multidimensional Personalized Edge Models [36.84027517814128]
Federated learning (FL) trains models on massive, geographically distributed edge data while maintaining privacy.
We propose a Customized Federated Learning (CFL) system to eliminate FL heterogeneity from multiple dimensions.
CFL tailors a personalized model for each client from a specially designed global model, jointly guided by an online-trained model-search helper and a novel aggregation algorithm.
arXiv Detail & Related papers (2023-02-09T06:55:19Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distributions.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous Edge Devices [20.52519915112099]
We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates.
Experimental results indicate that our learning framework can reduce training latency and energy consumption by up to 1.9 times while achieving a reasonable global testing accuracy.
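
One plausible reading of "cost-adjustable" local updates, sketched below with an assumed cost model and configuration grid: each device picks the largest (model-width, local-steps) configuration whose predicted cost fits its budget.

```python
# Hypothetical cost-adjustable local update: choose the most capable
# (width fraction, local steps) configuration within a device's budget.
# The quadratic width cost model and the grid are assumptions.
def pick_config(budget, cost_per_step,
                configs=((1.0, 5), (0.5, 5), (0.5, 2), (0.25, 1))):
    """configs: (model width fraction, local steps), most to least capable."""
    for width, steps in configs:
        est = cost_per_step * steps * width ** 2  # compute ~quadratic in width
        if est <= budget:
            return width, steps
    return configs[-1]                            # weakest config as a floor

print(pick_config(budget=10.0, cost_per_step=1.0))  # strong device -> (1.0, 5)
print(pick_config(budget=3.0, cost_per_step=1.0))   # weaker device -> (0.5, 5)
```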
arXiv Detail & Related papers (2023-01-08T15:25:55Z)
- FedGPO: Heterogeneity-Aware Global Parameter Optimization for Efficient Federated Learning [11.093360539563657]
Federated learning (FL) has emerged as a solution to deal with the risk of privacy leaks in machine learning training.
We propose FedGPO to optimize the energy-efficiency of FL use cases while guaranteeing model convergence.
In our experiments, FedGPO improves the model convergence time by 2.4 times, and achieves 3.6 times higher energy efficiency over the baseline settings.
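
A hedged sketch of the global-parameter tuning loop: a simple epsilon-greedy bandit over (participants K, local epochs E) configurations, rewarded by accuracy gain per unit of energy. The configuration grid and reward shaping are illustrative assumptions, not FedGPO's exact algorithm.

```python
# Global-parameter tuning for energy-efficient FL, sketched as an
# epsilon-greedy bandit. Configuration grid and reward are assumptions.
import random

CONFIGS = [(5, 1), (5, 5), (10, 1), (10, 5)]   # (K, E) candidates -- assumed
values = {c: 0.0 for c in CONFIGS}
counts = {c: 0 for c in CONFIGS}

def choose(eps=0.2):
    if random.random() < eps:                   # explore alternative settings
        return random.choice(CONFIGS)
    return max(CONFIGS, key=values.get)

def record(cfg, acc_gain, energy):
    counts[cfg] += 1
    r = acc_gain / max(energy, 1e-9)            # energy efficiency of the round
    values[cfg] += (r - values[cfg]) / counts[cfg]

for rnd in range(50):
    k, e = choose()
    # ...run one FL round with K=k participants and E=e local epochs...
    acc_gain = random.random() * 0.01           # stand-in measurement
    energy = float(k * e)                       # stand-in energy cost
    record((k, e), acc_gain, energy)
```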
arXiv Detail & Related papers (2022-11-30T01:22:57Z)