AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous Edge Devices
- URL: http://arxiv.org/abs/2301.03062v1
- Date: Sun, 8 Jan 2023 15:25:55 GMT
- Title: AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous Edge Devices
- Authors: Peichun Li, Guoliang Cheng, Xumin Huang, Jiawen Kang, Rong Yu, Yuan Wu, Miao Pan
- Abstract summary: We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates.
Experimental results indicate that our learning framework can reduce training latency and energy consumption by up to 1.9 times while achieving a reasonable global testing accuracy.
- Score: 20.52519915112099
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work, we investigate the challenging problem of on-demand federated learning (FL) over heterogeneous edge devices with diverse resource constraints. We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates under a wide range of efficiency constraints. To this end, we design model shrinking to support local model training with elastic computation cost, and gradient compression to allow parameter transmission with dynamic communication overhead. An enhanced parameter aggregation is conducted in an element-wise manner to improve the model performance. Building on AnycostFL, we further propose an optimization design to minimize the global training loss under personalized latency and energy constraints. Guided by theoretical insights from the convergence analysis, personalized training strategies are derived for different devices to match their locally available resources. Experimental results indicate that, compared to state-of-the-art efficient FL algorithms, our learning framework reduces training latency and energy consumption by up to 1.9 times while achieving a reasonable global testing accuracy. Moreover, the results demonstrate that our approach significantly improves the converged global accuracy.
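The abstract names three mechanisms: model shrinking for elastic computation, gradient compression for dynamic communication overhead, and element-wise parameter aggregation. Below is a minimal NumPy sketch of how these pieces could fit together; the function names, the width-based shrinking rule, and the top-k compression rule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def shrink_model(weights, keep_ratio):
    """Model shrinking (illustrative): keep only the leading fraction of
    units per layer, so a constrained device trains a smaller sub-model
    of the shared architecture at elastic computation cost."""
    return [w[: max(1, int(w.shape[0] * keep_ratio))].copy() for w in weights]

def compress_gradient(grad, keep_ratio):
    """Top-k sparsification, one common realization of gradient
    compression with dynamic communication overhead: send only the
    largest-magnitude entries together with their indices."""
    flat = grad.ravel()
    k = max(1, int(flat.size * keep_ratio))
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def elementwise_aggregate(updates, shape):
    """Element-wise aggregation: average each coordinate only over the
    clients that actually reported it, so differently sized sub-models
    and sparse gradients combine consistently."""
    acc = np.zeros(int(np.prod(shape)))
    cnt = np.zeros(int(np.prod(shape)))
    for idx, vals in updates:
        acc[idx] += vals
        cnt[idx] += 1
    acc[cnt > 0] /= cnt[cnt > 0]
    return acc.reshape(shape)

# Example: two clients report top-k compressed gradients of one layer.
grad_a, grad_b = np.random.randn(8, 4), np.random.randn(8, 4)
updates = [compress_gradient(grad_a, 0.25), compress_gradient(grad_b, 0.5)]
merged = elementwise_aggregate(updates, grad_a.shape)
```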
Related papers
- Adaptive Decentralized Federated Learning in Energy and Latency Constrained Wireless Networks [4.03161352925235]
In Federated Learning (FL), with parameters aggregated by a central node, the communication overhead is a substantial concern.
Recent studies have introduced Decentralized Federated Learning (DFL) as a viable alternative.
We formulate a problem that minimizes the loss function of DFL while considering energy and latency constraints.
arXiv Detail & Related papers (2024-03-29T09:17:40Z)
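Since the DFL entry above removes the central aggregator, a round reduces to each node averaging with its neighbors. A minimal sketch, assuming a doubly stochastic mixing matrix over a small fully connected topology (both the matrix and the topology are illustrative):

```python
import numpy as np

def dfl_mixing_step(local_models, mixing_matrix):
    """One decentralized round: every node replaces its model with a
    weighted combination of its neighbors' models; no central server
    is involved, which removes the single-node communication bottleneck."""
    stacked = np.stack(local_models)        # (n_nodes, dim)
    return list(mixing_matrix @ stacked)    # one updated model per node

# Example: 3 nodes with uniform self/neighbor weights (doubly stochastic).
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
models = dfl_mixing_step([np.random.randn(10) for _ in range(3)], W)
```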
- Federated Learning of Large Language Models with Parameter-Efficient Prompt Tuning and Adaptive Optimization [71.87335804334616]
Federated learning (FL) is a promising paradigm to enable collaborative model training with decentralized data.
Training Large Language Models (LLMs) generally requires updating a significant number of parameters.
This paper proposes an efficient partial prompt tuning approach to improve performance and efficiency simultaneously.
arXiv Detail & Related papers (2023-10-23T16:37:59Z)
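For the prompt-tuning entry above, the key efficiency idea is that only small soft-prompt parameters are trained and exchanged while the LLM backbone stays frozen. A hedged sketch of one way "partial" prompt tuning could be realized; updating only a slice of the virtual tokens is our assumption, not necessarily the paper's scheme:

```python
import numpy as np

class SoftPrompt:
    """Illustrative soft-prompt module: trainable virtual-token
    embeddings prepended to the input of a frozen LLM."""
    def __init__(self, n_tokens=16, dim=768, trainable_tokens=4):
        self.emb = np.zeros((n_tokens, dim))
        # "Partial" tuning, by assumption: only the first few virtual
        # tokens are updated and exchanged; the rest stay fixed.
        self.trainable = slice(0, trainable_tokens)

    def client_payload(self):
        # Only the tuned slice travels to the server, so per-round
        # traffic is independent of the LLM's parameter count.
        return self.emb[self.trainable].copy()

    def apply_server_update(self, averaged):
        self.emb[self.trainable] = averaged

# FedAvg over the tiny prompt payloads only; LLM weights never move.
clients = [SoftPrompt() for _ in range(4)]
averaged = np.mean([c.client_payload() for c in clients], axis=0)
for c in clients:
    c.apply_server_update(averaged)
```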
- Filling the Missing: Exploring Generative AI for Enhanced Federated Learning over Heterogeneous Mobile Edge Devices [72.61177465035031]
We propose a generative AI-empowered federated learning framework to address these challenges by leveraging the idea of FIlling the MIssing (FIMI) portion of local data.
Experimental results demonstrate that FIMI can save up to 50% of the device-side energy while achieving the target global test accuracy.
arXiv Detail & Related papers (2023-10-21T12:07:04Z)
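A sketch of the FIMI idea from the entry above: before local training, each device tops up underrepresented classes with synthetic samples. The `generate` callable is a stand-in for whatever conditional generative model is used, and the per-class target is an illustrative knob:

```python
import numpy as np

def fill_missing(local_x, local_y, target_per_class, generate):
    """FIMI-style filling (sketch): for each underrepresented class,
    synthesize the missing samples with a generative model so local
    training sees a more balanced dataset. `generate(label, n)` is a
    placeholder for any conditional generator."""
    xs, ys = [local_x], [local_y]
    for c in np.unique(local_y):
        missing = target_per_class - int((local_y == c).sum())
        if missing > 0:
            xs.append(generate(c, missing))
            ys.append(np.full(missing, c))
    return np.concatenate(xs), np.concatenate(ys)

# Example with a stand-in random "generator".
gen = lambda label, n: np.random.randn(n, 32)
x, y = fill_missing(np.random.randn(20, 32),
                    np.random.randint(0, 2, size=20), 15, gen)
```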
- Sample-Driven Federated Learning for Energy-Efficient and Real-Time IoT Sensing [22.968661040226756]
We introduce an online reinforcement learning algorithm named Sample-driven Control for Federated Learning (SCFL), built on the Soft Actor-Critic (SAC) framework.
SCFL enables the agent to dynamically adapt and find the global optimum even in changing environments.
arXiv Detail & Related papers (2023-10-11T13:50:28Z)
- Semi-Federated Learning: Convergence Analysis and Optimization of A Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z)
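The SemiFL entry combines centralized learning at the base station with federated updates from devices. A minimal sketch of one hybrid round under that reading; the mixing weight `alpha` and the update rule are assumptions, and the paper's beamforming optimization is omitted entirely:

```python
import numpy as np

def semifl_round(global_w, device_updates, bs_gradient, lr=0.1, alpha=0.5):
    """One hybrid round (sketch): blend the federated average of device
    updates (FL) with a gradient step computed at the base station on
    its own collected data (CL). `alpha` trades CL against FL and is an
    illustrative knob, not the paper's optimized weighting."""
    fl_direction = np.mean(device_updates, axis=0)   # FedAvg of deltas
    cl_direction = -lr * bs_gradient                 # centralized step
    return global_w + alpha * cl_direction + (1 - alpha) * fl_direction
```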
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
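The split described in the pruning-and-personalization entry is straightforward to sketch: a pruned global part aggregated across devices and a personal part that never leaves the device. Magnitude pruning here is one common choice, not necessarily the paper's:

```python
import numpy as np

def prune_global_part(global_w, keep_ratio):
    """Magnitude pruning of the shared representation part
    (illustrative): zero out the smallest weights so every device
    trains and transmits a sparser global component."""
    k = int(global_w.size * keep_ratio)
    if k == 0:
        return np.zeros_like(global_w)
    thresh = np.partition(np.abs(global_w).ravel(), -k)[-k]
    return np.where(np.abs(global_w) >= thresh, global_w, 0.0)

class SplitModel:
    """Global part: pruned and aggregated across all devices to learn
    shared data representations. Personal part: fine-tuned locally for
    one device and never aggregated."""
    def __init__(self, global_w, personal_w):
        self.global_w = global_w
        self.personal_w = personal_w
```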
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
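The variable-bitwidth entry quantizes both the uplink models and the global model. A sketch of one such round using plain uniform symmetric quantization; the paper's exact quantizer and its RL-driven bitwidth selection are not reproduced here:

```python
import numpy as np

def quantize(w, bits):
    """Uniform symmetric quantization to `bits` (assumes bits >= 2);
    a standard scheme, not necessarily the paper's exact quantizer."""
    levels = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / levels
    if scale == 0.0:
        return w.copy(), 1.0
    return np.round(w / scale), scale

def dequantize(q, scale):
    return q * scale

def bitwidth_fl_round(local_weights, bits_per_device):
    """Devices upload quantized models at individual bitwidths; the
    server dequantizes, averages, and re-quantizes the global model
    for broadcast, mirroring the scheme summarized above."""
    recovered = [dequantize(*quantize(w, b))
                 for w, b in zip(local_weights, bits_per_device)]
    global_w = np.mean(recovered, axis=0)
    return dequantize(*quantize(global_w, max(bits_per_device)))

# Example: three devices with heterogeneous bitwidths.
ws = [np.random.randn(16) for _ in range(3)]
new_global = bitwidth_fl_round(ws, bits_per_device=[4, 6, 8])
```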
- Cost-Effective Federated Learning in Mobile Edge Networks [37.16466118235272]
Federated learning (FL) is a distributed learning paradigm that enables a large number of mobile devices to collaboratively learn a model without sharing their raw data.
We analyze how to design adaptive FL in mobile edge networks that optimally chooses essential control variables to minimize the total cost.
We develop a low-cost sampling-based algorithm to learn the convergence-related unknown parameters.
arXiv Detail & Related papers (2021-09-12T03:02:24Z)
- Accelerating Federated Learning with a Global Biased Optimiser [16.69005478209394]
Federated Learning (FL) is a recent development in the field of machine learning that collaboratively trains models without the training data leaving client devices.
We propose a novel, generalised approach for applying adaptive optimisation techniques to FL with the Federated Global Biased Optimiser (FedGBO) algorithm.
FedGBO accelerates FL by applying a set of global biased optimiser values during the local training phase of FL, which helps to reduce 'client-drift' from non-IID data.
arXiv Detail & Related papers (2021-08-20T12:08:44Z)
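The FedGBO idea, as summarized above, is that clients run local steps with optimiser statistics fixed by the server rather than locally accumulated ones. A sketch using momentum-SGD, one member of the optimiser family the paper generalizes over:

```python
import numpy as np

def fedgbo_local_phase(w, global_momentum, local_grads, lr=0.01, beta=0.9):
    """Local phase (sketch): the client applies the *server-supplied*
    momentum buffer during its SGD steps instead of locally initialized
    statistics, so all clients share one optimiser bias and drift apart
    less on non-IID data."""
    m = global_momentum.copy()
    for g in local_grads:            # one gradient per local step
        m = beta * m + g
        w = w - lr * m
    return w, m

def refresh_global_momentum(client_momenta):
    """Server side: refresh the global optimiser values from the
    clients' final buffers (averaging is an illustrative rule)."""
    return np.mean(client_momenta, axis=0)
```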
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
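For the AdaFL entry, an attention-style aggregation can be sketched as a softmax over per-client scores. The distance-based score below, its sign, and the temperature are all assumptions standing in for the paper's actual attention mechanism:

```python
import numpy as np

def adafl_aggregate(global_w, client_ws, temperature=1.0):
    """Attention-style aggregation (sketch): softmax-normalized weights
    over per-client scores. Here a more-divergent client receives a
    larger weight so its information is not washed out; flipping the
    sign would instead down-weight outliers. Both choices are
    assumptions."""
    dists = np.array([np.linalg.norm(w - global_w) for w in client_ws])
    scores = np.exp(dists / temperature)
    attn = scores / scores.sum()
    return sum(a * w for a, w in zip(attn, client_ws))
```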
- Accelerating Federated Learning over Reliability-Agnostic Clients in Mobile Edge Computing Systems [15.923599062148135]
Federated learning has emerged as a promising privacy-preserving approach to facilitating AI applications.
It remains a major challenge to optimize the efficiency and effectiveness of FL when it is integrated with the MEC architecture.
In this paper, a multi-layer federated learning protocol called HybridFL is designed for the MEC architecture.
arXiv Detail & Related papers (2020-07-28T17:35:39Z)