DynamicFL: Balancing Communication Dynamics and Client Manipulation for Federated Learning
- URL: http://arxiv.org/abs/2308.06267v1
- Date: Sun, 16 Jul 2023 19:09:31 GMT
- Title: DynamicFL: Balancing Communication Dynamics and Client Manipulation for Federated Learning
- Authors: Bocheng Chen, Nikolay Ivanov, Guangjing Wang, Qiben Yan
- Abstract summary: Federated Learning (FL) aims to train a global model by exploiting the decentralized data across millions of edge devices.
Given the geo-distributed edge devices with highly dynamic networks in the wild, aggregating all the model updates from those participating devices will result in inevitable long-tail delays in FL.
We propose a novel FL framework, DynamicFL, by considering the communication dynamics and data quality across massive edge devices with a specially designed client manipulation strategy.
- Score: 6.9138560535971605
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated Learning (FL) is a distributed machine learning (ML) paradigm,
aiming to train a global model by exploiting the decentralized data across
millions of edge devices. Compared with centralized learning, FL preserves the
clients' privacy by refraining from explicitly downloading their data. However,
given the geo-distributed edge devices (e.g., mobile, car, train, or subway)
with highly dynamic networks in the wild, aggregating all the model updates
from those participating devices will result in inevitable long-tail delays in
FL. This will significantly degrade the efficiency of the training process. To
resolve the high system heterogeneity in time-sensitive FL scenarios, we
propose a novel FL framework, DynamicFL, by considering the communication
dynamics and data quality across massive edge devices with a specially designed
client manipulation strategy. DynamicFL actively selects clients for model
updating based on network predictions from each client's dynamic network
conditions and the quality of its training data. Additionally, our long-term
greedy strategy in client selection tackles the problem of system performance
degradation caused by short-term scheduling in a dynamic network. Lastly, to
balance the trade-off between client performance evaluation and client
manipulation granularity, we dynamically adjust the length of the observation
window in the training process to optimize long-term system efficiency.
Compared with the state-of-the-art client selection scheme in FL, DynamicFL
achieves better model accuracy while consuming only 18.9% to 84.0% of the
wall-clock time. Our component-wise and sensitivity studies further
demonstrate the robustness of DynamicFL under various real-life scenarios.
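The abstract describes the selection policy only at a high level. As a minimal
sketch of the idea, assuming a simple additive utility over predicted latency
and data quality (the Client fields, the weight alpha, and the scoring formula
are illustrative assumptions, not the paper's exact method):

```python
# Illustrative DynamicFL-style selection step: prefer clients whose networks
# are predicted to be fast and whose local data is useful.
from dataclasses import dataclass

@dataclass
class Client:
    cid: int
    predicted_latency_s: float  # forecast from the client's recent network dynamics
    data_quality: float         # utility of its local training data, in [0, 1]

def select_clients(clients, k, alpha=0.5):
    """Rank by alpha * quality - (1 - alpha) * normalized latency; take top k."""
    max_lat = max(c.predicted_latency_s for c in clients) or 1.0
    def score(c):
        return alpha * c.data_quality - (1 - alpha) * c.predicted_latency_s / max_lat
    return sorted(clients, key=score, reverse=True)[:k]

clients = [Client(0, 0.8, 0.90), Client(1, 3.5, 0.95), Client(2, 0.5, 0.40)]
print([c.cid for c in select_clients(clients, k=2)])  # -> [0, 2]
```

A long-term greedy variant would keep such scores over an observation window
and re-select only when the window closes, rather than every round.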
Related papers
- HierSFL: Local Differential Privacy-aided Split Federated Learning in Mobile Edge Computing [7.180235086275924]
Federated Learning is a promising approach for learning from user data while preserving data privacy.
Split Federated Learning is utilized, where clients upload their intermediate model training outcomes to a cloud server for collaborative server-client model training.
This methodology facilitates resource-constrained clients' participation in model training but also increases the training time and communication overhead.
We propose a novel algorithm, called Hierarchical Split Federated Learning (HierSFL), that aggregates models at both the edge and the cloud.
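As an illustration of the hierarchical idea only, the sketch below aggregates
client updates first at edge servers and then at the cloud via sample-weighted
averaging; this generic two-tier FedAvg is a stand-in, not HierSFL's actual
procedure (which also involves split training and local differential privacy):

```python
# Two-tier aggregation sketch: edge servers average their clients' updates,
# then the cloud averages the edge results.
import numpy as np

def fedavg(updates, weights):
    """Weighted average of parameter vectors."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * u for wi, u in zip(w, updates))

# Client updates grouped by edge server; weights are local sample counts.
edge_groups = {
    "edge0": ([np.array([1.0, 2.0]), np.array([3.0, 4.0])], [100, 300]),
    "edge1": ([np.array([0.0, 1.0])], [200]),
}

edge_models, edge_sizes = {}, {}
for name, (updates, sizes) in edge_groups.items():
    edge_models[name] = fedavg(updates, sizes)  # aggregation at the edge
    edge_sizes[name] = sum(sizes)

# The cloud aggregates edge models, weighted by samples behind each edge.
print(fedavg(list(edge_models.values()), list(edge_sizes.values())))
```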
arXiv Detail & Related papers (2024-01-16T09:34:10Z)
- Speed Up Federated Learning in Heterogeneous Environment: A Dynamic Tiering Approach [5.504000607257414]
Federated learning (FL) enables collaborative model training while keeping the training data decentralized and private.
One significant impediment to training a model using FL, especially large models, is the resource constraints of devices with heterogeneous computation and communication capacities as well as varying task sizes.
We propose the Dynamic Tiering-based Federated Learning (DTFL) system where slower clients dynamically offload part of the model to the server to alleviate resource constraints and speed up training.
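As a rough sketch of the tiering idea, assuming known per-layer compute costs
and a per-round deadline (the speed model and all names are illustrative
assumptions):

```python
# Assign a "tier": how many leading layers a client computes locally before
# offloading the remainder to the server, given its compute speed.
def assign_tier(client_speed, layer_costs, round_deadline_s):
    kept, elapsed = 0, 0.0
    for cost in layer_costs:
        if elapsed + cost / client_speed > round_deadline_s:
            break  # this layer would blow the deadline; offload from here on
        elapsed += cost / client_speed
        kept += 1
    return kept  # layers [kept:] run on the server

layer_costs = [1.0, 2.0, 4.0, 4.0]  # relative cost per layer
print(assign_tier(client_speed=2.0, layer_costs=layer_costs,
                  round_deadline_s=3.0))  # -> 2: a slow client keeps 2 layers
```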
arXiv Detail & Related papers (2023-12-09T19:09:19Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
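A minimal sketch of such a split, assuming unstructured magnitude pruning for
the shared part (the pruning rule and all names are illustrative assumptions):

```python
# Shared "global part" is pruned and synchronized; "personalized part" stays
# on-device and is fine-tuned locally.
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude fraction of the weights."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(weights) > threshold, weights, 0.0)

rng = np.random.default_rng(0)
global_part = {"backbone": rng.standard_normal((4, 4))}  # shared across devices
personal_part = {"head": rng.standard_normal((4, 2))}    # never leaves the device

global_part["backbone"] = magnitude_prune(global_part["backbone"], sparsity=0.5)
print((global_part["backbone"] == 0).mean())  # -> 0.5
```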
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Joint Age-based Client Selection and Resource Allocation for Communication-Efficient Federated Learning over NOMA Networks [8.030674576024952]
In federated learning (FL), distributed clients can collaboratively train a shared global model while retaining their own training data locally.
In this paper, a joint optimization problem of client selection and resource allocation is formulated, aiming to minimize the total time consumption of each round in FL over a non-orthogonal multiple access (NOMA) enabled wireless network.
In addition, a server-side artificial neural network (ANN) is proposed to predict the FL models of clients who are not selected at each round to further improve FL performance.
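As a toy illustration of the round-time objective alone: a round lasts as long
as its slowest selected client, so with fixed per-client times minimizing it
reduces to picking the fastest clients. The actual formulation couples upload
times through the NOMA rate model and adds the ANN predictor, both beyond this
sketch:

```python
# Round duration = slowest selected client's compute + upload time.
def round_time(selected):
    return max(c["compute_s"] + c["upload_s"] for c in selected)

def pick_fastest(clients, k):
    return sorted(clients, key=lambda c: c["compute_s"] + c["upload_s"])[:k]

clients = [
    {"id": 0, "compute_s": 1.0, "upload_s": 0.4},
    {"id": 1, "compute_s": 0.6, "upload_s": 2.0},
    {"id": 2, "compute_s": 0.9, "upload_s": 0.3},
]
sel = pick_fastest(clients, k=2)
print([c["id"] for c in sel], round_time(sel))  # -> [2, 0] 1.4
```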
arXiv Detail & Related papers (2023-04-18T13:58:16Z)
- Client Selection for Generalization in Accelerated Federated Learning: A Multi-Armed Bandit Approach [20.300740276237523]
Federated learning (FL) is an emerging machine learning (ML) paradigm used to train models across multiple nodes (i.e., clients) holding local data sets.
We develop a novel algorithm to achieve this goal, dubbed Bandit Scheduling for FL (BSFL).
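The summary gives no details of BSFL itself; as a generic illustration of
bandit-style client scheduling, the sketch below uses UCB1 scores (mean
observed utility plus an exploration bonus), with an assumed reward signal:

```python
# UCB1-style scheduling: balance exploiting clients that helped before with
# exploring rarely-selected ones.
import math, random

def ucb_select(counts, rewards, t, k):
    def score(i):
        if counts[i] == 0:
            return float("inf")  # try every client at least once
        return rewards[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i])
    return sorted(range(len(counts)), key=score, reverse=True)[:k]

random.seed(0)
n = 5
counts, rewards = [0] * n, [0.0] * n
for t in range(1, 51):
    for i in ucb_select(counts, rewards, t, k=2):
        counts[i] += 1
        rewards[i] += random.gauss(0.1 * i, 0.05)  # stand-in for utility
print(counts)  # higher-utility clients end up scheduled more often
```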
arXiv Detail & Related papers (2023-03-18T09:45:58Z)
- FedLE: Federated Learning Client Selection with Lifespan Extension for Edge IoT Networks [34.63384007690422]
Federated learning (FL) is a distributed and privacy-preserving learning framework for predictive modeling with massive data generated at the edge by Internet of Things (IoT) devices.
One major challenge preventing the wide adoption of FL in IoT is the pervasive power supply constraints of IoT devices.
We propose FedLE, an energy-efficient client selection framework that extends the lifespan of edge IoT networks.
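As an illustrative sketch of energy-aware selection in this spirit (the
eligibility threshold and ranking are assumptions, not FedLE's algorithm):

```python
# Prefer clients with the most remaining battery so no device drains early,
# extending how long the whole edge network can keep training.
def select_energy_aware(clients, k, cost_per_round_j=50.0):
    eligible = [c for c in clients if c["battery_j"] > cost_per_round_j]
    return sorted(eligible, key=lambda c: c["battery_j"], reverse=True)[:k]

clients = [
    {"id": 0, "battery_j": 900.0},
    {"id": 1, "battery_j": 40.0},  # too low to survive a round; skipped
    {"id": 2, "battery_j": 300.0},
]
print([c["id"] for c in select_energy_aware(clients, k=2)])  # -> [0, 2]
```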
arXiv Detail & Related papers (2023-02-14T19:41:24Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distributions.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL has not yet been explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
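A minimal sketch of online selection under a fixed storage budget, assuming a
per-sample utility score (a placeholder; the paper's actual criterion is not
given in this summary):

```python
# Keep the highest-utility samples seen so far; evict the least useful when
# the on-device buffer is full.
import heapq

class SampleBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.heap = []  # min-heap of (utility, sample); root is least useful

    def offer(self, sample, utility):
        if len(self.heap) < self.capacity:
            heapq.heappush(self.heap, (utility, sample))
        elif utility > self.heap[0][0]:
            heapq.heapreplace(self.heap, (utility, sample))

buf = SampleBuffer(capacity=3)
for i, u in enumerate([0.2, 0.9, 0.1, 0.7, 0.5]):
    buf.offer("x%d" % i, u)
print(sorted(buf.heap, reverse=True))  # kept utilities: 0.9, 0.7, 0.5
```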
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and analyze its training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
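FedReg's exact mechanism is not given in this summary; as a plainly labeled
stand-in, the sketch below shows the generic related idea of alleviating
forgetting by regularizing local updates toward the global weights (a
FedProx-style proximal term):

```python
# Local SGD step on: task loss + (mu / 2) * ||w_local - w_global||^2.
# The proximal term anchors local training to the global model, limiting
# how much previously learned knowledge a client can overwrite.
import numpy as np

def local_step(w_local, w_global, grad_loss, lr=0.1, mu=0.01):
    grad = grad_loss(w_local) + mu * (w_local - w_global)
    return w_local - lr * grad

w_global = np.zeros(3)
w = w_global.copy()
target = np.array([1.0, -2.0, 0.5])  # toy local task: minimize ||w - target||^2 / 2
for _ in range(200):
    w = local_step(w, w_global, lambda v: v - target)
print(w)  # close to target, slightly pulled back toward w_global by mu
```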
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when the client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
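The summary does not spell out AdaFL's mechanism; the sketch below illustrates
attention-style aggregation generically, weighting each client update by a
softmax over its similarity to the mean update so that outliers produced by
extreme non-IID clients are down-weighted (all details are assumptions):

```python
# Attention-weighted aggregation of client updates.
import numpy as np

def attention_aggregate(updates, temperature=1.0):
    U = np.stack(updates)                      # (n_clients, dim)
    mean = U.mean(axis=0)
    sims = U @ mean / (np.linalg.norm(U, axis=1) * np.linalg.norm(mean) + 1e-12)
    w = np.exp(sims / temperature)
    w = w / w.sum()                            # softmax attention weights
    return w @ U, w

updates = [np.array([1.0, 0.0]), np.array([0.9, 0.1]), np.array([-1.0, 0.0])]
agg, w = attention_aggregate(updates)
print(np.round(w, 3), np.round(agg, 3))  # outlier client gets the least weight
```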
arXiv Detail & Related papers (2021-08-12T14:18:05Z)