Accelerating Federated Learning over Reliability-Agnostic Clients in
Mobile Edge Computing Systems
- URL: http://arxiv.org/abs/2007.14374v3
- Date: Fri, 23 Apr 2021 10:01:09 GMT
- Authors: Wentai Wu, Ligang He, Weiwei Lin, Rui Mao
- Abstract summary: Federated learning has emerged as a promising privacy-preserving approach to facilitating AI applications.
It remains a big challenge to optimize the efficiency and effectiveness of FL when it is integrated with the MEC architecture.
In this paper, a multi-layer federated learning protocol called HybridFL is designed for the MEC architecture.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Mobile Edge Computing (MEC), which incorporates the Cloud, edge nodes and end
devices, has shown great potential in bringing data processing closer to the
data sources. Meanwhile, Federated learning (FL) has emerged as a promising
privacy-preserving approach to facilitating AI applications. However, it
remains a big challenge to optimize the efficiency and effectiveness of FL when
it is integrated with the MEC architecture. Moreover, the unreliable nature
(e.g., stragglers and intermittent drop-out) of end devices significantly slows
down the FL process and degrades the global model's quality in such
circumstances. In this paper, a multi-layer federated learning protocol called
HybridFL is designed for the MEC architecture. HybridFL adopts two levels (the
edge level and the cloud level) of model aggregation enacting different
aggregation strategies. Moreover, in order to mitigate stragglers and end
device drop-out, we introduce regional slack factors into the stage of client
selection performed at the edge nodes using a probabilistic approach without
identifying or probing the state of end devices (whose reliability is
agnostic). We demonstrate the effectiveness of our method in modulating the
proportion of clients selected and present the convergence analysis for our
protocol. We have conducted extensive experiments with machine learning tasks
in MEC systems of different scales. The results show that HybridFL improves the
FL training process significantly in terms of shortening the federated round
length, speeding up the global model's convergence (by up to 12X) and reducing
end device energy consumption (by up to 58%).
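The two-level protocol described in the abstract can be illustrated with a minimal sketch. This is an assumption-laden illustration, not the paper's actual implementation: it assumes FedAvg-style weighted averaging at both levels, represents models as plain parameter lists, and uses a hypothetical per-region slack factor that simply inflates the selection probability without probing client state.

```python
import random

def select_clients(region_clients, base_rate, slack):
    """Probabilistic client selection within one edge region.

    Each client is picked independently with probability
    base_rate * (1 + slack); `slack` is a hypothetical regional slack
    factor that raises the selection rate to compensate for expected
    stragglers and drop-outs, without probing any client's state.
    """
    p = min(1.0, base_rate * (1 + slack))
    return [c for c in region_clients if random.random() < p]

def weighted_average(models, weights):
    """FedAvg-style weighted average of model parameter vectors."""
    total = sum(weights)
    dim = len(models[0])
    return [sum(w * m[i] for m, w in zip(models, weights)) / total
            for i in range(dim)]

def hybrid_round(regions, base_rate):
    """One federated round: edge-level then cloud-level aggregation.

    `regions` is a list of (clients, slack) pairs, where each client is
    a dict with a trained "model" and its local "n_samples" count.
    """
    edge_models, edge_sizes = [], []
    for clients, slack in regions:
        selected = select_clients(clients, base_rate, slack)
        if not selected:
            continue  # no update from this region this round
        # Edge level: aggregate only the updates that actually arrived.
        updates = [c["model"] for c in selected]
        sizes = [c["n_samples"] for c in selected]
        edge_models.append(weighted_average(updates, sizes))
        edge_sizes.append(sum(sizes))
    # Cloud level: aggregate edge models weighted by regional data volume.
    return weighted_average(edge_models, edge_sizes)
```

Decoupling the two levels this way lets each edge node close its local round as soon as enough selected clients report back, which is what shortens the federated round length when stragglers are present.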
Related papers
- Heterogeneity-Aware Resource Allocation and Topology Design for Hierarchical Federated Edge Learning [9.900317349372383]
Federated Learning (FL) provides a privacy-preserving framework for training machine learning models on mobile edge devices.
Traditional FL algorithms, e.g., FedAvg, impose a heavy communication workload on these devices.
We propose a two-tier HFEL system, where edge devices are connected to edge servers and edge servers are interconnected through peer-to-peer (P2P) edge backhauls.
Our goal is to enhance the training efficiency of the HFEL system through strategic resource allocation and topology design.
arXiv Detail & Related papers (2024-09-29T01:48:04Z) - Agglomerative Federated Learning: Empowering Larger Model Training via End-Edge-Cloud Collaboration [10.90126132493769]
Agglomerative Federated Learning (FedAgg) is a novel EECC-empowered FL framework that allows the trained models from end, edge, to cloud to grow larger in size and stronger in generalization ability.
FedAgg outperforms state-of-the-art methods by an average of 4.53% in accuracy and achieves remarkable improvements in convergence rate.
arXiv Detail & Related papers (2023-12-01T06:18:45Z) - Semi-Federated Learning: Convergence Analysis and Optimization of A
Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z) - Adaptive Model Pruning and Personalization for Federated Learning over
Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z) - Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation
and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recent FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z) - Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distributions.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z) - AnycostFL: Efficient On-Demand Federated Learning over Heterogeneous
Edge Devices [20.52519915112099]
We propose a cost-adjustable FL framework, named AnycostFL, that enables diverse edge devices to efficiently perform local updates.
Experiment results indicate that our learning framework can reduce training latency and energy consumption by up to 1.9x while achieving a reasonable global testing accuracy.
arXiv Detail & Related papers (2023-01-08T15:25:55Z) - Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL remains unexplored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z) - Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized
Floating Aggregation Point [51.47520726446029]
Cooperative edge learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and conduct analytical training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
arXiv Detail & Related papers (2022-03-26T00:41:57Z) - Reconfigurable Intelligent Surface Enabled Federated Learning: A Unified
Communication-Learning Design Approach [30.1988598440727]
We develop a unified communication-learning optimization problem to jointly optimize device selection, over-the-air transceiver design, and RIS configuration.
Numerical experiments show that the proposed design achieves substantial learning accuracy improvement compared with the state-of-the-art approaches.
arXiv Detail & Related papers (2020-11-20T08:54:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.