Multi-Edge Server-Assisted Dynamic Federated Learning with an Optimized
Floating Aggregation Point
- URL: http://arxiv.org/abs/2203.13950v1
- Date: Sat, 26 Mar 2022 00:41:57 GMT
- Authors: Bhargav Ganguly, Seyyedali Hosseinalipour, Kwang Taik Kim, Christopher
G. Brinton, Vaneet Aggarwal, David J. Love, Mung Chiang
- Abstract summary: Cooperative edge-assisted dynamic federated learning (CE-FL) is a distributed machine learning architecture.
We model the processes involved in CE-FL and conduct analytical convergence analysis of its model training.
We show the effectiveness of our framework with the data collected from a real-world testbed.
- Score: 51.47520726446029
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose cooperative edge-assisted dynamic federated learning (CE-FL).
CE-FL introduces a distributed machine learning (ML) architecture, where data
collection is carried out at the end devices, while the model training is
conducted cooperatively at the end devices and the edge servers, enabled via
data offloading from the end devices to the edge servers through base stations.
CE-FL also introduces a floating aggregation point, where the local models
generated at the devices and the servers are aggregated at an edge server,
which varies from one model training round to another to cope with the network
evolution in terms of data distribution and users' mobility. CE-FL considers
the heterogeneity of network elements in terms of communication/computation
models and their proximity to one another. CE-FL further presumes a dynamic
environment with online variation of data at the network devices, which causes
a drift in ML model performance. We model the processes involved in CE-FL,
and conduct analytical convergence analysis of its ML model training. We then
formulate network-aware CE-FL which aims to adaptively optimize all the network
elements via tuning their contribution to the learning process, which turns out
to be a non-convex mixed integer problem. Motivated by the large scale of the
system, we propose a distributed optimization solver to break down the
computation of the solution across the network elements. We finally demonstrate
the effectiveness of our framework with the data collected from a real-world
testbed.
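The floating aggregation point described above can be sketched in a few lines. This is a minimal, illustrative example only: the cost model, the FedAvg-style weighting, and all function names (`pick_aggregation_server`, `ce_fl_round`, `comm_cost`) are assumptions for the sketch, not the paper's actual network-aware optimization, which is a non-convex mixed-integer problem solved distributively.

```python
import numpy as np

# Minimal sketch of a floating aggregation point (illustrative only):
# each round, local models are aggregated at the edge server with the
# lowest total communication cost to the current participants.

def weighted_average(models, weights):
    """FedAvg-style aggregation: weight each local model by its data share."""
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, models)) / total

def pick_aggregation_server(comm_cost):
    """comm_cost[s, d]: cost of collecting device d's model at server s.
    Returns the index of the server minimizing this round's total cost."""
    return int(np.argmin(comm_cost.sum(axis=1)))

def ce_fl_round(models, data_sizes, comm_cost):
    """One training round: choose the aggregation server, then aggregate."""
    server = pick_aggregation_server(comm_cost)
    global_model = weighted_average(models, data_sizes)
    return server, global_model

# Example: 3 candidate edge servers, 4 devices with scalar "models".
models = [np.array([1.0]), np.array([2.0]), np.array([3.0]), np.array([4.0])]
data_sizes = [10, 30, 40, 20]
comm_cost = np.array([[3.0, 1.0, 4.0, 2.0],
                      [1.0, 1.0, 1.0, 1.0],
                      [2.0, 5.0, 2.0, 3.0]])
server, global_model = ce_fl_round(models, data_sizes, comm_cost)
```

Because `comm_cost` changes as data distributions shift and users move, the chosen `server` can differ from round to round, which is the point of letting the aggregation server "float."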
Related papers
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogeneous communication and computational resource distribution.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- ON-DEMAND-FL: A Dynamic and Efficient Multi-Criteria Federated Learning Client Deployment Scheme [37.099990745974196]
We introduce On-Demand-FL, a client deployment approach for federated learning.
We make use of containerization technology such as Docker to build efficient environments.
The Genetic algorithm (GA) is used to solve the multi-objective optimization problem.
arXiv Detail & Related papers (2022-11-05T13:41:19Z)
- Enhanced Decentralized Federated Learning based on Consensus in Connected Vehicles [14.80476265018825]
Federated learning (FL) is emerging as a new paradigm to train machine learning (ML) models in distributed systems.
We introduce C-DFL (Consensus based Decentralized Federated Learning) to tackle federated learning on connected vehicles.
arXiv Detail & Related papers (2022-09-22T01:21:23Z)
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed up models, and model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Computational Intelligence and Deep Learning for Next-Generation Edge-Enabled Industrial IoT [51.68933585002123]
We investigate how to deploy computational intelligence and deep learning (DL) in edge-enabled industrial IoT networks.
In this paper, we propose a novel multi-exit-based federated edge learning (ME-FEEL) framework.
In particular, the proposed ME-FEEL can achieve an accuracy gain of up to 32.7% in industrial IoT networks with severely limited resources.
arXiv Detail & Related papers (2021-10-28T08:14:57Z)
- Device Scheduling and Update Aggregation Policies for Asynchronous Federated Learning [72.78668894576515]
Federated Learning (FL) is a newly emerged decentralized machine learning (ML) framework.
We propose an asynchronous FL framework with periodic aggregation to eliminate the straggler issue in FL systems.
arXiv Detail & Related papers (2021-07-23T18:57:08Z)
- IPLS: A Framework for Decentralized Federated Learning [6.6271520914941435]
We introduce IPLS, a fully decentralized federated learning framework that is partially based on the InterPlanetary File System (IPFS).
IPLS scales with the number of participants, is robust against intermittent connectivity and dynamic participant departures/arrivals, requires minimal resources, and guarantees that the accuracy of the trained model quickly converges to that of a centralized FL framework with an accuracy drop of less than one per thousand.
arXiv Detail & Related papers (2021-01-06T07:44:51Z)
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
- Federated Learning with Cooperating Devices: A Consensus Approach for Massive IoT Networks [8.456633924613456]
Federated learning (FL) is emerging as a new paradigm to train machine learning models in distributed systems.
The paper proposes a fully distributed (or server-less) learning approach: the proposed FL algorithms leverage the cooperation of devices that perform data operations inside the network.
The approach lays the groundwork for integration of FL within 5G and beyond networks characterized by decentralized connectivity and computing.
arXiv Detail & Related papers (2019-12-27T15:16:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all listed content) and is not responsible for any consequences of its use.