Two Timescale Hybrid Federated Learning with Cooperative D2D Local Model
Aggregations
- URL: http://arxiv.org/abs/2103.10481v1
- Date: Thu, 18 Mar 2021 18:58:45 GMT
- Title: Two Timescale Hybrid Federated Learning with Cooperative D2D Local Model
Aggregations
- Authors: Frank Po-Chen Lin, Seyyedali Hosseinalipour, Sheikh Shams Azam,
Christopher G. Brinton, Nicolo Michelusi
- Abstract summary: Federated learning has emerged as a popular technique for distributing machine learning (ML) model training across the wireless edge.
We propose two timescale hybrid federated learning (TT-HF), which is a hybrid between the device-to-server communication paradigm in federated learning and device-to-device (D2D) communications for model training.
- Score: 10.702853653891902
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning has emerged as a popular technique for distributing
machine learning (ML) model training across the wireless edge. In this paper,
we propose two timescale hybrid federated learning (TT-HF), which is a hybrid
between the device-to-server communication paradigm in federated learning and
device-to-device (D2D) communications for model training. In TT-HF, during each
global aggregation interval, devices (i) perform multiple stochastic gradient
descent iterations on their individual datasets, and (ii) aperiodically engage
in consensus formation of their model parameters through cooperative,
distributed D2D communications within local clusters. With a new general
definition of gradient diversity, we formally study the convergence behavior of
TT-HF, resulting in new convergence bounds for distributed ML. We leverage our
convergence bounds to develop an adaptive control algorithm that tunes the step
size, D2D communication rounds, and global aggregation period of TT-HF over
time to target a sublinear convergence rate of O(1/t) while minimizing network
resource utilization. Our subsequent experiments demonstrate that TT-HF
significantly outperforms the current art in federated learning in terms of
model accuracy and/or network energy consumption in different scenarios where
local device datasets exhibit statistical heterogeneity.
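To make the two-timescale structure concrete, below is a minimal NumPy sketch of the procedure the abstract describes: during each global aggregation interval, devices run local SGD on their own data and aperiodically average parameters with their cluster neighbors over D2D links, after which the server aggregates across clusters. The toy least-squares objective, the ring mixing matrix, the fixed schedule constants (LOCAL_STEPS, D2D_ROUNDS, CONSENSUS_EVERY), and the step-size choice are illustrative assumptions of this sketch, not the paper's algorithm or its adaptive control scheme, which tunes these quantities from the derived convergence bounds.

```python
# Minimal sketch of a two-timescale hybrid FL loop (local SGD + D2D consensus
# + periodic device-to-server aggregation). Constants and the toy objective
# are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

DIM = 10                    # model dimension
CLUSTERS = 3                # number of D2D clusters
DEVICES_PER_CLUSTER = 4
LOCAL_STEPS = 5             # SGD iterations per global aggregation interval
GLOBAL_ROUNDS = 20
D2D_ROUNDS = 2              # consensus iterations when a cluster cooperates
CONSENSUS_EVERY = 2         # "aperiodic" D2D modeled here as every other step

# Toy heterogeneous data: device d in cluster c holds a least-squares problem
# min_w 0.5 * ||A_cd w - b_cd||^2, with a cluster-dependent shift to mimic
# statistical heterogeneity across local datasets.
A = [[rng.normal(size=(20, DIM)) for _ in range(DEVICES_PER_CLUSTER)]
     for _ in range(CLUSTERS)]
b = [[A[c][d] @ rng.normal(loc=c, size=DIM) + 0.1 * rng.normal(size=20)
      for d in range(DEVICES_PER_CLUSTER)]
     for c in range(CLUSTERS)]

def local_grad(c, d, w):
    """Mini-batch stochastic gradient on device d of cluster c."""
    idx = rng.choice(20, size=5, replace=False)
    Ad, bd = A[c][d][idx], b[c][d][idx]
    return Ad.T @ (Ad @ w - bd) / len(idx)

def consensus(cluster_models, rounds):
    """Cooperative D2D averaging within one cluster.

    Uses a doubly stochastic mixing matrix over a ring topology; each round
    pulls the devices' parameters toward the cluster average with no server
    involvement.
    """
    n = len(cluster_models)
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = 0.5
        W[i, (i - 1) % n] = 0.25
        W[i, (i + 1) % n] = 0.25
    X = np.stack(cluster_models)          # shape (n, DIM)
    for _ in range(rounds):
        X = W @ X
    return [X[i] for i in range(n)]

w_global = np.zeros(DIM)
t = 0
for k in range(GLOBAL_ROUNDS):
    # Server broadcasts the global model; devices synchronize to it.
    models = [[w_global.copy() for _ in range(DEVICES_PER_CLUSTER)]
              for _ in range(CLUSTERS)]
    for step in range(LOCAL_STEPS):
        t += 1
        eta = 1.0 / (t + 10)              # decaying step size (tuned adaptively in the paper)
        for c in range(CLUSTERS):
            for d in range(DEVICES_PER_CLUSTER):
                models[c][d] -= eta * local_grad(c, d, models[c][d])
            if (step + 1) % CONSENSUS_EVERY == 0:
                models[c] = consensus(models[c], D2D_ROUNDS)
    # Device-to-server aggregation: sample one device per cluster, relying on
    # the preceding consensus rounds to have pulled cluster models together
    # (an assumption of this sketch; the paper's sampling/weighting may differ).
    sampled = [models[c][rng.integers(DEVICES_PER_CLUSTER)] for c in range(CLUSTERS)]
    w_global = np.mean(sampled, axis=0)

print("final global model (first 3 coords):", w_global[:3])
```

Because the mixing matrix is doubly stochastic, repeated D2D rounds drive each cluster toward its local average, which is what lets the server sample only a subset of devices (here, one per cluster) without losing much information; in the paper, the adaptive control algorithm decides how many consensus rounds and how long a global aggregation period are worth their network resource cost.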
Related papers
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
arXiv Detail & Related papers (2024-03-27T09:14:36Z) - Semi-Federated Learning: Convergence Analysis and Optimization of A
Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve the resulting intractable problem, in which we provide closed-form solutions for the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z) - FedDCT: A Dynamic Cross-Tier Federated Learning Framework in Wireless Networks [5.914766366715661]
Federated Learning (FL) trains a global model across devices without exposing local data.
Resource heterogeneity and inevitable stragglers in wireless networks severely impact the efficiency and accuracy of FL training.
We propose a novel Dynamic Cross-Tier Federated Learning framework (FedDCT) to address these challenges.
arXiv Detail & Related papers (2023-07-10T08:54:07Z) - Vertical Federated Learning over Cloud-RAN: Convergence Analysis and
System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z) - Decentralized Event-Triggered Federated Learning with Heterogeneous
Communication Thresholds [12.513477328344255]
We propose a novel methodology for distributed model aggregations via asynchronous, event-triggered consensus iterations over a network graph topology.
We demonstrate that our methodology achieves the globally optimal learning model under standard assumptions in distributed learning and graph consensus literature.
arXiv Detail & Related papers (2022-04-07T20:35:37Z) - Time-Correlated Sparsification for Efficient Over-the-Air Model
Aggregation in Wireless Federated Learning [23.05003652536773]
Federated edge learning (FEEL) is a promising distributed machine learning (ML) framework to drive edge intelligence applications.
We propose time-correlated sparsification with hybrid aggregation (TCS-H) for communication-efficient FEEL.
arXiv Detail & Related papers (2022-02-17T02:48:07Z) - Parallel Successive Learning for Dynamic Distributed Model Training over
Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed-up models, and on model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z) - Federated Learning Beyond the Star: Local D2D Model Consensus with
Global Cluster Sampling [9.976132745670458]
Federated learning has emerged as a popular technique for distributing model training across the network edge.
We propose two timescale hybrid federated learning (TT-HF), which migrates to a more distributed topology via device-to-device (D2D) communications.
Experimental results demonstrate the improvements in convergence and utilization that can be obtained by TT-HF over state-of-the-art federated learning baselines.
arXiv Detail & Related papers (2021-09-07T21:48:17Z) - Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z) - Multi-Stage Hybrid Federated Learning over Large-Scale D2D-Enabled Fog
Networks [61.30171206892684]
We develop a hybrid of intra- and inter-layer model learning that considers the network as a multi-layer cluster-based structure.
The proposed multi-stage hybrid federated learning (MH-FL) approach considers the topology structures among the nodes in the clusters, including local networks formed via device-to-device (D2D) communications.
It orchestrates the devices at different network layers in a collaborative/cooperative manner to form local consensus on the model parameters.
arXiv Detail & Related papers (2020-07-18T20:03:07Z)