Federated Learning Beyond the Star: Local D2D Model Consensus with
Global Cluster Sampling
- URL: http://arxiv.org/abs/2109.03350v1
- Date: Tue, 7 Sep 2021 21:48:17 GMT
- Title: Federated Learning Beyond the Star: Local D2D Model Consensus with
Global Cluster Sampling
- Authors: Frank Po-Chen Lin, Seyyedali Hosseinalipour, Sheikh Shams Azam,
Christopher G. Brinton, and Nicolò Michelusi
- Abstract summary: Federated learning has emerged as a popular technique for distributing model training across the network edge.
We propose two timescale hybrid federated learning (TT-HF), which migrates to a more distributed topology via device-to-device (D2D) communications.
Experimental results demonstrate the improvements in convergence and utilization that can be obtained by TT-HF over state-of-the-art federated learning baselines.
- Score: 9.976132745670458
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning has emerged as a popular technique for distributing model
training across the network edge. Its learning architecture is conventionally a
star topology between the devices and a central server. In this paper, we
propose two timescale hybrid federated learning (TT-HF), which migrates to a
more distributed topology via device-to-device (D2D) communications. In TT-HF,
local model training occurs at devices via successive gradient iterations, and
the synchronization process occurs at two timescales: (i) macro-scale, where
global aggregations are carried out via device-server interactions, and (ii)
micro-scale, where local aggregations are carried out via D2D cooperative
consensus formation in different device clusters. Our theoretical analysis
reveals how device, cluster, and network-level parameters affect the
convergence of TT-HF, and leads to a set of conditions under which a
convergence rate of O(1/t) is guaranteed. Experimental results demonstrate the
improvements in convergence and utilization that can be obtained by TT-HF over
state-of-the-art federated learning baselines.
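To make the two-timescale structure concrete, here is a minimal NumPy sketch of one macro period of the TT-HF idea: devices run successive local gradient iterations, periodically form D2D consensus within their clusters (micro-scale), and the server aggregates across all devices at the end (macro-scale). The quadratic losses, cluster assignments, mixing weight, and interval lengths are illustrative assumptions, not the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5                                   # model dimension
clusters = [[0, 1, 2], [3, 4]]          # illustrative device clusters
n = sum(len(c) for c in clusters)

# Synthetic local losses f_i(w) = 0.5 * ||w - w_i*||^2,
# so each device's gradient is simply (w - w_i*).
targets = rng.normal(size=(n, d))
models = [rng.normal(size=d) for _ in range(n)]

eta = 0.1        # local SGD step size
tau = 4          # local iterations between D2D consensus rounds (micro-scale)
K = 3            # local periods between global aggregations (macro-scale)
gamma = 2        # D2D consensus iterations per micro-scale sync

for _ in range(K):                      # one macro period = K micro periods
    for _ in range(tau):                # successive local gradient iterations
        for i in range(n):
            models[i] = models[i] - eta * (models[i] - targets[i])
    # Micro-scale: D2D cooperative consensus formation within each cluster.
    for cluster in clusters:
        for _ in range(gamma):
            avg = np.mean([models[i] for i in cluster], axis=0)
            for i in cluster:           # uniform mixing: step toward cluster mean
                models[i] = 0.5 * models[i] + 0.5 * avg

# Macro-scale: device-server interaction forms the global model.
global_model = np.mean(models, axis=0)
print("global model after one macro round:", global_model)
```

Separating the two loops makes it easy to see which knobs the paper's convergence conditions act on: the local step size, the number of local iterations per period, and the number of D2D consensus rounds.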
Related papers
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
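As a concrete illustration of the "TC" stream described above, the following sketch converts a 1-D behavioral signal into a 2-D time-frequency tensor with the Continuous Wavelet Transform via PyWavelets. The synthetic signal, scale range, and Morlet wavelet are illustrative assumptions; the paper's actual inputs and wavelet choice may differ.

```python
import numpy as np
import pywt

# Illustrative 1-D behavioral signal: two tones plus noise.
fs = 128                                  # assumed sampling rate (Hz)
t = np.arange(0, 4, 1 / fs)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
signal += 0.1 * np.random.default_rng(0).normal(size=t.size)

# Continuous Wavelet Transform: 1-D signal -> 2-D (scale x time) tensor.
scales = np.arange(1, 65)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

tensor = np.abs(coeffs)                   # 2-D magnitude "image" for a CNN stream
print(tensor.shape)                       # (64, 512): scales x time samples
```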
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
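A minimal sketch of the layer-wise intuition behind SALF: backpropagation computes gradients from the last layer backward, so a straggler cut off mid-round can still report updates for its deeper layers, and the server aggregates each layer over whichever devices reached it. The layer shapes, random "gradients", and cutoff depths below are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(1)
layer_shapes = [(4, 3), (3, 3), (3, 1)]          # illustrative 3-layer model
global_model = [rng.normal(size=s) for s in layer_shapes]
L = len(layer_shapes)

def partial_update(depth_reached):
    """Backprop runs last layer -> first, so a device cut off early only
    has gradients for its deepest `depth_reached` layers; the rest are None."""
    update = [None] * L
    for l in range(L - 1, L - 1 - depth_reached, -1):
        update[l] = rng.normal(scale=0.01, size=layer_shapes[l])
    return update

# Stragglers reach different depths within the round deadline.
updates = [partial_update(depth) for depth in (3, 2, 1)]  # one full, two stragglers

# Layer-wise aggregation: average each layer over the devices that reached it.
for l in range(L):
    contribs = [u[l] for u in updates if u[l] is not None]
    if contribs:
        global_model[l] = global_model[l] + np.mean(contribs, axis=0)

print([g.shape for g in global_model])
```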
- Event-Triggered Decentralized Federated Learning over Resource-Constrained Edge Devices [12.513477328344255]
Federated learning (FL) is a technique for distributed machine learning (ML).
In traditional FL algorithms, trained models at the edge are periodically sent to a central server for aggregation.
We develop a novel methodology for fully decentralized FL, where devices conduct model aggregation via cooperative consensus formation.
arXiv Detail & Related papers (2022-11-23T00:04:05Z)
- Decentralized Event-Triggered Federated Learning with Heterogeneous Communication Thresholds [12.513477328344255]
We propose a novel methodology for distributed model aggregations via asynchronous, event-triggered consensus iterations over a network graph topology.
We demonstrate that our methodology achieves the globally optimal learning model under standard assumptions in distributed learning and graph consensus literature.
arXiv Detail & Related papers (2022-04-07T20:35:37Z)
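The two event-triggered papers above share a common primitive: a device re-broadcasts its model to its D2D neighbors only when it has drifted far enough from the value it last shared, with thresholds that may differ per device. Below is a minimal sketch of that trigger rule; the graph, thresholds, drift measure, and local update are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 4
neighbors = {0: [1], 1: [0, 2], 2: [1]}          # illustrative line graph
models = {i: rng.normal(size=d) for i in neighbors}
last_broadcast = {i: models[i].copy() for i in neighbors}
thresholds = {0: 0.05, 1: 0.20, 2: 0.10}         # heterogeneous per-device thresholds

def local_step(w):
    # Stand-in for a local SGD step on device data.
    return w - 0.1 * (w - 1.0) + rng.normal(scale=0.01, size=w.shape)

for t in range(50):
    for i in neighbors:
        models[i] = local_step(models[i])
        drift = np.linalg.norm(models[i] - last_broadcast[i])
        if drift > thresholds[i]:                 # event trigger fires
            last_broadcast[i] = models[i].copy()
    # Consensus: mix with the most recently *broadcast* neighbor models,
    # which may be stale if a neighbor's trigger has not fired.
    new = {}
    for i, nbrs in neighbors.items():
        mix = [last_broadcast[j] for j in nbrs] + [models[i]]
        new[i] = np.mean(mix, axis=0)
    models = new

print({i: np.round(models[i], 2) for i in models})
```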
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed up models, and model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z)
- Two Timescale Hybrid Federated Learning with Cooperative D2D Local Model Aggregations [10.702853653891902]
Federated learning has emerged as a popular technique for distributing machine learning (ML) model training across the wireless edge.
We propose two timescale hybrid federated learning (TT-HF), which is a hybrid between the device-to-server communication paradigm in federated learning and device-to-device (D2D) communications for model training.
arXiv Detail & Related papers (2021-03-18T18:58:45Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
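FOLB's distinguishing step is non-uniform device sampling in each round. The sketch below scores devices by local gradient norm and samples proportionally, reweighting the selected updates by inverse probability; this scoring proxy is an illustrative assumption, not FOLB's exact selection criterion.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, k = 10, 5, 4                        # devices, model dim, devices sampled per round
local_grads = rng.normal(size=(n, d))     # stand-ins for devices' local gradients

# Importance score per device; FOLB weighs expected contribution to convergence.
scores = np.linalg.norm(local_grads, axis=1)
probs = scores / scores.sum()

# Sample k devices proportionally to their scores (without replacement).
chosen = rng.choice(n, size=k, replace=False, p=probs)

# Reweight selected gradients by inverse probability so rarely-chosen
# devices are not systematically underrepresented in the aggregate.
agg = np.zeros(d)
for i in chosen:
    agg += local_grads[i] / (n * probs[i])
agg /= k
print("sampled devices:", sorted(chosen), "aggregate:", np.round(agg, 3))
```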
- Multi-Stage Hybrid Federated Learning over Large-Scale D2D-Enabled Fog Networks [61.30171206892684]
We develop a hybrid of intra- and inter-layer model learning that considers the network as a multi-layer cluster-based structure.
MH-FL considers the topology structures among the nodes in the clusters, including local networks formed via device-to-device (D2D) communications.
It orchestrates the devices at different network layers in a collaborative/cooperative manner to form local consensus on the model parameters.
arXiv Detail & Related papers (2020-07-18T20:03:07Z)
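A minimal sketch of the multi-stage pattern in MH-FL: D2D clusters at the bottom layer iterate to a local consensus, cluster heads carry the result to the layer above, and the top of the hierarchy forms the global model. The two-layer tree and uniform averaging weights are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
d = 3
# Layer 0: leaf devices grouped into D2D clusters; layer 1: cluster heads.
clusters = [[0, 1, 2], [3, 4], [5, 6, 7]]
models = [rng.normal(size=d) for _ in range(8)]

def cluster_consensus(ids, rounds=3):
    """Iterative D2D averaging within one cluster until models roughly agree."""
    local = [models[i] for i in ids]
    for _ in range(rounds):
        avg = np.mean(local, axis=0)
        local = [0.5 * w + 0.5 * avg for w in local]
    return local[0]            # any member now approximates the cluster consensus

# Stage 1: each cluster forms a local consensus; its head carries it upward.
head_models = [cluster_consensus(c) for c in clusters]

# Stage 2: the next layer aggregates the cluster heads into the global model.
global_model = np.mean(head_models, axis=0)
print("global model:", np.round(global_model, 3))
```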
- From Federated to Fog Learning: Distributed Machine Learning over Heterogeneous Wireless Networks [71.23327876898816]
Federated learning has emerged as a technique for training ML models at the network edge by leveraging processing capabilities across the nodes that collect the data.
We advocate a new learning paradigm called fog learning which will intelligently distribute ML model training across the continuum of nodes from edge devices to cloud servers.
arXiv Detail & Related papers (2020-06-07T05:11:18Z)