FedDec: Peer-to-peer Aided Federated Learning
- URL: http://arxiv.org/abs/2306.06715v1
- Date: Sun, 11 Jun 2023 16:30:57 GMT
- Title: FedDec: Peer-to-peer Aided Federated Learning
- Authors: Marina Costantini, Giovanni Neglia, and Thrasyvoulos Spyropoulos
- Abstract summary: Federated learning (FL) has enabled training machine learning models exploiting the data of multiple agents without compromising privacy.
FL is known to be vulnerable to data heterogeneity, partial device participation, and infrequent communication with the server.
We present FedDec, an algorithm that interleaves peer-to-peer communication and parameter averaging between the local gradient updates of FL.
- Score: 15.952956981784219
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) has enabled training machine learning models
exploiting the data of multiple agents without compromising privacy. However,
FL is known to be vulnerable to data heterogeneity, partial device
participation, and infrequent communication with the server, which are
nonetheless three distinctive characteristics of this framework. While much of
the recent literature has tackled these weaknesses using different tools, only
a few works have explored the possibility of exploiting inter-agent
communication to improve FL's performance. In this work, we present FedDec, an
algorithm that interleaves peer-to-peer communication and parameter averaging
(similar to decentralized learning in networks) between the local gradient
updates of FL. We analyze the convergence of FedDec under the assumptions of
non-iid data distribution, partial device participation, and smooth and
strongly convex costs, and show that inter-agent communication alleviates the
negative impact of infrequent communication rounds with the server by reducing
the dependence on the number of local updates $H$ from $O(H^2)$ to $O(H)$.
Furthermore, our analysis reveals that the term improved in the bound is
multiplied by a constant that depends on the spectrum of the inter-agent
communication graph, and that vanishes quickly the more connected the network
is. We confirm the predictions of our theory in numerical simulations, where we
show that FedDec converges faster than FedAvg, and that the gains are greater
as either $H$ or the connectivity of the network increases.
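To make the interleaving concrete, below is a minimal numerical sketch of an update pattern consistent with the abstract: $H$ local gradient steps, each followed by gossip averaging over a mixing matrix W, then a server round with partial device participation. The ring graph, quadratic costs, step size, and participation rule are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, H, rounds = 8, 5, 4, 60    # agents, dimension, local steps, server rounds
lr, m_part = 0.1, 4              # step size, number of participating devices

# Non-iid smooth, strongly convex local costs: f_i(x) = 0.5 * ||x - b_i||^2.
b = rng.normal(size=(n, d)) + np.arange(n)[:, None]

# Doubly stochastic gossip (mixing) matrix W for a ring graph (assumed topology).
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = 0.25

# The constant in the improved term depends on the spectrum of W: the smaller
# the second-largest eigenvalue modulus (better-connected graph), the smaller
# the constant.
lam2 = sorted(np.abs(np.linalg.eigvals(W)))[-2]
print("second-largest eigenvalue modulus of W:", round(lam2, 3))

x = np.zeros((n, d))                       # one model copy per agent
for r in range(rounds):
    for _ in range(H):
        x = x - lr * (x - b)               # local gradient update at each agent
        x = W @ x                          # peer-to-peer parameter averaging
    S = rng.choice(n, size=m_part, replace=False)   # partial participation
    x[S] = x[S].mean(axis=0)               # server averages subset and broadcasts

print("mean distance to optimum:",
      np.linalg.norm(x - b.mean(axis=0), axis=1).mean())
```

In this toy, replacing the ring with a denser graph lowers the second-largest eigenvalue modulus of W and, per the analysis, shrinks the improved term in the bound.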
Related papers
- Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis [50.18156030818883]
Anomalies and missing data constitute a thorny problem in industrial applications.
Deep-learning-enabled anomaly detection has emerged as a critical direction.
The data collected on edge devices contain privacy-sensitive user information.
arXiv Detail & Related papers (2024-11-06T15:38:31Z)
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, or model difference), we reveal that transmission in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
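As a toy illustration of where such an aggregation error comes from (under an assumed additive-noise AirComp model, not necessarily the paper's), the server recovers only a noisy superposition of the transmitted updates:

```python
import numpy as np

rng = np.random.default_rng(1)
n, d, sigma = 10, 4, 0.1           # clients, dimension, channel noise std (assumed)
updates = rng.normal(size=(n, d))  # local models, gradients, or model differences

ideal = updates.mean(axis=0)                                  # error-free FedAvg
noisy_sum = updates.sum(axis=0) + sigma * rng.normal(size=d)  # AirComp superposition
air = noisy_sum / n                                           # server post-scaling

print("aggregation error norm:", np.linalg.norm(air - ideal))  # = ||noise|| / n
```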
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to accommodate the heterogeneous capabilities and data of edge devices.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
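A minimal sketch of this split, with assumed parameter names: only the "global" part is magnitude-pruned and averaged by the server, while the "personal" head stays on the device.

```python
import numpy as np

def make_model(d_in=16, d_hid=8, d_out=2, rng=np.random.default_rng(2)):
    return {
        "global.W1": rng.normal(size=(d_in, d_hid)),    # shared, prunable
        "personal.W2": rng.normal(size=(d_hid, d_out)), # device-specific head
    }

def prune_global(model, keep=0.5):
    """Magnitude-prune only the global part; personal parameters are untouched."""
    out = dict(model)
    for k, v in model.items():
        if k.startswith("global."):
            thresh = np.quantile(np.abs(v), 1.0 - keep)
            out[k] = np.where(np.abs(v) >= thresh, v, 0.0)
    return out

def server_aggregate(models):
    """Average only the global (shared) parameters across devices."""
    keys = [k for k in models[0] if k.startswith("global.")]
    avg = {k: np.mean([m[k] for m in models], axis=0) for k in keys}
    for m in models:
        m.update(avg)  # broadcast shared part; personal parts stay local

models = [prune_global(make_model()) for _ in range(4)]
server_aggregate(models)
```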
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Momentum Benefits Non-IID Federated Learning Simply and Provably [22.800862422479913]
Federated learning is a powerful paradigm for large-scale machine learning, but it faces challenges such as slow communication and substantial data heterogeneity across clients.
FedAvg and SCAFFOLD are two prominent algorithms designed to address these challenges.
This paper explores the utilization of momentum to enhance the performance of FedAvg and SCAFFOLD.
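One common way to realize this, shown in the sketch below, is server-side momentum over the averaged client updates; the paper's exact placement of momentum may differ.

```python
import numpy as np

rng = np.random.default_rng(3)
n, d, H, lr, beta = 5, 3, 2, 0.1, 0.9
b = rng.normal(size=(n, d))          # non-iid quadratics: f_i(x) = 0.5*||x - b_i||^2

x_global, m = np.zeros(d), np.zeros(d)
for _ in range(100):
    deltas = []
    for i in range(n):
        x = x_global.copy()
        for _ in range(H):
            x -= lr * (x - b[i])     # local steps on client i
        deltas.append(x_global - x)  # client's pseudo-gradient
    m = beta * m + np.mean(deltas, axis=0)  # server momentum buffer
    x_global -= m                           # momentum-corrected server update

print("distance to optimum:", np.linalg.norm(x_global - b.mean(axis=0)))
```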
arXiv Detail & Related papers (2023-06-28T18:52:27Z)
- CFLIT: Coexisting Federated Learning and Information Transfer [18.30671838758503]
We study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network.
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
arXiv Detail & Related papers (2022-07-26T13:17:28Z)
- Resource Allocation for Compression-aided Federated Learning with High Distortion Rate [3.7530276852356645]
We formulate an optimization problem for compression-aided FL that couples the distortion rate, the number of participating IoT devices, and the convergence rate.
By actively controlling the participating IoT devices, we can avoid the training divergence of compression-aided FL while maintaining communication efficiency.
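A toy sketch of this trade-off, using an assumed top-k compressor and a hypothetical distortion threshold as a stand-in for the paper's device-selection control:

```python
import numpy as np

def top_k(v, k):
    """Keep the k largest-magnitude entries; zero the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

rng = np.random.default_rng(4)
grads = rng.normal(size=(6, 20))                  # one gradient per IoT device
compressed = np.array([top_k(g, 5) for g in grads])
distortion = (np.linalg.norm(compressed - grads, axis=1)
              / np.linalg.norm(grads, axis=1))
selected = distortion < 0.8                       # assumed selection threshold
agg = (compressed[selected].mean(axis=0)
       if selected.any() else np.zeros_like(grads[0]))
print("per-device distortion:", np.round(distortion, 2))
```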
arXiv Detail & Related papers (2022-06-02T05:00:37Z)
- Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better [88.28293442298015]
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
We develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST)
FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network.
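A minimal sketch of the dynamic sparse idea, with an assumed magnitude-based drop and gradient-based grow rule (FedDST's exact mask-update rule may differ); only the masked weights, plus the mask, would need to be trained and communicated.

```python
import numpy as np

rng = np.random.default_rng(5)
d, density = 30, 0.2
w = rng.normal(size=d)
mask = np.zeros(d, bool)
mask[rng.choice(d, int(density * d), replace=False)] = True  # initial sub-network

for step in range(20):
    grad = rng.normal(size=d)            # stand-in for a local gradient
    w[mask] -= 0.1 * grad[mask]          # train only the sparse sub-network
    if step % 5 == 4:                    # periodically adjust the mask
        active = np.flatnonzero(mask)
        drop = active[np.argmin(np.abs(w[active]))]           # drop smallest weight
        grow_pool = np.flatnonzero(~mask)
        grow = grow_pool[np.argmax(np.abs(grad[grow_pool]))]  # grow by gradient
        mask[drop], mask[grow] = False, True
        w[grow] = 0.0                    # new connection starts at zero

print("active weights:", mask.sum(), "of", d)
```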
arXiv Detail & Related papers (2021-12-18T02:26:38Z)
- Spatio-Temporal Federated Learning for Massive Wireless Edge Networks [23.389249751372393]
An edge server and numerous mobile devices (clients) jointly learn a global model without transporting the huge amount of data collected by the mobile devices to the edge server.
The proposed FL approach exploits spatial and temporal correlations between learning updates from the different mobile devices scheduled to join STFL across training rounds.
An analytical framework of STFL is proposed and employed to study the learning capability of STFL via its convergence performance.
arXiv Detail & Related papers (2021-10-27T16:46:45Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
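A toy sketch of precoded over-the-air aggregation: each update is scaled to a power budget before the noisy superposition, and the scaling is inverted at the server. The specific scaling rule here is an assumption, not COTAF's time-varying precoder.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, P, sigma = 8, 10, 1.0, 0.05     # clients, dim, power budget, noise std
updates = rng.normal(size=(n, d))

alpha = np.sqrt(P) / max(np.linalg.norm(u) for u in updates)  # shared precoder
received = alpha * updates.sum(axis=0) + sigma * rng.normal(size=d)
estimate = received / (alpha * n)     # undo precoding and average

print("error vs. true mean:", np.linalg.norm(estimate - updates.mean(axis=0)))
```

The effective noise scales as sigma / (alpha * n), so a larger power budget (larger alpha) directly reduces the aggregation error.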
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.