FedDec: Peer-to-peer Aided Federated Learning
- URL: http://arxiv.org/abs/2306.06715v1
- Date: Sun, 11 Jun 2023 16:30:57 GMT
- Title: FedDec: Peer-to-peer Aided Federated Learning
- Authors: Marina Costantini, Giovanni Neglia, and Thrasyvoulos Spyropoulos
- Abstract summary: Federated learning (FL) has enabled training machine learning models exploiting the data of multiple agents without compromising privacy.
FL is known to be vulnerable to data heterogeneity, partial device participation, and infrequent communication with the server.
We present FedDec, an algorithm that interleaves peer-to-peer communication and parameter averaging between the local gradient updates of FL.
- Score: 15.952956981784219
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) has enabled training machine learning models
exploiting the data of multiple agents without compromising privacy. However,
FL is known to be vulnerable to data heterogeneity, partial device
participation, and infrequent communication with the server, which are
nonetheless three distinctive characteristics of this framework. While much of
the recent literature has tackled these weaknesses using different tools, only
a few works have explored the possibility of exploiting inter-agent
communication to improve FL's performance. In this work, we present FedDec, an
algorithm that interleaves peer-to-peer communication and parameter averaging
(similar to decentralized learning in networks) between the local gradient
updates of FL. We analyze the convergence of FedDec under the assumptions of
non-iid data distribution, partial device participation, and smooth and
strongly convex costs, and show that inter-agent communication alleviates the
negative impact of infrequent communication rounds with the server by reducing
the dependence on the number of local updates $H$ from $O(H^2)$ to $O(H)$.
Furthermore, our analysis reveals that the term improved in the bound is
multiplied by a constant that depends on the spectrum of the inter-agent
communication graph, and that vanishes quickly the more connected the network
is. We confirm the predictions of our theory in numerical simulations, where we
show that FedDec converges faster than FedAvg, and that the gains are greater
as either $H$ or the connectivity of the network increases.
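To make the interleaved update concrete, here is a minimal NumPy sketch of one FedDec-style server round in the setting above ($N$ agents, $H$ local steps, partial device participation). The names (feddec_round, the mixing matrix W, grads) are illustrative assumptions rather than the authors' implementation, and details such as the exact ordering of gossip and gradient steps and the handling of non-sampled agents are simplified.

```python
import numpy as np

def feddec_round(x, grads, W, H, lr, num_sampled, rng):
    """One illustrative FedDec-style round (a sketch, not the authors' code):
    H interleaved gossip + local SGD steps, then a server average over a
    sampled subset of agents."""
    N = x.shape[0]
    for _ in range(H):
        x = W @ x                            # peer-to-peer parameter averaging
        for i in range(N):
            x[i] -= lr * grads[i](x[i])      # local (stochastic) gradient update
    sampled = rng.choice(N, size=num_sampled, replace=False)
    x_bar = x[sampled].mean(axis=0)          # partial device participation
    return np.tile(x_bar, (N, 1))            # server broadcasts the average

# Toy usage: N=4 agents on a ring, strongly convex costs f_i(x) = ||x - b_i||^2 / 2.
N, d = 4, 3
rng = np.random.default_rng(0)
b = rng.standard_normal((N, d))
grads = [lambda x, bi=bi: x - bi for bi in b]   # grad f_i(x) = x - b_i
W = 0.5 * np.eye(N) + 0.25 * (np.roll(np.eye(N), 1, 0) + np.roll(np.eye(N), -1, 0))
x = np.zeros((N, d))
for _ in range(50):
    x = feddec_round(x, grads, W, H=5, lr=0.1, num_sampled=2, rng=rng)
```

Here the doubly stochastic mixing matrix $W$ encodes the inter-agent graph; consistent with the bound above, the better connected the graph, the smaller the constant multiplying the improved $O(H)$ term.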
Related papers
- How Robust is Federated Learning to Communication Error? A Comparison Study Between Uplink and Downlink Channels [13.885735785986164]
This paper investigates the robustness of federated learning to uplink and downlink communication errors.
It is shown that the uplink communication in FL can tolerate a higher bit error rate (BER) than downlink communication.
arXiv Detail & Related papers (2023-10-25T14:03:11Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, or model difference), we show that transmission in AirFedAvg may introduce an aggregation error.
In addition, we consider more practical signal processing schemes to improve communication efficiency, and extend the convergence analysis to the different forms of model aggregation error caused by these schemes.
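As a rough illustration of where this aggregation error comes from, the following sketch (an assumption for illustration, not the paper's scheme) mimics AirComp aggregation: simultaneously transmitted analog signals superpose in the channel, so the server recovers only a noisy average of the local updates.

```python
import numpy as np

def aircomp_aggregate(updates, noise_std, rng):
    """Hypothetical AirComp step. updates: (N, d) array of local updates
    (model, gradient, or model difference). The channel adds the signals;
    receiver noise makes the recovered average inexact."""
    superposed = updates.sum(axis=0)                 # over-the-air superposition
    noise = noise_std * rng.standard_normal(updates.shape[1])
    return (superposed + noise) / updates.shape[0]   # noisy mean: the aggregation error
```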
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy, but data heterogeneity and devices' limited computation and communication capabilities pose challenges.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part, shared with all devices (with model pruning) to learn data representations, and a personalized part to be fine-tuned for a specific device.
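A minimal sketch of such a split, assuming magnitude-based pruning for the shared part (the names magnitude_prune and SplitModel are hypothetical, not from the paper):

```python
import numpy as np

def magnitude_prune(w, keep_ratio):
    """Zero out the smallest-magnitude weights, keeping a keep_ratio fraction."""
    threshold = np.quantile(np.abs(w), 1.0 - keep_ratio)
    return np.where(np.abs(w) >= threshold, w, 0.0)

class SplitModel:
    """Hypothetical split: a global part shared with all devices (pruned
    before transmission) and a personalized part fine-tuned per device."""
    def __init__(self, global_w, personal_w):
        self.global_w = global_w
        self.personal_w = personal_w

    def shared_update(self, keep_ratio=0.5):
        # What a device would send to the server: the pruned global part only.
        return magnitude_prune(self.global_w, keep_ratio)
```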
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Momentum Benefits Non-IID Federated Learning Simply and Provably [22.800862422479913]
Federated learning is a powerful paradigm for large-scale machine learning, but it faces challenges such as non-iid data and partial client participation.
FedAvg and SCAFFOLD are two prominent algorithms that address these challenges.
This paper explores the utilization of momentum to enhance the performance of FedAvg and SCAFFOLD.
arXiv Detail & Related papers (2023-06-28T18:52:27Z)
- PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to that of HFL without pruning, while reducing communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- CFLIT: Coexisting Federated Learning and Information Transfer [18.30671838758503]
We study the coexistence of over-the-air FL and traditional information transfer (IT) in a mobile edge network.
We propose a coexisting federated learning and information transfer (CFLIT) communication framework, where the FL and IT devices share the wireless spectrum in an OFDM system.
arXiv Detail & Related papers (2022-07-26T13:17:28Z)
- Resource Allocation for Compression-aided Federated Learning with High Distortion Rate [3.7530276852356645]
We formulate an optimization problem for compression-aided FL that couples the distortion rate, the number of participating IoT devices, and the convergence rate.
By actively controlling the participating IoT devices, we can avoid training divergence of compression-aided FL while maintaining communication efficiency.
arXiv Detail & Related papers (2022-06-02T05:00:37Z)
- Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better [88.28293442298015]
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
We develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST).
FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network.
arXiv Detail & Related papers (2021-12-18T02:26:38Z)
- Spatio-Temporal Federated Learning for Massive Wireless Edge Networks [23.389249751372393]
An edge server and numerous mobile devices (clients) jointly learn a global model without transporting the huge amounts of data collected by the mobile devices to the edge server.
The proposed spatio-temporal FL (STFL) approach exploits the spatial and temporal correlations between learning updates from different mobile devices scheduled to join STFL in various training rounds.
An analytical framework of STFL is proposed and employed to study the learning capability of STFL via its convergence performance.
arXiv Detail & Related papers (2021-10-27T16:46:45Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.