Decentralized Federated Learning: Balancing Communication and Computing Costs
- URL: http://arxiv.org/abs/2107.12048v1
- Date: Mon, 26 Jul 2021 09:09:45 GMT
- Title: Decentralized Federated Learning: Balancing Communication and Computing Costs
- Authors: Wei Liu, Li Chen, and Wenyi Zhang
- Abstract summary: Decentralized federated learning (DFL) is a powerful framework of distributed machine learning.
We propose a general decentralized federated learning framework to strike a balance between communication-efficiency and convergence performance.
Experimental results on the MNIST and CIFAR-10 datasets illustrate the superiority of DFL over traditional decentralized SGD methods.
- Score: 21.694468026280806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized federated learning (DFL) is a powerful framework of distributed
machine learning and decentralized stochastic gradient descent (SGD) is a
driving engine for DFL. The performance of decentralized SGD is jointly
influenced by communication-efficiency and convergence rate. In this paper, we
propose a general decentralized federated learning framework to strike a
balance between communication-efficiency and convergence performance. The
proposed framework performs both multiple local updates and multiple inter-node
communications periodically, unifying traditional decentralized SGD methods. We
establish strong convergence guarantees for the proposed DFL algorithm without
assuming a convex objective function. Balancing the numbers of communication and
computation rounds is essential for optimizing decentralized federated learning
under constrained communication and computation resources. To further improve
the communication-efficiency of DFL, compressed communication is applied,
yielding DFL with compressed communication (C-DFL). The proposed C-DFL exhibits
linear convergence for strongly convex objectives. Experimental results on the
MNIST and CIFAR-10 datasets illustrate the superiority of DFL over traditional
decentralized SGD methods and show that C-DFL further enhances
communication-efficiency.
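The core mechanism described in the abstract is a periodic schedule that alternates several local SGD steps (computation) on each node with several neighbour-averaging rounds (communication) over the network topology. The sketch below illustrates that balance on a toy least-squares problem; the symbol names (tau local steps and q gossip rounds per period), the ring mixing matrix W, and the synthetic data are illustrative assumptions, not details taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression data split across n_nodes workers (illustrative only).
n_nodes, dim, samples = 8, 10, 200
w_true = rng.normal(size=dim)
X = [rng.normal(size=(samples, dim)) for _ in range(n_nodes)]
y = [Xi @ w_true + 0.1 * rng.normal(size=samples) for Xi in X]

def grad(w, Xi, yi):
    # Minibatch gradient of the local least-squares loss 0.5 * ||Xi w - yi||^2 / m.
    idx = rng.choice(len(yi), size=32, replace=False)
    Xb, yb = Xi[idx], yi[idx]
    return Xb.T @ (Xb @ w - yb) / len(yb)

# Ring topology: each node averages with its two neighbours (doubly stochastic W).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = W[i, (i - 1) % n_nodes] = W[i, (i + 1) % n_nodes] = 1.0 / 3

def dfl(tau=5, q=2, periods=50, lr=0.01):
    # Each period: tau local SGD steps per node, then q inter-node gossip rounds.
    w = np.zeros((n_nodes, dim))            # one local model per node
    for _ in range(periods):
        for _ in range(tau):                # multiple local updates (computation)
            for i in range(n_nodes):
                w[i] -= lr * grad(w[i], X[i], y[i])
        for _ in range(q):                  # multiple communication rounds
            w = W @ w                       # neighbour averaging via mixing matrix W
    return w

w_final = dfl()
print("mean distance to w_true:", np.mean(np.linalg.norm(w_final - w_true, axis=1)))

In this sketch, tau = 1 and q = 1 per period recovers standard decentralized SGD, while larger tau trades communication for local computation. C-DFL would additionally compress the models exchanged in the gossip step (for example by sparsification or quantization) before averaging, which is not shown here.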
Related papers
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Decentralized Personalized Federated Learning based on a Conditional Sparse-to-Sparser Scheme [5.5058010121503]
Decentralized Federated Learning (DFL) has become popular due to its robustness and avoidance of centralized coordination.
We propose a novel sparse-to-sparser training scheme: DA-DPFL.
Our experiments showcase that DA-DPFL substantially outperforms DFL baselines in test accuracy, while achieving up to a 5-fold reduction in energy costs.
arXiv Detail & Related papers (2024-04-24T16:03:34Z) - OCD-FL: A Novel Communication-Efficient Peer Selection-based Decentralized Federated Learning [2.203783085755103]
We propose an opportunistic communication-efficient decentralized federated learning (OCD-FL) scheme.
OCD-FL consists of a systematic FL peer selection for collaboration, aiming to achieve maximum FL knowledge gain while reducing energy consumption.
Experimental results demonstrate that OCD-FL achieves similar or better performance than fully collaborative FL, while reducing consumed energy by at least 30% and up to 80%.
arXiv Detail & Related papers (2024-03-06T20:34:08Z) - Communication-Efficient Decentralized Federated Learning via One-Bit
Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
arXiv Detail & Related papers (2023-08-31T12:22:40Z) - Communication Resources Constrained Hierarchical Federated Learning for
End-to-End Autonomous Driving [67.78611905156808]
This paper proposes an optimization-based Communication Resource Constrained Hierarchical Federated Learning (CRCHFL) framework.
Results show that the proposed CRCHFL both accelerates the convergence rate and enhances the generalization of the federated learning autonomous driving model.
arXiv Detail & Related papers (2023-06-28T12:44:59Z) - Decentralized Federated Learning: A Survey and Perspective [45.81975053649379]
Decentralized FL (DFL) is a decentralized network architecture that eliminates the need for a central server.
DFL enables direct communication between clients, resulting in significant savings in communication resources.
arXiv Detail & Related papers (2023-06-02T15:12:58Z) - Communication-Efficient Consensus Mechanism for Federated Reinforcement
Learning [20.891460617583302]
We show that FL can improve the policy performance of IRL in terms of training efficiency and stability.
To reach a good balance between improving the model's convergence performance and reducing the required communication and computation overheads, this paper proposes a system utility function.
arXiv Detail & Related papers (2022-01-30T04:04:24Z) - Finite-Time Consensus Learning for Decentralized Optimization with
Nonlinear Gossiping [77.53019031244908]
We present a novel decentralized learning framework based on nonlinear gossiping (NGO) that enjoys an appealing finite-time consensus property to achieve better synchronization.
Our analysis of how communication delay and randomized chats affect learning further enables the derivation of practical variants.
arXiv Detail & Related papers (2021-11-04T15:36:25Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning techniques, has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Wireless Communications for Collaborative Federated Learning [160.82696473996566]
Internet of Things (IoT) devices may not be able to transmit their collected data to a central controller for training machine learning models.
Google's seminal FL algorithm requires all devices to be directly connected with a central controller.
This paper introduces a novel FL framework, called collaborative FL (CFL), which enables edge devices to implement FL with less reliance on a central controller.
arXiv Detail & Related papers (2020-06-03T20:00:02Z)