Communication-Efficient Federated Optimization over Semi-Decentralized Networks
- URL: http://arxiv.org/abs/2311.18787v2
- Date: Thu, 11 Jan 2024 19:25:13 GMT
- Title: Communication-Efficient Federated Optimization over Semi-Decentralized Networks
- Authors: He Wang, Yuejie Chi
- Abstract summary: Communication efficiency is one of the most challenging bottlenecks in large-scale networks.
We study the communication efficiency under semi-decentralized communication protocol, in which agents can perform both agent-to-agent and agent-to-server communication.
- Score: 42.11743453542266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In large-scale federated and decentralized learning, communication efficiency
is one of the most challenging bottlenecks. While gossip communication -- where
agents can exchange information with their connected neighbors -- is more
cost-effective than communicating with the remote server, it often requires a
greater number of communication rounds, especially for large and sparse
networks. To tackle the trade-off, we examine the communication efficiency
under a semi-decentralized communication protocol, in which agents can perform
both agent-to-agent and agent-to-server communication in a probabilistic
manner. We design a tailored communication-efficient algorithm over
semi-decentralized networks, referred to as PISCO, which inherits the
robustness to data heterogeneity thanks to gradient tracking and allows
multiple local updates for saving communication. We establish the convergence
rate of PISCO for nonconvex problems and show that PISCO enjoys a linear
speedup in terms of the number of agents and local updates. Our numerical
results highlight the superior communication efficiency of PISCO and its
resilience to data heterogeneity and various network topologies.
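The abstract describes PISCO's ingredients (probabilistic agent-to-agent versus agent-to-server communication, gradient tracking, multiple local updates) without giving the exact recursion, so the following is only a minimal NumPy sketch of one such round. The function name `semi_decentralized_round`, the doubly stochastic mixing matrix `W`, modeling the server step as exact averaging, and the placement of the gradient-tracking correction are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def semi_decentralized_round(X, Y, grads, W, p_server, eta, K, rng):
    """One communication round for n agents with d-dimensional models.

    X        : (n, d) agent models
    Y        : (n, d) gradient-tracking variables (local estimates of the global gradient)
    grads    : callable, grads(X) -> (n, d) stacked local gradients
    W        : (n, n) doubly stochastic mixing matrix for agent-to-agent gossip
    p_server : probability that this round uses agent-to-server averaging
    eta      : local step size
    K        : number of local updates per communication round
    rng      : numpy random Generator
    """
    n = X.shape[0]
    g_old = grads(X)

    # Multiple local updates driven by the tracking variable Y, which
    # approximates the network-wide gradient and so mitigates data heterogeneity.
    for _ in range(K):
        X = X - eta * Y

    # Probabilistic communication: with probability p_server every agent talks
    # to the server (exact averaging); otherwise agents gossip with neighbors.
    if rng.random() < p_server:
        X = np.tile(X.mean(axis=0), (n, 1))
        Y = np.tile(Y.mean(axis=0), (n, 1))
    else:
        X = W @ X
        Y = W @ Y

    # Gradient-tracking update: fold in the change of the local gradients.
    g_new = grads(X)
    Y = Y + g_new - g_old
    return X, Y
```

In this sketch, `p_server` directly encodes the trade-off discussed above: larger values buy faster consensus at the cost of more expensive agent-to-server rounds, while the `K` local updates amortize each round of communication.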
Related papers
- Performance-Aware Self-Configurable Multi-Agent Networks: A Distributed Submodular Approach for Simultaneous Coordination and Network Design [3.5527561584422465]
We present the AlterNAting COordination and Network-Design Algorithm (Anaconda).
Anaconda is a scalable algorithm that also enjoys near-optimality guarantees.
We demonstrate Anaconda in simulated scenarios of area monitoring and compare it with a state-of-the-art algorithm.
arXiv Detail & Related papers (2024-09-02T18:11:33Z) - Overlay-based Decentralized Federated Learning in Bandwidth-limited Networks [3.9162099309900835]
Decentralized federated learning (DFL) has the promise of boosting the deployment of artificial intelligence (AI) by directly learning across distributed agents without centralized coordination.
Most existing solutions were based on the simplistic assumption that neighboring agents are physically adjacent in the underlying communication network.
We jointly design the communication demands and the communication schedule for overlay-based DFL in bandwidth-limited networks without requiring explicit cooperation from the underlying network.
arXiv Detail & Related papers (2024-08-08T18:05:11Z) - Decentralized Learning over Wireless Networks with Broadcast-Based Subgraph Sampling [36.99249604183772]
This work centers on the communication aspects of decentralized learning over wireless networks, using consensus-based decentralized stochastic gradient descent (D-SGD).
Considering the actual communication cost or delay caused by in-network information exchange in an iterative process, our goal is to achieve fast convergence of the algorithm measured by improvement per transmission slot.
We propose BASS, an efficient communication framework for D-SGD over wireless networks with broadcast transmission and probabilistic subgraph sampling.
arXiv Detail & Related papers (2023-10-24T18:15:52Z) - Multi-Agent Reinforcement Learning Based on Representational Communication for Large-Scale Traffic Signal Control [13.844458247041711]
Traffic signal control (TSC) is a challenging problem within intelligent transportation systems.
We propose a communication-based MARL framework for large-scale TSC.
Our framework allows each agent to learn a communication policy that dictates "which" part of the message is sent "to whom".
arXiv Detail & Related papers (2023-10-03T21:06:51Z) - Decentralized Learning over Wireless Networks: The Effect of Broadcast with Random Access [56.91063444859008]
We investigate the impact of broadcast transmission and probabilistic random access policy on the convergence performance of D-SGD.
Our results demonstrate that optimizing the access probability to maximize the expected number of successful links is a highly effective strategy for accelerating the system convergence.
arXiv Detail & Related papers (2023-05-12T10:32:26Z) - DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose Dis-PFL, a novel personalized federated learning framework built on a decentralized (peer-to-peer) communication protocol.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
arXiv Detail & Related papers (2022-06-01T02:20:57Z) - Multi-agent Communication with Graph Information Bottleneck under Limited Bandwidth (a position paper) [92.11330289225981]
In many real-world scenarios, communication can be expensive and the bandwidth of the multi-agent system is subject to certain constraints.
Redundant messages that occupy communication resources can block the transmission of informative messages and thus jeopardize performance.
We propose a novel multi-agent communication module, CommGIB, which effectively compresses the structure information and node information in the communication graph to deal with bandwidth-constrained settings.
arXiv Detail & Related papers (2021-12-20T07:53:44Z) - Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z) - Communication-Efficient and Distributed Learning Over Wireless Networks: Principles and Applications [55.65768284748698]
Machine learning (ML) is a promising enabler for the fifth generation (5G) communication systems and beyond.
This article aims to provide a holistic overview of relevant communication and ML principles, and thereby present communication-efficient and distributed learning frameworks with selected use cases.
arXiv Detail & Related papers (2020-08-06T12:37:14Z)