Event-Triggered Gossip for Distributed Learning
- URL: http://arxiv.org/abs/2602.19116v1
- Date: Sun, 22 Feb 2026 10:13:43 GMT
- Title: Event-Triggered Gossip for Distributed Learning
- Authors: Zhiyuan Zhai, Xiaojun Yuan, Wei Ni, Xin Wang, Rui Zhang, Geoffrey Ye Li
- Abstract summary: We develop a new event-triggered gossip framework for distributed learning to reduce inter-node communication. The framework reduces cumulative point-to-point transmissions by 71.61% with only a marginal performance loss compared with the conventional full-communication baseline.
- Score: 61.70659996356528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While distributed learning offers a new learning paradigm for distributed networks with no central coordination, it is constrained by the communication bottleneck between nodes. We develop a new event-triggered gossip framework for distributed learning to reduce inter-node communication overhead. The framework introduces an adaptive communication control mechanism that enables each node to autonomously decide, in a fully decentralized fashion, when to exchange model information with its neighbors based on local model deviations. We analyze the ergodic convergence of the proposed framework under nonconvex objectives and interpret the convergence guarantees under different triggering conditions. Simulation results show that the proposed framework achieves substantially lower communication overhead than state-of-the-art distributed learning methods, reducing cumulative point-to-point transmissions by 71.61% with only a marginal performance loss compared with the conventional full-communication baseline.
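The triggering rule is the heart of the framework: each node monitors how far its current model has drifted from the copy its neighbors last received, and broadcasts only when that deviation crosses a threshold. Below is a minimal Python sketch of this idea on a ring of toy quadratic objectives; the threshold `delta`, the mixing weight `mix`, and the Euclidean deviation norm are illustrative assumptions, not the paper's exact triggering condition.

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim, delta, lr, mix = 8, 10, 0.05, 0.1, 0.5

# Ring topology: each node gossips with its two neighbors.
neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}

x = rng.normal(size=(n_nodes, dim))        # local models
last_sent = x.copy()                       # last broadcast copy held by neighbors
targets = rng.normal(size=(n_nodes, dim))  # toy local optima: f_i(x) = 0.5*||x - t_i||^2
transmissions = 0

for step in range(200):
    x -= lr * (x - targets)  # local gradient step

    # Event trigger: broadcast only if the local deviation exceeds the threshold.
    for i in range(n_nodes):
        if np.linalg.norm(x[i] - last_sent[i]) > delta:
            last_sent[i] = x[i].copy()
            transmissions += len(neighbors[i])  # one point-to-point send per neighbor

    # Gossip averaging against the neighbors' last *broadcast* states.
    x_new = x.copy()
    for i in range(n_nodes):
        avg = np.mean([last_sent[j] for j in neighbors[i]], axis=0)
        x_new[i] = (1 - mix) * x[i] + mix * avg
    x = x_new

print("point-to-point transmissions:", transmissions)
print("residual disagreement:", np.linalg.norm(x - x.mean(axis=0)))
```

Setting `delta = 0` recovers the full-communication baseline; raising it trades consensus accuracy for fewer transmissions, which is the kind of trade-off behind the reported 71.61% saving.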
Related papers
- Local adapt-then-combine algorithms for distributed nonsmooth optimization: Achieving provable communication acceleration [50.67878993903822]
We propose a communication-efficient Adapt-Then-Combine (ATC) framework, FlexATC, unifying numerous ATC-based distributed algorithms. We show for the first time that local updates provably lead to communication acceleration for ATC-based distributed algorithms.
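For orientation, here is a generic adapt-then-combine loop with local updates in Python (a sketch of the pattern only, not FlexATC itself): each node runs `tau` local gradient steps (adapt), then performs one neighbor-averaging round with a doubly stochastic mixing matrix (combine), cutting communication by roughly a factor of `tau`.

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, dim, lr, tau = 6, 5, 0.1, 5   # tau local steps per combine round

# Doubly stochastic mixing matrix for a ring topology.
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 0.5
    W[i, (i - 1) % n_nodes] = 0.25
    W[i, (i + 1) % n_nodes] = 0.25

x = rng.normal(size=(n_nodes, dim))
targets = rng.normal(size=(n_nodes, dim))  # toy quadratics: f_i(x) = 0.5*||x - t_i||^2

for rnd in range(40):
    for _ in range(tau):   # adapt: tau local gradient steps, no communication
        x -= lr * (x - targets)
    x = W @ x              # combine: one neighbor-averaging round

print("network average vs. global optimum:",
      np.linalg.norm(x.mean(axis=0) - targets.mean(axis=0)))
```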
arXiv Detail & Related papers (2026-02-18T02:47:05Z) - Stabilizing Decentralized Federated Fine-Tuning via Topology-Aware Alternating LoRA [20.00589625873043]
TAD-LoRA is a serverless variant of federated learning. We show that TAD-LoRA is competitive in strongly connected topologies and delivers clear gains under moderately and weakly connected topologies.
arXiv Detail & Related papers (2026-01-31T01:57:53Z) - On the Surprising Effectiveness of a Single Global Merging in Decentralized Learning [26.210904912836355]
We study how communication should be scheduled over time to improve global generalization. We find that fully connected communication at the final step, implemented by a single global merging, can significantly improve the generalization performance.
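A toy Python experiment illustrating the finding (the linear-regression setup and noise level are assumptions for illustration): each node trains purely locally, and a single parameter average at the end already beats the individual models.

```python
import numpy as np

rng = np.random.default_rng(2)
n_nodes, dim, lr, steps = 10, 20, 0.05, 300

w_true = rng.normal(size=dim)            # shared ground truth
models = rng.normal(size=(n_nodes, dim))

for i in range(n_nodes):
    A = rng.normal(size=(50, dim))       # each node's private data
    y = A @ w_true + 0.1 * rng.normal(size=50)
    for _ in range(steps):               # purely local training, no communication
        g = A.T @ (A @ models[i] - y) / len(y)
        models[i] -= lr * g

# Single global merging at the final step.
merged = models.mean(axis=0)

local_err = np.mean([np.linalg.norm(m - w_true) for m in models])
print("avg local error:", local_err)
print("merged error:   ", np.linalg.norm(merged - w_true))
```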
arXiv Detail & Related papers (2025-07-09T04:56:56Z) - Latent Diffusion Model Based Denoising Receiver for 6G Semantic Communication: From Stochastic Differential Theory to Application [11.385703484113552]
We propose a novel semantic communication framework empowered by generative artificial intelligence (GAI). The latent diffusion model (LDM)-based framework combines a variational autoencoder for semantic feature extraction. The proposed system is training-free, supports zero-shot generalization, and achieves superior performance under low-SNR and out-of-distribution conditions.
arXiv Detail & Related papers (2025-06-06T03:20:32Z) - Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, or model difference), we reveal that transmission in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
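The key property of AirComp is that simultaneous analog transmissions superpose in the channel, so the receiver obtains the sum of all local updates in one channel use, at the cost of channel noise entering the aggregate. A minimal sketch of one such aggregation step under an idealized additive-Gaussian channel (the paper's signal-processing schemes and error models are more elaborate):

```python
import numpy as np

rng = np.random.default_rng(3)
n_devices, dim, noise_std = 20, 8, 0.05

local_updates = rng.normal(size=(n_devices, dim))  # e.g. local gradients

# Over-the-air aggregation: simultaneous transmissions superpose into their
# sum, and the receiver sees that sum corrupted by channel noise.
received = local_updates.sum(axis=0) + noise_std * rng.normal(size=dim)
air_avg = received / n_devices                     # AirFedAvg aggregate

exact_avg = local_updates.mean(axis=0)
print("aggregation error:", np.linalg.norm(air_avg - exact_avg))
```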
arXiv Detail & Related papers (2023-10-16T05:49:28Z) - Communication-Efficient Zeroth-Order Distributed Online Optimization:
Algorithm, Theory, and Applications [9.045332526072828]
This paper focuses on a multi-agent zeroth-order online optimization problem in a federated learning setting for target tracking.
The proposed solution is further analyzed in terms of errors in two relevant applications.
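Zeroth-order methods replace gradients with estimates built purely from function evaluations. A standard two-point estimator along a random direction, sketched below, conveys the idea (a generic construction; the paper's federated target-tracking setting adds communication and online structure on top):

```python
import numpy as np

rng = np.random.default_rng(4)

def zo_gradient(f, x, mu=1e-3):
    """Two-point zeroth-order gradient estimate along a random direction."""
    u = rng.normal(size=x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

# Example: minimise f(x) = ||x - 3||^2 without ever calling its gradient.
f = lambda x: np.sum((x - 3.0) ** 2)
x = np.zeros(4)
for _ in range(2000):
    x -= 0.05 * zo_gradient(f, x)
print("solution:", x)   # approaches [3, 3, 3, 3]
```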
arXiv Detail & Related papers (2023-06-09T03:51:45Z) - Networked Communication for Decentralised Agents in Mean-Field Games [59.01527054553122]
We introduce networked communication to the mean-field game framework. We prove that our architecture has sample guarantees bounded between those of the centralised- and independent-learning cases. We show that our networked approach has significant advantages over both alternatives in terms of robustness to update failures and to changes in population size.
arXiv Detail & Related papers (2023-06-05T10:45:39Z) - Fundamental Limits of Communication Efficiency for Model Aggregation in
Distributed Learning: A Rate-Distortion Approach [54.311495894129585]
We study the limit of communication cost of model aggregation in distributed learning from a rate-distortion perspective.
It is found that the communication gain by exploiting the correlation between worker nodes is significant for SignSGD.
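SignSGD transmits one bit per gradient coordinate and aggregates by majority vote, which is why inter-worker correlation matters for its communication cost. A minimal sketch of the compression and aggregation step (the correlation model below is an assumption for illustration; the rate-distortion analysis itself is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(5)
n_workers, dim = 15, 6

true_grad = rng.normal(size=dim)
# Correlated worker gradients: a shared signal plus individual noise.
grads = true_grad + 0.8 * rng.normal(size=(n_workers, dim))

signs = np.sign(grads)                 # 1 bit per coordinate per worker
majority = np.sign(signs.sum(axis=0))  # server-side majority vote

print("true signs:   ", np.sign(true_grad).astype(int))
print("majority vote:", majority.astype(int))
```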
arXiv Detail & Related papers (2022-06-28T13:10:40Z) - Decentralized Event-Triggered Federated Learning with Heterogeneous
Communication Thresholds [12.513477328344255]
We propose a novel methodology for distributed model aggregations via asynchronous, event-triggered consensus iterations over a network graph topology.
We demonstrate that our methodology achieves the globally optimal learning model under standard assumptions in distributed learning and graph consensus literature.
arXiv Detail & Related papers (2022-04-07T20:35:37Z) - Finite-Time Consensus Learning for Decentralized Optimization with
Nonlinear Gossiping [77.53019031244908]
We present a novel decentralized learning framework based on nonlinear gossiping (NGO) that enjoys an appealing finite-time consensus property to achieve better synchronization.
Our analysis on how communication delay and randomized chats affect learning further enables the derivation of practical variants.
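As a rough picture of what a nonlinear gossip update can look like, the sketch below couples neighbor disagreements through a fractional power, a classic rule from finite-time consensus in the control literature; whether this matches NGO's specific nonlinearity is not claimed, and the step size `eps` is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(6)
n_nodes, dim, eps = 8, 3, 0.05

neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}
x = rng.normal(size=(n_nodes, dim))

for step in range(300):
    upd = np.zeros_like(x)
    for i in range(n_nodes):
        for j in neighbors[i]:
            d = x[j] - x[i]
            # Fractional-power coupling of the disagreement, a classic
            # finite-time consensus rule (applied coordinate-wise).
            upd[i] += np.sign(d) * np.abs(d) ** 0.5
    x += eps * upd

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))
```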
arXiv Detail & Related papers (2021-11-04T15:36:25Z) - Distributed ADMM with Synergetic Communication and Computation [39.930150618785355]
We propose a novel distributed alternating direction method of multipliers (ADMM) algorithm with synergetic communication and computation.
In the proposed algorithm, each node interacts with only a subset of its neighboring nodes, whose number is progressively determined by a search procedure.
We prove the convergence of the proposed algorithm and provide an upper bound on the convergence variance introduced by randomness.
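For orientation, a textbook consensus-ADMM loop on quadratic local objectives is sketched below in its centralized form with a global consensus variable `z`; the paper's algorithm is fully distributed and restricts each node to a searched subset of neighbors, which this sketch does not model.

```python
import numpy as np

rng = np.random.default_rng(7)
n_nodes, dim, rho = 6, 4, 1.0

targets = rng.normal(size=(n_nodes, dim))  # f_i(x) = 0.5 * ||x - t_i||^2
x = np.zeros((n_nodes, dim))
u = np.zeros((n_nodes, dim))               # scaled dual variables
z = np.zeros(dim)                          # consensus variable

for _ in range(50):
    # x-update: closed form of argmin f_i(x) + (rho/2)*||x - z + u_i||^2.
    x = (targets + rho * (z - u)) / (1 + rho)
    z = (x + u).mean(axis=0)               # z-update (consensus)
    u = u + x - z                          # dual ascent

print("consensus solution:", z)
print("true minimiser:    ", targets.mean(axis=0))
```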
arXiv Detail & Related papers (2020-09-29T08:36:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.