Diffusion Learning with Partial Agent Participation and Local Updates
- URL: http://arxiv.org/abs/2505.11307v1
- Date: Fri, 16 May 2025 14:33:49 GMT
- Title: Diffusion Learning with Partial Agent Participation and Local Updates
- Authors: Elsa Rizk, Kun Yuan, Ali H. Sayed
- Abstract summary: Diffusion learning is a framework that endows edge devices with advanced intelligence. This paper investigates an enhanced diffusion learning approach incorporating local updates and partial agent participation. We prove that the resulting algorithm is stable in the mean-square error sense and provide a tight analysis of its Mean-Square-Deviation (MSD) performance.
- Score: 42.04873382667665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion learning is a framework that endows edge devices with advanced intelligence. By processing and analyzing data locally and allowing each agent to communicate with its immediate neighbors, diffusion effectively protects the privacy of edge devices, enables real-time response, and reduces reliance on central servers. However, traditional diffusion learning relies on communication at every iteration, leading to communication overhead, especially with large learning models. Furthermore, the inherent volatility of edge devices, stemming from power outages or signal loss, poses challenges to reliable communication between neighboring agents. To mitigate these issues, this paper investigates an enhanced diffusion learning approach incorporating local updates and partial agent participation. Local updates will curtail communication frequency, while partial agent participation will allow for the inclusion of agents based on their availability. We prove that the resulting algorithm is stable in the mean-square error sense and provide a tight analysis of its Mean-Square-Deviation (MSD) performance. Various numerical experiments are conducted to illustrate our theoretical findings.
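As a rough illustration (not the paper's exact recursion), the sketch below implements an adapt-then-combine diffusion step with local updates and partial agent participation; the least-squares losses, ring topology, uniform combination weights, and Bernoulli availability model are all illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)
K, d, T_local, mu, p = 10, 5, 4, 0.01, 0.7  # agents, dim, local steps, step size, availability prob.

# Hypothetical local least-squares tasks: agent k minimizes ||A_k w - b_k||^2.
A = [rng.standard_normal((20, d)) for _ in range(K)]
w_star = rng.standard_normal(d)
b = [A_k @ w_star + 0.1 * rng.standard_normal(20) for A_k in A]

# Ring topology: each agent talks only to its two immediate neighbors (and itself).
neighbors = [{k, (k - 1) % K, (k + 1) % K} for k in range(K)]

w = [np.zeros(d) for _ in range(K)]
for _ in range(500):
    active = {k for k in range(K) if rng.random() < p}   # partial agent participation
    psi = [wk.copy() for wk in w]
    for k in active:                                     # adapt: several local updates
        for _ in range(T_local):
            psi[k] -= mu * 2 * A[k].T @ (A[k] @ psi[k] - b[k]) / len(b[k])
    new_w = []
    for k in range(K):                                   # combine: only available neighbors
        avail = [l for l in neighbors[k] if l in active] or [k]
        new_w.append(sum(psi[l] for l in avail) / len(avail))
    w = new_w

print("mean deviation from w*:", np.mean([np.linalg.norm(wk - w_star) for wk in w]))
```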
Related papers
- A Single Merging Suffices: Recovering Server-based Learning Performance in Decentralized Learning [17.386971981099588]
We study how communication should be scheduled over time, including determining when and how frequently devices synchronize. We find that fully connected communication at the final step, implemented by a single global merging, is sufficient to match the performance of server-based training.
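A minimal sketch of the idea, under the simplifying assumption that agents train purely locally and merge parameters exactly once at the end:
```python
import numpy as np

rng = np.random.default_rng(1)
K, d = 8, 5
X = [rng.standard_normal((50, d)) for _ in range(K)]
w_true = rng.standard_normal(d)
y = [Xk @ w_true + 0.1 * rng.standard_normal(50) for Xk in X]

models = []
for k in range(K):                       # independent local training, no communication
    w = np.zeros(d)
    for _ in range(300):
        w -= 0.05 * 2 * X[k].T @ (X[k] @ w - y[k]) / len(y[k])
    models.append(w)

w_merged = np.mean(models, axis=0)       # the single global merge at the final step
print("deviation after one merge:", np.linalg.norm(w_merged - w_true))
```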
arXiv Detail & Related papers (2025-07-09T04:56:56Z)
- Collaborative Value Function Estimation Under Model Mismatch: A Federated Temporal Difference Analysis [55.13545823385091]
Federated reinforcement learning (FedRL) enables collaborative learning while preserving data privacy by preventing direct data exchange between agents. In real-world applications, each agent may experience slightly different transition dynamics, leading to inherent model mismatches. We show that even moderate levels of information sharing can significantly mitigate environment-specific errors.
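The following toy construction (not the paper's algorithm) conveys the flavor of federated TD(0) under model mismatch: agents run TD updates on perturbed copies of a nominal chain and periodically average their value estimates.
```python
import numpy as np

rng = np.random.default_rng(2)
S, N, gamma, alpha = 5, 4, 0.9, 0.1      # states, agents, discount, step size
P0 = rng.dirichlet(np.ones(S), size=S)   # nominal transition kernel
r = rng.standard_normal(S)               # shared per-state reward

# Each agent samples from a slightly perturbed kernel (model mismatch).
P = [0.9 * P0 + 0.1 * rng.dirichlet(np.ones(S), size=S) for _ in range(N)]

V = np.zeros((N, S))
s = np.zeros(N, dtype=int)
for t in range(20000):
    for k in range(N):                   # one local TD(0) step per agent
        s2 = rng.choice(S, p=P[k][s[k]])
        V[k, s[k]] += alpha * (r[s[k]] + gamma * V[k, s2] - V[k, s[k]])
        s[k] = s2
    if t % 50 == 0:                      # periodic averaging (information sharing)
        V[:] = V.mean(axis=0)

print("agent-0 value estimate:", V[0].round(2))
```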
arXiv Detail & Related papers (2025-03-21T18:06:28Z)
- Distributed Event-Based Learning via ADMM [11.461617927469316]
We consider a distributed learning problem, where agents minimize a global objective function by exchanging information over a network. Our approach has two distinct features: (i) it substantially reduces communication by triggering communication only when necessary, and (ii) it is agnostic to the data distribution among the different agents.
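A generic sketch of the event-triggering idea (the paper embeds it in an ADMM recursion; the rule and threshold below are illustrative):
```python
import numpy as np

# An agent re-transmits its state only when it has drifted sufficiently far
# from the last transmitted copy; otherwise neighbors keep using the stale one.
def maybe_broadcast(x_now, x_last_sent, threshold=1e-2):
    if np.linalg.norm(x_now - x_last_sent) > threshold:
        return x_now.copy(), True        # transmit a fresh state
    return x_last_sent, False            # no communication this round
```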
arXiv Detail & Related papers (2024-05-17T08:30:28Z)
- Effective Intrusion Detection in Heterogeneous Internet-of-Things Networks via Ensemble Knowledge Distillation-based Federated Learning [52.6706505729803]
We introduce Federated Learning (FL) to collaboratively train a decentralized shared model for Intrusion Detection Systems (IDS).
The proposed FLEKD framework, built on ensemble knowledge distillation, enables a more flexible aggregation method than conventional model fusion techniques.
Experiment results show that the proposed approach outperforms local training and traditional FL in terms of both speed and performance.
arXiv Detail & Related papers (2024-01-22T14:16:37Z)
- Asynchronous Message-Passing and Zeroth-Order Optimization Based Distributed Learning with a Use-Case in Resource Allocation in Communication Networks [11.182443036683225]
Distributed learning and adaptation have received significant interest and found wide-ranging applications in machine learning and signal processing. This paper specifically focuses on a scenario where agents collaborate towards a common task. Agents, acting as transmitters, collaboratively train their individual policies to maximize a global reward.
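For intuition, here is the standard two-point zeroth-order gradient estimator such methods build on; this is the generic construction, not necessarily the paper's exact estimator.
```python
import numpy as np

# Two-point zeroth-order gradient estimate: useful when an agent only observes
# a scalar reward f(x) (e.g., a global network reward) and not its gradient.
def zo_gradient(f, x, delta=1e-3, rng=np.random.default_rng()):
    u = rng.standard_normal(x.shape)
    u /= np.linalg.norm(u)               # random direction on the unit sphere
    return x.size * (f(x + delta * u) - f(x - delta * u)) / (2 * delta) * u

# Usage: minimize a quadratic from function evaluations alone.
f = lambda x: float(np.sum((x - 3.0) ** 2))
x = np.zeros(4)
for _ in range(3000):
    x -= 0.02 * zo_gradient(f, x)
print(x)                                 # drifts toward [3, 3, 3, 3]
```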
arXiv Detail & Related papers (2023-11-08T11:12:27Z)
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on Federated Learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For the different types of local updates that edge devices can transmit (i.e., model, gradient, model difference), we reveal that transmitting them in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
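A toy numerical model of the AirComp aggregation error (the channel model and noise level are illustrative assumptions):
```python
import numpy as np

# Analog over-the-air aggregation: all devices transmit simultaneously, the
# multiple-access channel superposes (sums) the waveforms, and the server
# receives the sum plus receiver noise rather than the individual updates.
rng = np.random.default_rng(3)
K, d, noise_std = 10, 6, 0.05
updates = [rng.standard_normal(d) for _ in range(K)]

rx = np.sum(updates, axis=0) + noise_std * rng.standard_normal(d)  # superposition + noise
airfedavg_estimate = rx / K                                        # noisy aggregate
exact_average = np.mean(updates, axis=0)
print("aggregation error:", np.linalg.norm(airfedavg_estimate - exact_average))
```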
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- FedDec: Peer-to-peer Aided Federated Learning [15.952956981784219]
Federated learning (FL) has enabled training machine learning models that exploit the data of multiple agents without compromising privacy.
FL is known to be vulnerable to data heterogeneity, partial device participation, and infrequent communication with the server.
We present FedDec, an algorithm that interleaves peer-to-peer communication and parameter averaging between the local gradient updates of FL.
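A hypothetical one-round sketch of the interleaving idea, not the paper's exact recursion:
```python
import numpy as np

# One FedDec-style round: a local gradient step followed by peer-to-peer
# averaging with neighbors, executed between the infrequent server aggregations.
def feddec_round(w, grad_fns, neighbors, lr=0.05):
    """w: (K, d) agent models; grad_fns: per-agent gradient callables;
    neighbors: list of neighbor-index sets (each including the agent itself)."""
    K = len(w)
    w = np.stack([w[k] - lr * grad_fns[k](w[k]) for k in range(K)])              # local SGD step
    return np.stack([w[list(neighbors[k])].mean(axis=0) for k in range(K)])      # p2p averaging
```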
arXiv Detail & Related papers (2023-06-11T16:30:57Z)
- Adversarial Attacks On Multi-Agent Communication [80.4392160849506]
Modern autonomous systems will soon be deployed at scale, opening up the possibility for cooperative multi-agent systems.
Such advantages rely heavily on communication channels which have been shown to be vulnerable to security breaches.
In this paper, we explore such adversarial attacks in a novel multi-agent setting where agents communicate by sharing learned intermediate representations.
arXiv Detail & Related papers (2021-01-17T00:35:26Z)
- Learning to Communicate and Correct Pose Errors [75.03747122616605]
We study the setting proposed in V2VNet, where nearby self-driving vehicles jointly perform object detection and motion forecasting in a cooperative manner.
We propose a novel neural reasoning framework that learns to communicate, to estimate potential errors, and to reach a consensus about those errors.
arXiv Detail & Related papers (2020-11-10T18:19:40Z)
- Distributed Inference with Sparse and Quantized Communication [7.155594644943642]
We consider the problem of distributed inference where agents in a network observe a stream of private signals generated by an unknown state.
We develop a novel event-triggered distributed learning rule that is based on the principle of diffusing low beliefs on each false hypothesis.
We show that by sequentially refining the range of the quantizers, every agent can learn the truth exponentially fast almost surely, while using just $1$ bit to encode its belief on each hypothesis.
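A loose sketch of the low-belief diffusion principle (a local Bayes update followed by a min-rule over neighbors); the event-triggering and adaptive 1-bit quantization central to the paper are omitted here.
```python
import numpy as np

# Each agent Bayes-updates its belief vector over hypotheses on its private
# signal, then pools the minimum of its neighbors' beliefs per hypothesis,
# so skepticism about a false hypothesis spreads through the network.
def min_rule_step(beliefs, lik, neighbors):
    # beliefs: (K, H) rows summing to 1; lik[k, h] = p(current obs of agent k | hypothesis h)
    K, H = beliefs.shape
    new = np.empty_like(beliefs)
    for k in range(K):
        local = beliefs[k] * lik[k]                       # Bayes update on private signal
        local /= local.sum()
        pooled = beliefs[list(neighbors[k])].min(axis=0)  # adopt lowest neighbor belief
        new[k] = np.minimum(local, pooled)
        new[k] /= new[k].sum()                            # renormalize
    return new
```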
arXiv Detail & Related papers (2020-04-02T23:08:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.