Asynchronous Diffusion Learning with Agent Subsampling and Local Updates
- URL: http://arxiv.org/abs/2402.05529v1
- Date: Thu, 8 Feb 2024 10:07:30 GMT
- Title: Asynchronous Diffusion Learning with Agent Subsampling and Local Updates
- Authors: Elsa Rizk, Kun Yuan, Ali H. Sayed
- Abstract summary: We investigate a network of agents operating asynchronously, aiming to discover an ideal global model that suits individual local datasets.
We prove that the resulting asynchronous diffusion strategy is stable in the mean-square error sense.
- Score: 47.25856291277345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we examine a network of agents operating asynchronously, aiming
to discover an ideal global model that suits individual local datasets. Our
assumption is that each agent independently chooses when to participate
throughout the algorithm and the specific subset of its neighbourhood with
which it will cooperate at any given moment. When an agent chooses to take
part, it undergoes multiple local updates before conveying its outcomes to the
sub-sampled neighbourhood. Under this setup, we prove that the resulting
asynchronous diffusion strategy is stable in the mean-square error sense and
provide performance guarantees specifically for the federated learning setting.
We illustrate the findings with numerical simulations.
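The asynchronous diffusion strategy described above can be illustrated with a minimal sketch. The quadratic (least-squares) local losses, ring topology, participation probability, subsampling rate, and step size below are illustrative assumptions, not the paper's actual model or analysis conditions: each agent randomly decides whether to participate, runs several local SGD steps (adapt), and then combines with a random subsample of its neighbourhood (combine).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup: K agents with linear-regression local losses
# that share a single global minimizer w_star.
K, d, N = 10, 5, 200
w_star = rng.standard_normal(d)
X = [rng.standard_normal((N, d)) for _ in range(K)]
y = [X[k] @ w_star + 0.1 * rng.standard_normal(N) for k in range(K)]

# Ring topology: each neighbourhood contains the agent and its two ring neighbours.
neighbours = {k: [(k - 1) % K, k, (k + 1) % K] for k in range(K)}

w = [np.zeros(d) for _ in range(K)]
mu, p_participate, local_steps = 0.01, 0.5, 3

for it in range(300):
    # Each agent independently chooses whether to take part this iteration.
    active = [k for k in range(K) if rng.random() < p_participate]
    psi = {k: w[k].copy() for k in range(K)}
    for k in active:
        # Adapt: multiple local SGD steps on the agent's own data.
        for _ in range(local_steps):
            i = rng.integers(N)
            grad = (X[k][i] @ psi[k] - y[k][i]) * X[k][i]
            psi[k] = psi[k] - mu * grad
    for k in active:
        # Combine: average with a random subsample of the neighbourhood
        # (the agent always keeps itself in the subsample).
        sub = [l for l in neighbours[k] if l == k or rng.random() < 0.8]
        w[k] = np.mean([psi[l] for l in sub], axis=0)

# Average distance of the local models to the global minimizer.
err = np.mean([np.linalg.norm(w[k] - w_star) for k in range(K)])
```

With these toy parameters the local models settle in a small neighbourhood of `w_star`, consistent with the mean-square stability the paper proves under its own (more general) conditions.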
Related papers
- AgentOhana: Design Unified Data and Training Pipeline for Effective Agent Learning [100.14685774661959]
AgentOhana aggregates agent trajectories from distinct environments, spanning a wide array of scenarios.
xLAM-v0.1, a large action model tailored for AI agents, demonstrates exceptional performance across various benchmarks.
arXiv Detail & Related papers (2024-02-23T18:56:26Z) - Causal Coordinated Concurrent Reinforcement Learning [8.654978787096807]
We propose a novel algorithmic framework for data sharing and coordinated exploration, aimed at learning more data-efficient and better-performing policies in a concurrent reinforcement learning setting.
Our algorithm leverages a causal inference method, the Additive Noise Model - Mixture Model (ANM-MM), to extract model parameters governing individual differentials via independence enforcement.
We propose a new data sharing scheme based on a similarity measure of the extracted model parameters and demonstrate superior learning speeds on a set of autoregressive, pendulum and cart-pole swing-up tasks.
arXiv Detail & Related papers (2024-01-31T17:20:28Z) - Zeroth-order Asynchronous Learning with Bounded Delays with a Use-case in Resource Allocation in Communication Networks [12.216015676346032]
This paper focuses on a scenario where agents collaborate toward a unified mission while potentially having distinct tasks.
Within this context, the objective for the agents is to optimize their local parameters based on the aggregate of local reward functions.
This paper presents theoretical convergence analyses and establishes a convergence rate for the proposed approach.
arXiv Detail & Related papers (2023-11-08T11:12:27Z) - Time-series Generation by Contrastive Imitation [87.51882102248395]
We study a generative framework that seeks to combine the strengths of both approaches: Motivated by a moment-matching objective to mitigate compounding error, we optimize a local (but forward-looking) transition policy.
At inference, the learned policy serves as the generator for iterative sampling, and the learned energy serves as a trajectory-level measure for evaluating sample quality.
arXiv Detail & Related papers (2023-11-02T16:45:25Z) - Federated Learning for Heterogeneous Bandits with Unobserved Contexts [0.0]
We study the problem of federated multi-armed contextual bandits with unknown contexts.
We propose an elimination-based algorithm and prove the regret bound for linearly parametrized reward functions.
arXiv Detail & Related papers (2023-03-29T22:06:24Z) - Federated Reinforcement Learning: Linear Speedup Under Markovian Sampling [17.943014287720395]
We consider a federated reinforcement learning framework where multiple agents collaboratively learn a global model.
We propose federated versions of on-policy TD, off-policy TD and Q-learning, and analyze their convergence.
We are the first to consider Markovian noise and multiple local updates, and prove a linear convergence speedup with respect to the number of agents.
arXiv Detail & Related papers (2022-06-21T08:39:12Z) - Multi-Agent Imitation Learning with Copulas [102.27052968901894]
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
arXiv Detail & Related papers (2021-07-10T03:49:41Z) - Federated Learning under Importance Sampling [49.17137296715029]
We study the effect of importance sampling and devise schemes for sampling agents and data non-uniformly guided by a performance measure.
We find that in schemes involving sampling without replacement, the performance of the resulting architecture is controlled by two factors related to data variability at each agent.
arXiv Detail & Related papers (2020-12-14T10:08:55Z) - Local Stochastic Approximation: A Unified View of Federated Learning and Distributed Multi-Task Reinforcement Learning Algorithms [1.52292571922932]
We study local stochastic approximation over a network of agents, where their goal is to find the root of an operator composed of the local operators at the agents.
Our focus is to characterize the finite-time performance of this method when the data at each agent are generated from Markov processes, and hence they are dependent.
arXiv Detail & Related papers (2020-06-24T04:05:11Z) - Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
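The federated TD-learning with multiple local updates summarized in the "Federated Reinforcement Learning: Linear Speedup Under Markovian Sampling" entry above can be sketched as follows. The 3-state Markov reward process, tabular features, step size, and averaging schedule here are illustrative assumptions, not the paper's actual setup: each agent runs several TD(0) steps along its own Markovian trajectory, after which the local estimates are averaged.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative setting: N_agents observe independent trajectories of the same
# 3-state Markov reward process and estimate its value function.
S, N_agents, gamma = 3, 4, 0.9
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])
r = np.array([1.0, 0.0, -1.0])

# With tabular features the true value function solves (I - gamma P) v = r.
v_true = np.linalg.solve(np.eye(S) - gamma * P, r)

v = [np.zeros(S) for _ in range(N_agents)]
state = [0] * N_agents
alpha, local_steps = 0.05, 5

for rnd in range(2000):
    for k in range(N_agents):
        # Local TD(0) updates along the agent's own Markovian trajectory
        # (samples are dependent, i.e. Markovian rather than i.i.d. noise).
        for _ in range(local_steps):
            s = state[k]
            s_next = rng.choice(S, p=P[s])
            delta = r[s] + gamma * v[k][s_next] - v[k][s]
            v[k][s] += alpha * delta
            state[k] = s_next
    # Periodic averaging of the local estimates after the local updates.
    avg = np.mean(v, axis=0)
    v = [avg.copy() for _ in range(N_agents)]

err = np.linalg.norm(np.mean(v, axis=0) - v_true)
```

Averaging independent trajectories shrinks the noise in the shared estimate, which is the intuition behind the linear speedup in the number of agents that the paper establishes formally.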
This list is automatically generated from the titles and abstracts of the papers in this site.