Local Stochastic Approximation: A Unified View of Federated Learning and
Distributed Multi-Task Reinforcement Learning Algorithms
- URL: http://arxiv.org/abs/2006.13460v1
- Date: Wed, 24 Jun 2020 04:05:11 GMT
- Title: Local Stochastic Approximation: A Unified View of Federated Learning and
Distributed Multi-Task Reinforcement Learning Algorithms
- Authors: Thinh T. Doan
- Abstract summary: We study local stochastic approximation over a network of agents, where their goal is to find the root of an operator composed of the local operators at the agents.
Our focus is to characterize the finite-time performance of this method when the data at each agent are generated from Markov processes, and hence they are dependent.
- Score: 1.52292571922932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by broad applications in reinforcement learning and federated
learning, we study local stochastic approximation over a network of agents,
where their goal is to find the root of an operator composed of the local
operators at the agents. Our focus is to characterize the finite-time
performance of this method when the data at each agent are generated from
Markov processes, and hence they are dependent. In particular, we provide the
convergence rates of local stochastic approximation for both constant and
time-varying step sizes. Our results show that these rates are within a
logarithmic factor of the ones under independent data. We then illustrate the
applications of these results to different interesting problems in multi-task
reinforcement learning and federated learning.
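As a concrete illustration, here is a minimal sketch of the local stochastic approximation scheme the abstract describes, assuming simple scalar local operators (each agent seeks its stationary mean), a two-state Markov chain as the dependent data source, and periodic averaging. The chain, targets, step size, and averaging period `H` are all illustrative choices, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-state Markov chain shared by the agents; state s yields observation
# targets[agent][s]. The data are dependent because states are Markovian.
P = np.array([[0.9, 0.1], [0.2, 0.8]])                  # transition matrix (illustrative)
targets = [np.array([1.0, 3.0]), np.array([2.0, 6.0])]  # per-agent, per-state targets

num_agents, H, T, alpha = 2, 10, 2000, 0.05  # H = local steps between averaging rounds
x = np.zeros(num_agents)                     # local iterates
state = np.zeros(num_agents, dtype=int)

for t in range(T):
    for i in range(num_agents):
        # Markovian sample: the next state depends on the current one.
        state[i] = rng.choice(2, p=P[state[i]])
        # Local operator G_i(x) = x - target; SA step x <- x - alpha * G_i(x, xi).
        x[i] -= alpha * (x[i] - targets[i][state[i]])
    if (t + 1) % H == 0:                     # periodic averaging (communication step)
        x[:] = x.mean()

consensus = x.mean()
```

Despite the Markovian (dependent) samples, the iterates settle near the root of the averaged operator (here, the average of the agents' stationary means, 2.5); the paper's contribution is to quantify the finite-time rate of exactly this kind of behavior.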
Related papers
- Reinforcement Learning Based Multi-modal Feature Fusion Network for
Novel Class Discovery [47.28191501836041]
In this paper, we employ a Reinforcement Learning framework to simulate the cognitive processes of humans.
We also deploy a Member-to-Leader Multi-Agent framework to extract and fuse features from multi-modal information.
We demonstrate the performance of our approach in both the 3D and 2D domains by employing the OS-MN40, OS-MN40-Miss, and Cifar10 datasets.
arXiv Detail & Related papers (2023-08-26T07:55:32Z) - Distributed Estimation of Sparse Inverse Covariance Matrices [0.7832189413179361]
We propose a distributed sparse inverse covariance algorithm to learn the network structure in real-time from data collected by distributed agents.
Our approach builds on an online graphical alternating minimization algorithm, augmented with a consensus term that allows agents to learn the desired structure cooperatively.
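A hedged sketch of the idea, substituting a simpler proximal-gradient (ISTA-style) step on the graphical-lasso objective for the paper's online alternating minimization, combined with a consensus mixing step; the mixing matrix, step size, and penalty are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def soft(M, tau):
    """Entrywise soft-thresholding (the proximal map of the l1 penalty)."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

# Two agents observe samples from the same 3-dim Gaussian (synthetic data).
true_cov = np.array([[1.0, 0.4, 0.0], [0.4, 1.0, 0.0], [0.0, 0.0, 1.0]])
S = [np.cov(rng.multivariate_normal(np.zeros(3), true_cov, 200), rowvar=False)
     for _ in range(2)]

W = np.array([[0.5, 0.5], [0.5, 0.5]])   # doubly-stochastic mixing matrix
Theta = [np.eye(3) for _ in range(2)]    # local precision-matrix estimates
eta, lam = 0.1, 0.05                     # step size, l1 penalty (illustrative)

for _ in range(200):
    new = []
    for i in range(2):
        # Consensus combine, then a proximal-gradient step on the
        # graphical-lasso objective tr(S Theta) - logdet(Theta) + lam*|Theta|_1.
        mix = W[i, 0] * Theta[0] + W[i, 1] * Theta[1]
        grad = S[i] - np.linalg.inv(Theta[i])
        new.append(soft(mix - eta * grad, eta * lam))
    Theta = new

Theta0 = Theta[0]
```

The agents' estimates agree through the consensus term and recover the sign structure of the true precision matrix (a negative off-diagonal entry where the variables are positively correlated).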
arXiv Detail & Related papers (2021-09-24T15:26:41Z) - Decentralized Local Stochastic Extra-Gradient for Variational
Inequalities [125.62877849447729]
We consider distributed variational inequalities (VIs) on domains where the problem data are heterogeneous (non-IID) and distributed across many devices.
We make a very general assumption on the computational network that covers the settings of fully decentralized calculations.
We theoretically analyze its convergence rate in the strongly-monotone, monotone, and non-monotone settings.
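A minimal sketch of a decentralized extra-gradient iteration on strongly-monotone linear operators, a simple stand-in for the paper's general VI setting; the operators, gossip matrix, and step size are illustrative:

```python
import numpy as np

# Each agent i holds a strongly-monotone linear operator F_i(z) = M_i @ z
# (rotation plus damping); the root of the averaged operator is z = 0.
M = [np.array([[1.0, 2.0], [-2.0, 1.0]]),
     np.array([[0.5, -1.0], [1.0, 0.5]])]

W = np.array([[0.6, 0.4], [0.4, 0.6]])              # doubly-stochastic gossip matrix
z = [np.array([3.0, -2.0]), np.array([-1.0, 4.0])]  # local iterates
gamma = 0.2                                         # step size (illustrative)

for _ in range(300):
    mixed = [W[i, 0] * z[0] + W[i, 1] * z[1] for i in range(2)]
    # Extra-gradient: a look-ahead step, then the actual update
    # using the operator evaluated at the look-ahead point.
    half = [mixed[i] - gamma * M[i] @ mixed[i] for i in range(2)]
    z = [mixed[i] - gamma * M[i] @ half[i] for i in range(2)]

err = max(np.linalg.norm(z[0]), np.linalg.norm(z[1]))
```

The look-ahead evaluation is what lets extra-gradient handle rotational (non-symmetric) operator components, for which plain gradient steps can cycle or diverge.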
arXiv Detail & Related papers (2021-06-15T17:45:51Z) - Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
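An alternating scheme in the spirit of this abstract can be sketched as follows: clients fully refit their low-dimensional heads (the many cheap local updates), then the server takes one aggregated gradient step on the shared representation. The synthetic noiseless data, shapes, and step size are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

d, k, n, clients = 6, 2, 50, 4
B_true = rng.standard_normal((d, k))                 # ground-truth shared representation
w_true = [rng.standard_normal(k) for _ in range(clients)]
X = [rng.standard_normal((n, d)) for _ in range(clients)]
y = [X[i] @ B_true @ w_true[i] for i in range(clients)]  # noiseless for clarity

B = rng.standard_normal((d, k))                      # current shared representation
w = [np.zeros(k) for _ in range(clients)]
lr = 0.002

def loss():
    return sum(np.sum((X[i] @ B @ w[i] - y[i]) ** 2) for i in range(clients))

for step in range(2000):
    # Each client fully refits its low-dimensional head (cheap local solve).
    w = [np.linalg.lstsq(X[i] @ B, y[i], rcond=None)[0] for i in range(clients)]
    if step == 0:
        init_loss = loss()
    # One aggregated gradient step on the shared representation
    # (d/dB of the squared residual, up to a constant factor).
    grad = sum(X[i].T @ np.outer(X[i] @ B @ w[i] - y[i], w[i]) for i in range(clients))
    B = B - (lr / clients) * grad

final_loss = loss()
```

Because the heads live in a k-dimensional space, the per-client solves are cheap, while the shared representation improves only through the aggregated step, mirroring the division of labor the abstract describes.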
arXiv Detail & Related papers (2021-02-14T05:36:25Z) - Finite-Time Convergence Rates of Decentralized Stochastic Approximation
with Applications in Multi-Agent and Multi-Task Learning [16.09467599829253]
We study a data-driven approach for finding the root of an operator under noisy measurements.
A network of agents, each with its own operator and data observations, cooperatively finds the fixed point of the aggregate operator over a decentralized communication graph.
Our main contribution is to provide a finite-time analysis of this decentralized approximation method when the data observed at each agent are sampled from a Markov process.
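A minimal sketch of such a decentralized scheme, assuming scalar local operators (each agent seeks its stationary mean), a gossip step over a 3-agent ring, and a two-state Markov chain as the dependent data source; all matrices and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)

# 3 agents on a ring; doubly-stochastic mixing matrix for the communication graph.
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

# Agent i's operator G_i(x) = x - mean_i, with samples driven by a Markov chain.
P = np.array([[0.7, 0.3], [0.3, 0.7]])  # symmetric chain, stationary dist (1/2, 1/2)
vals = np.array([[0.0, 2.0],            # per-agent, per-state observations:
                 [2.0, 4.0],            # agent means are 1, 3, 5,
                 [4.0, 6.0]])           # so the aggregate root is 3.

x = np.zeros(3)
state = np.zeros(3, dtype=int)
alpha = 0.02

for _ in range(4000):
    mixed = W @ x                                    # consensus (gossip) step
    for i in range(3):
        state[i] = rng.choice(2, p=P[state[i]])      # dependent (Markov) sample
        x[i] = mixed[i] - alpha * (mixed[i] - vals[i, state[i]])  # local SA step
```

Unlike the periodic-averaging variant, here every iteration mixes with neighbors, so the agents stay close to one another while jointly drifting toward the aggregate root.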
arXiv Detail & Related papers (2020-10-28T17:01:54Z) - Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected in different domains help improve the learning performance of the other tasks.
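One way to realize such coupling is a proximity penalty that pulls each task's parameters toward their mean; the sketch below uses that device on synthetic linear-regression tasks, treating the mean as fixed within each gradient step (a simplification), with all values illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

tasks, n, d, lam = 3, 40, 5, 5.0
base = rng.standard_normal(d)
# Task parameters are small perturbations of a common vector.
w_true = [base + 0.1 * rng.standard_normal(d) for _ in range(tasks)]
X = [rng.standard_normal((n, d)) for _ in range(tasks)]
y = [X[i] @ w_true[i] + 0.1 * rng.standard_normal(n) for i in range(tasks)]

w = [np.zeros(d) for _ in range(tasks)]
lr = 0.005

for _ in range(2000):
    mean_w = sum(w) / tasks
    # Least-squares gradient plus a coupling term lam * (w_i - mean),
    # which keeps each task close to the others while fitting its own data.
    w = [w[i] - lr * (X[i].T @ (X[i] @ w[i] - y[i]) + lam * (w[i] - mean_w))
         for i in range(tasks)]

errs = [np.linalg.norm(w[i] - w_true[i]) for i in range(tasks)]
```

With few samples per task, the coupling term lets each regression borrow statistical strength from the others while still fitting its own domain.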
arXiv Detail & Related papers (2020-10-24T21:35:57Z) - A Decentralized Approach to Bayesian Learning [26.74338464389837]
Motivated by decentralized approaches to machine learning, we propose a collaborative learning scheme taking the form of decentralized Langevin dynamics.
Our analysis shows that the KL-divergence between the Markov chain and the target posterior distribution decreases exponentially from its initial value.
The performance of individual agents with locally available data is on par with the centralized setting, with a considerable improvement in the rate.
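A hedged sketch of one common decentralized Langevin variant: a gossip step, a scaled local gradient, and injected Gaussian noise. The n-times gradient scaling is one of several conventions in the literature, and all values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Each agent i holds a Gaussian negative log-likelihood f_i(x) = (x - m[i])**2 / 2;
# the target posterior ~ exp(-sum_i f_i) is N(mean(m), 1/3) for these 3 agents.
m = np.array([0.0, 1.0, 2.0])
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])
alpha = 0.01
chains = 2000                              # parallel runs to estimate moments
x = np.zeros((chains, 3))                  # one scalar iterate per agent per chain

for _ in range(1500):
    mixed = x @ W.T                        # gossip step over the network
    grad = 3 * (x - m)                     # n * grad f_i(x_i), n = 3 agents
    x = mixed - alpha * grad + np.sqrt(2 * alpha) * rng.standard_normal(x.shape)

est_mean = x.mean()
```

The injected noise makes this a sampler rather than an optimizer: each agent's iterates fluctuate around the posterior mean instead of converging to a point, which is what distinguishes decentralized Langevin dynamics from decentralized gradient descent.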
arXiv Detail & Related papers (2020-07-14T03:59:17Z) - Multi-Agent Reinforcement Learning in Stochastic Networked Systems [30.78949372661673]
We study multi-agent reinforcement learning (MARL) in a network of agents.
The objective is to find localized policies that maximize the (discounted) global reward.
arXiv Detail & Related papers (2020-06-11T16:08:16Z) - Randomized Entity-wise Factorization for Multi-Agent Reinforcement
Learning [59.62721526353915]
Multi-agent settings in the real world often involve tasks with varying types and quantities of agents and non-agent entities.
Our method aims to leverage these commonalities by asking the question: "What is the expected utility of each agent when only considering a randomly selected sub-group of its observed entities?"
arXiv Detail & Related papers (2020-06-07T18:28:41Z) - Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
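A minimal sketch of the iteration this summary describes, assuming scalar quadratic local losses, a uniformly random subset of participants per round, and simple server averaging; the counts, step size, and losses are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)

num_agents, subset, local_steps, lr = 10, 4, 5, 0.1
minima = rng.standard_normal(num_agents)   # each agent's local minimizer
theta = 5.0                                # global model (scalar for clarity)
trace = []

for _ in range(300):
    # A random subset of available agents participates in this round.
    chosen = rng.choice(num_agents, size=subset, replace=False)
    updates = []
    for i in chosen:
        local = theta
        for _ in range(local_steps):
            local -= lr * (local - minima[i])   # grad of 0.5 * (x - minima[i])**2
        updates.append(local)
    theta = float(np.mean(updates))        # server averages the participants
    trace.append(theta)

long_run_avg = float(np.mean(trace[-200:]))
```

Because only a random subset participates each round, the global model never settles exactly; it hovers around the aggregate minimizer with a fluctuation governed by the data variability across agents and the learning rate, which is the trade-off the summary's three factors capture.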
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.