Fully Decentralized, Scalable Gaussian Processes for Multi-Agent
Federated Learning
- URL: http://arxiv.org/abs/2203.02865v1
- Date: Sun, 6 Mar 2022 02:54:13 GMT
- Title: Fully Decentralized, Scalable Gaussian Processes for Multi-Agent
Federated Learning
- Authors: George P. Kontoudis, Daniel J. Stilwell
- Abstract summary: We propose decentralized and scalable algorithms for GP training and prediction in multi-agent systems.
The efficacy of the proposed methods is illustrated with numerical experiments on synthetic and real data.
- Score: 14.353574903736343
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose decentralized and scalable algorithms for Gaussian
process (GP) training and prediction in multi-agent systems. To decentralize
the implementation of GP training optimization algorithms, we employ the
alternating direction method of multipliers (ADMM). A closed-form solution of
the decentralized proximal ADMM is provided for the case of GP hyper-parameter
training with maximum likelihood estimation. Multiple aggregation techniques
for GP prediction are decentralized with the use of iterative and consensus
methods. In addition, we propose a covariance-based nearest neighbor selection
strategy that enables a subset of agents to perform predictions. The efficacy
of the proposed methods is illustrated with numerical experiments on synthetic
and real data.
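To make the pieces of the abstract concrete, here is a minimal sketch, in Python with NumPy, of how GP hyper-parameter training could be decentralized with consensus ADMM. The RBF kernel, the gradient-based approximation of each agent's proximal step, and all function names are illustrative assumptions on our part; the paper itself derives a closed-form update for the decentralized proximal ADMM.

```python
# Minimal sketch (assumptions, not the paper's implementation): consensus ADMM
# for decentralized GP hyper-parameter training. Each agent i holds (X_i, y_i)
# and a local copy theta_i of the shared log-hyper-parameters
# (log lengthscale, log signal std, log noise std); the proximal step is
# approximated here by a few gradient-descent iterations, whereas the paper
# derives a closed-form proximal ADMM update.
import numpy as np

def rbf_kernel(X1, X2, log_ell, log_sf):
    """Squared-exponential kernel parameterized by log lengthscale / log signal std."""
    ell, sf2 = np.exp(log_ell), np.exp(2.0 * log_sf)
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2.0 * X1 @ X2.T
    return sf2 * np.exp(-0.5 * d2 / ell**2)

def local_nlml(theta, X, y):
    """Negative log marginal likelihood of one agent's local data."""
    log_ell, log_sf, log_sn = theta
    K = rbf_kernel(X, X, log_ell, log_sf) + np.exp(2.0 * log_sn) * np.eye(len(y))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * len(y) * np.log(2.0 * np.pi)

def numerical_grad(f, theta, eps=1e-5):
    """Central-difference gradient, to keep the sketch dependency-free."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta); e[i] = eps
        g[i] = (f(theta + e) - f(theta - e)) / (2.0 * eps)
    return g

def consensus_admm_training(datasets, rho=1.0, outer=40, inner=20, lr=1e-2):
    """All agents agree on the consensus hyper-parameters z via ADMM rounds."""
    M = len(datasets)
    theta = [np.zeros(3) for _ in range(M)]   # local copies
    u = [np.zeros(3) for _ in range(M)]       # scaled dual variables
    z = np.zeros(3)                           # consensus variable
    for _ in range(outer):
        # Local step: approximately minimize NLML_i + (rho/2)||theta_i - z + u_i||^2.
        for i, (X, y) in enumerate(datasets):
            for _ in range(inner):
                g = numerical_grad(lambda t: local_nlml(t, X, y), theta[i])
                theta[i] = theta[i] - lr * (g + rho * (theta[i] - z + u[i]))
        # Consensus step: average of local copies plus duals (one average-consensus
        # round over the network in a fully decentralized implementation).
        z = np.mean([theta[i] + u[i] for i in range(M)], axis=0)
        # Dual step.
        for i in range(M):
            u[i] = u[i] + theta[i] - z
    return z

# Toy usage: three agents observing the same latent function at different inputs.
rng = np.random.default_rng(0)
datasets = []
for _ in range(3):
    X = rng.uniform(-3.0, 3.0, size=(25, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(25)
    datasets.append((X, y))
z_hat = consensus_admm_training(datasets)
print("consensus log-hyper-parameters:", z_hat)
```

Under the same assumptions, the covariance-based nearest-neighbor selection of agents for prediction can be sketched as a simple scoring rule (our illustration, reusing rbf_kernel, datasets, and z_hat from the block above):

```python
# Minimal sketch of covariance-based nearest-neighbor agent selection (our
# illustrative rule, not necessarily the paper's exact criterion): score each
# agent by the largest kernel covariance between the query point and its local
# training inputs, and keep the k most relevant agents for prediction.
def select_agents_by_covariance(x_query, datasets, theta, k=2):
    log_ell, log_sf, _ = theta
    scores = [rbf_kernel(x_query[None, :], X, log_ell, log_sf).max()
              for X, _ in datasets]
    return np.argsort(scores)[::-1][:k]   # indices of the k highest-covariance agents

print("agents selected for prediction:", select_agents_by_covariance(np.array([0.5]), datasets, z_hat))
```

In a fully decentralized deployment, the averaging in the consensus step would itself be carried out with an iterative average-consensus protocol over the communication graph rather than by a central mean, in line with the iterative and consensus methods mentioned in the abstract.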
Related papers
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
- On Large-Scale Multiple Testing Over Networks: An Asymptotic Approach [2.3072402651280517]
This work concerns developing communication- and computation-efficient methods for large-scale multiple testing over networks.
We take an asymptotic approach and propose two methods, proportion-matching and greedy aggregation, tailored to distributed settings.
For both methods, we provide the rate of convergence for both the FDR and power.
arXiv Detail & Related papers (2022-11-29T10:10:39Z)
- Towards Global Optimality in Cooperative MARL with the Transformation And Distillation Framework [26.612749327414335]
Decentralized execution is one core demand in cooperative multi-agent reinforcement learning (MARL).
In this paper, we theoretically analyze two common classes of algorithms with decentralized policies -- multi-agent policy gradient methods and value-decomposition methods.
We show that TAD-PPO can theoretically achieve optimal policy learning in finite multi-agent MDPs and empirically outperforms existing methods on a large set of cooperative multi-agent tasks.
arXiv Detail & Related papers (2022-07-12T06:59:13Z)
- Gaussian Processes to speed up MCMC with automatic exploratory-exploitation effect [1.0742675209112622]
We present a two-stage Metropolis-Hastings algorithm for sampling from probabilistic models.
The key feature of the approach is the ability to learn the target distribution from scratch while sampling (a generic sketch of such a two-stage scheme is given after this list).
arXiv Detail & Related papers (2021-09-28T17:43:25Z)
- Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive stochastic incremental ADMM (asI-ADMM) algorithm and apply it to decentralized RL in edge-computing-empowered industrial IoT (IIoT) networks.
Experimental results show that the proposed algorithms outperform the state of the art in terms of communication costs and scalability, and adapt well to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z)
- Permutation Invariant Policy Optimization for Mean-Field Multi-Agent Reinforcement Learning: A Principled Approach [128.62787284435007]
We propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation-invariant actor-critic neural architecture.
We prove that MF-PPO attains the globally optimal policy at a sublinear rate of convergence.
In particular, we show that the inductive bias introduced by the permutation-invariant neural architecture enables MF-PPO to outperform existing competitors.
arXiv Detail & Related papers (2021-05-18T04:35:41Z)
- Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion Multi-Agent MAML (Dif-MAML).
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z)
- Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
arXiv Detail & Related papers (2020-10-02T10:41:59Z)
- MAGMA: Inference and Prediction with Multi-Task Gaussian Processes [4.368185344922342]
A novel multi-task Gaussian process (GP) framework is proposed, by using a common mean process for sharing information across tasks.
Our overall algorithm is called MAGMA (standing for Multi tAsk Gaussian processes with common MeAn).
arXiv Detail & Related papers (2020-07-21T11:43:54Z)
- Decentralized MCTS via Learned Teammate Models [89.24858306636816]
We present a trainable online decentralized planning algorithm based on decentralized Monte Carlo Tree Search.
We show that deep learning and convolutional neural networks can be employed to produce accurate policy approximators.
arXiv Detail & Related papers (2020-03-19T13:10:20Z)
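For the Gaussian-process-accelerated MCMC entry above, the following is a generic sketch of a two-stage (delayed-acceptance style) Metropolis-Hastings sampler in which a GP surrogate of the log-target screens proposals before the exact log-target is evaluated. The surrogate, the screening rule, and the refit schedule are illustrative assumptions, not the cited paper's exact algorithm.

```python
# Minimal sketch (illustrative assumptions, not the cited paper's algorithm):
# two-stage / delayed-acceptance Metropolis-Hastings in which a GP surrogate of
# the log-target screens proposals cheaply before the exact, expensive
# log-target is evaluated. The surrogate is refit on every exact evaluation,
# so it is learned from scratch while sampling.
import numpy as np

def rbf(A, B, ell=1.0, sf2=1.0):
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return sf2 * np.exp(-0.5 * d2 / ell**2)

def gp_mean(x, X, y, noise=1e-2):
    """Posterior mean of a zero-mean GP surrogate at x given observations (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(y))
    k = rbf(x[None, :], X)
    return (k @ np.linalg.solve(K, y)).item()

def two_stage_mh(log_target, x0, n_steps=500, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    lp = log_target(x)                      # exact log-target at the current state
    X_obs, y_obs = [x.copy()], [lp]         # training data for the surrogate
    samples = []
    for _ in range(n_steps):
        x_prop = x + step * rng.standard_normal(x.shape)
        X_arr, y_arr = np.array(X_obs), np.array(y_obs)
        s_cur = gp_mean(x, X_arr, y_arr)         # cheap surrogate at the current state
        s_prop = gp_mean(x_prop, X_arr, y_arr)   # cheap surrogate at the proposal
        # Stage 1: screen the proposal with the surrogate only.
        if np.log(rng.uniform()) < s_prop - s_cur:
            lp_prop = log_target(x_prop)         # Stage 2: pay for the exact evaluation
            # Delayed-acceptance correction so the exact target is respected.
            if np.log(rng.uniform()) < (lp_prop - lp) + (s_cur - s_prop):
                x, lp = x_prop, lp_prop
            X_obs.append(x_prop.copy())          # grow the surrogate's training set
            y_obs.append(lp_prop)
        samples.append(x.copy())
    return np.array(samples)

# Toy usage: sample a 1-D standard normal target.
chain = two_stage_mh(lambda v: -0.5 * float(np.sum(v**2)), x0=[2.0])
print("mean of the chain after burn-in:", chain[100:].mean())
```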
This list is automatically generated from the titles and abstracts of the papers on this site.