Lightweight Distributed Gaussian Process Regression for Online Machine
Learning
- URL: http://arxiv.org/abs/2105.04738v5
- Date: Mon, 10 Jul 2023 03:26:38 GMT
- Title: Lightweight Distributed Gaussian Process Regression for Online Machine
Learning
- Authors: Zhenyuan Yuan, Minghui Zhu
- Abstract summary: A group of agents aims to collaboratively learn a common static latent function from streaming data.
We propose a lightweight distributed Gaussian process regression (GPR) algorithm that is cognizant of agents' limited capabilities in communication, computation and memory.
- Score: 2.0305676256390934
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we study the problem where a group of agents aim to
collaboratively learn a common static latent function through streaming data.
We propose a lightweight distributed Gaussian process regression (GPR)
algorithm that is cognizant of agents' limited capabilities in communication,
computation and memory. Each agent independently runs agent-based GPR using
local streaming data to predict test points of interest; then the agents
collaboratively execute distributed GPR to obtain global predictions over a
common sparse set of test points; finally, each agent fuses results from
distributed GPR with agent-based GPR to refine its predictions. By quantifying
the transient and steady-state performances in predictive variance and error,
we show that limited inter-agent communication improves learning performances
in the sense of Pareto. Monte Carlo simulation is conducted to evaluate the
developed algorithm.
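The abstract's three-stage pipeline (local agent-based GPR on streaming data, distributed GPR over a shared sparse set of test points, then fusion of the two predictions) can be sketched as follows. This is an illustrative sketch only: the RBF kernel, the fixed hyperparameters, and the precision-weighted fusion rule are assumptions standing in for the paper's actual agent-based and distributed GPR steps.

```python
import numpy as np

def rbf(a, b, ls=1.0):
    # Squared-exponential kernel matrix between 1-D point sets a and b.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def agent_gpr(x, y, x_test, noise=0.1):
    # Standard GP regression on one agent's local data (agent-based GPR stand-in).
    K = rbf(x, x) + noise ** 2 * np.eye(len(x))
    Ks = rbf(x_test, x)
    Kss = rbf(x_test, x_test)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mean = Ks @ alpha
    v = np.linalg.solve(L, Ks.T)
    var = np.diag(Kss) - np.sum(v ** 2, axis=0)
    return mean, var

def fuse(means, variances):
    # Precision-weighted fusion of the agents' predictions at the common
    # test points -- a simple stand-in for the paper's distributed GPR step.
    prec = 1.0 / np.array(variances)
    w = prec / prec.sum(axis=0)
    fused_mean = (w * np.array(means)).sum(axis=0)
    fused_var = 1.0 / prec.sum(axis=0)
    return fused_mean, fused_var
```

Precision weighting is a common heuristic for fusing Gaussian predictions and always yields a variance no larger than any individual agent's, which mirrors the abstract's claim that limited inter-agent communication improves performance; the paper's distributed GPR instead reaches agreement through network communication.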
Related papers
- Distributed Event-Based Learning via ADMM [11.461617927469316]
We consider a distributed learning problem, where agents minimize a global objective function by exchanging information over a network.
Our approach has two distinct features: (i) It substantially reduces communication by triggering communication only when necessary, and (ii) it is agnostic to the data-distribution among the different agents.
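The event-triggered idea in feature (i) can be illustrated with a minimal sketch: an agent broadcasts its local iterate only when it has drifted far enough from the last value its neighbors saw. The norm-threshold trigger and the gradient-descent update below are assumptions for illustration, not the paper's ADMM rule.

```python
import numpy as np

class EventTriggeredAgent:
    """Minimal event-triggered communication sketch: communicate only
    when the local iterate has changed significantly since the last send."""

    def __init__(self, x0, threshold=0.1):
        self.x = np.asarray(x0, dtype=float)
        self.last_sent = self.x.copy()
        self.threshold = threshold
        self.messages_sent = 0

    def step(self, gradient, lr=0.1):
        # Local update (plain gradient descent stands in for the ADMM step).
        self.x -= lr * gradient(self.x)

    def maybe_broadcast(self):
        # Trigger a broadcast only when the drift exceeds the threshold.
        if np.linalg.norm(self.x - self.last_sent) > self.threshold:
            self.last_sent = self.x.copy()
            self.messages_sent += 1
            return self.x.copy()
        return None
```

Because the trigger fires only on significant change, the number of messages grows with the distance traveled by the iterate rather than with the number of iterations, which is the source of the communication savings.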
arXiv Detail & Related papers (2024-05-17T08:30:28Z)
- Distribution Shift Inversion for Out-of-Distribution Prediction [57.22301285120695]
We propose a portable Distribution Shift Inversion algorithm for Out-of-Distribution (OoD) prediction.
We show that our method provides a general performance gain when plugged into a wide range of commonly used OoD algorithms.
arXiv Detail & Related papers (2023-06-14T08:00:49Z)
- Distributed Learning over Networks with Graph-Attention-Based Personalization [49.90052709285814]
We propose a graph-based personalized algorithm (GATTA) for distributed deep learning.
In particular, the personalized model in each agent is composed of a global part and a node-specific part.
By treating each agent as a node in a graph and the node-specific parameters as its features, the benefits of the graph attention mechanism can be inherited.
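The global-plus-personalized split can be sketched as an attention-weighted mixing of neighbors' global parameters, scored by node-feature similarity. This is an illustrative stand-in for GATTA: the dot-product attention score and the fixed 50/50 blend are assumptions, since the summary does not give the exact rule.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over a 1-D score vector.
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_aggregate(own_global, neighbor_globals, own_feat, neighbor_feats):
    """Mix neighbors' global parameter vectors with attention weights
    derived from node-feature similarity; the node-specific (personalized)
    part of each agent's model stays local and untouched."""
    scores = np.array([own_feat @ f for f in neighbor_feats])
    w = softmax(scores)
    mixed = sum(wk * g for wk, g in zip(w, neighbor_globals))
    # Blend aggregated neighbor information into the agent's global part.
    return 0.5 * own_global + 0.5 * mixed
```

Neighbors whose features resemble the agent's own receive larger weights, so each agent personalizes which parts of the network it learns from.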
arXiv Detail & Related papers (2023-05-22T13:48:30Z)
- Compressed Regression over Adaptive Networks [58.79251288443156]
We derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.
We devise an optimized allocation strategy where the parameters necessary for the optimization can be learned online by the agents.
arXiv Detail & Related papers (2023-04-07T13:41:08Z)
- Federated Learning for Heterogeneous Bandits with Unobserved Contexts [0.0]
We study the problem of federated multi-armed contextual bandits with unknown contexts.
We propose an elimination-based algorithm and prove a regret bound for linearly parametrized reward functions.
arXiv Detail & Related papers (2023-03-29T22:06:24Z)
- IPCC-TP: Utilizing Incremental Pearson Correlation Coefficient for Joint Multi-Agent Trajectory Prediction [73.25645602768158]
IPCC-TP is a novel relevance-aware module based on Incremental Pearson Correlation Coefficient to improve multi-agent interaction modeling.
Our module can be conveniently embedded into existing multi-agent prediction methods to extend original motion distribution decoders.
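The "incremental" part of an incremental Pearson correlation coefficient can be illustrated with a standard streaming (Welford-style) update of the co-moments. This shows the general technique only; IPCC-TP's actual module operates on trajectory distributions inside a neural motion decoder, not on scalar streams.

```python
import math

class IncrementalPearson:
    """Streaming Pearson correlation between two scalar series using
    Welford-style running means and (co)moment accumulators."""

    def __init__(self):
        self.n = 0
        self.mean_x = self.mean_y = 0.0
        self.m2x = self.m2y = self.cxy = 0.0

    def update(self, x, y):
        # One-pass update: deviations from the old means, then refresh
        # the means, then accumulate moments against the new means.
        self.n += 1
        dx = x - self.mean_x
        dy = y - self.mean_y
        self.mean_x += dx / self.n
        self.mean_y += dy / self.n
        self.m2x += dx * (x - self.mean_x)
        self.m2y += dy * (y - self.mean_y)
        self.cxy += dx * (y - self.mean_y)

    def corr(self):
        if self.m2x == 0 or self.m2y == 0:
            return 0.0
        return self.cxy / math.sqrt(self.m2x * self.m2y)
```

The one-pass form avoids storing the history and is numerically more stable than accumulating raw sums of squares, which matters when the correlation must be refreshed every time step.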
arXiv Detail & Related papers (2023-03-01T15:16:56Z)
- Asynchronous Bayesian Learning over a Network [18.448653247778143]
We present a practical asynchronous data fusion model for networked agents to perform distributed Bayesian learning without sharing raw data.
Our algorithm uses a gossip-based approach where pairs of randomly selected agents employ unadjusted Langevin dynamics for parameter sampling.
We introduce an event-triggered mechanism to further reduce communication between gossiping agents.
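The gossip-plus-Langevin scheme can be sketched on a toy target: each round a random pair of agents averages parameters, then both take an unadjusted Langevin step. Everything below is a simplified stand-in; the standard normal target with U(theta) = theta^2/2, the step size, and the pairwise schedule are assumptions, and the paper's event-triggered refinement is omitted.

```python
import numpy as np

def gossip_langevin(n_agents=4, steps=2000, eta=0.05, seed=0):
    """Toy gossip-based unadjusted Langevin dynamics for a standard
    normal target (grad U(theta) = theta). Each round, one random pair
    of agents averages parameters, then each takes a Langevin step:
    theta <- theta - eta * grad + sqrt(2 * eta) * noise."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n_agents)
    samples = []
    for _ in range(steps):
        i, j = rng.choice(n_agents, size=2, replace=False)
        avg = 0.5 * (theta[i] + theta[j])   # gossip averaging, no raw data shared
        theta[i] = theta[j] = avg
        for k in (i, j):
            grad = theta[k]  # gradient of U for the N(0, 1) target
            theta[k] += -eta * grad + np.sqrt(2 * eta) * rng.standard_normal()
        samples.append(theta.copy())
    return np.array(samples)
```

Only the two gossiping agents communicate in a round, so the scheme never requires global synchronization, which is what makes it asynchronous.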
arXiv Detail & Related papers (2022-11-16T01:21:36Z)
- Distributed Cooperative Multi-Agent Reinforcement Learning with Directed Coordination Graph [18.04270684579841]
Existing distributed cooperative multi-agent reinforcement learning (MARL) frameworks assume undirected coordination graphs and communication graphs.
We propose a distributed RL algorithm where the local policy evaluations are based on local value functions.
arXiv Detail & Related papers (2022-01-10T04:14:46Z)
- Learning-based Measurement Scheduling for Loosely-Coupled Cooperative Localization [3.616948583169635]
In cooperative localization (CL), communicating mobile agents use inter-agent relative measurements to improve their dead-reckoning-based global localization.
Measurement scheduling enables an agent to decide which subset of available inter-agent relative measurements it should process when its computational resources are limited.
This paper proposes a measurement scheduling method for CL that follows the sequential computation approach but reduces the communication and computation cost by using a neural network-based surrogate model as a proxy for the SG's merit function.
arXiv Detail & Related papers (2021-12-06T08:06:29Z)
- Implicit Distributional Reinforcement Learning [61.166030238490634]
IDAC is an implicit distributional actor-critic built on two deep generator networks (DGNs) and a semi-implicit actor (SIA) powered by a flexible policy distribution.
We observe that IDAC outperforms state-of-the-art algorithms on representative OpenAI Gym environments.
arXiv Detail & Related papers (2020-07-13T02:52:18Z)
- Model-based Reinforcement Learning for Decentralized Multiagent Rendezvous [66.6895109554163]
Underlying the human ability to align goals with other agents is their ability to predict the intentions of others and actively update their own plans.
We propose hierarchical predictive planning (HPP), a model-based reinforcement learning method for decentralized multiagent rendezvous.
arXiv Detail & Related papers (2020-03-15T19:49:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.