Decentralized Online Ensembles of Gaussian Processes for Multi-Agent Systems
- URL: http://arxiv.org/abs/2502.05301v1
- Date: Fri, 07 Feb 2025 20:10:09 GMT
- Title: Decentralized Online Ensembles of Gaussian Processes for Multi-Agent Systems
- Authors: Fernando Llorente, Daniel Waxman, Petar M. Djurić
- Abstract summary: We introduce a fully decentralized, exact solution to computing the random feature approximation of Gaussian processes.
The resulting algorithm is tested against Bayesian and frequentist methods on simulated and real-world datasets.
- Score: 45.2233252981348
- License:
- Abstract: Flexible and scalable decentralized learning solutions are fundamentally important in the application of multi-agent systems. While several recent approaches introduce (ensembles of) kernel machines in the distributed setting, Bayesian solutions are much more limited. We introduce a fully decentralized, asymptotically exact solution to computing the random feature approximation of Gaussian processes. We further address the choice of hyperparameters by introducing an ensembling scheme for Bayesian multiple kernel learning based on online Bayesian model averaging. The resulting algorithm is tested against Bayesian and frequentist methods on simulated and real-world datasets.
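The abstract's core idea, computing a random feature approximation of a GP so that agents only need to fuse finite-dimensional sufficient statistics, can be illustrated with a toy sketch. All sizes, the synthetic data, and the RBF random Fourier feature map below are illustrative assumptions, not the paper's actual construction:

```python
import numpy as np

def rff(X, omega, phase):
    # random Fourier features approximating an RBF kernel
    return np.sqrt(2.0 / omega.shape[1]) * np.cos(X @ omega + phase)

rng = np.random.default_rng(0)
D = 50                 # number of random features (assumed)
omega = rng.normal(size=(1, D))
phase = rng.uniform(0, 2 * np.pi, size=D)
noise_var = 0.1

# each agent accumulates local sufficient statistics Phi^T Phi and Phi^T y
agents = []
for _ in range(3):
    X = rng.uniform(-3, 3, size=(40, 1))
    y = np.sin(X[:, 0]) + rng.normal(scale=np.sqrt(noise_var), size=40)
    Phi = rff(X, omega, phase)
    agents.append((Phi.T @ Phi, Phi.T @ y))

# fusion: summing the local statistics reproduces the centralized
# Bayesian linear regression over the random features exactly
A = sum(a for a, _ in agents) / noise_var + np.eye(D)
rhs = sum(v for _, v in agents) / noise_var
posterior_mean = np.linalg.solve(A, rhs)

Xs = np.linspace(-3, 3, 5).reshape(-1, 1)
pred = rff(Xs, omega, phase) @ posterior_mean
```

In a decentralized setting, the summation of the local statistics would be carried out by gossip or consensus averaging rather than by a fusion center; the fused posterior is what makes the approach "asymptotically exact" in the number of features.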
Related papers
- Go With the Flow: Fast Diffusion for Gaussian Mixture Models [13.03355083378673]
Schrödinger Bridges (SB) are diffusion processes that steer, in finite time, a given initial distribution to another final one while minimizing a suitable cost functional.
We propose a parametrization of a set of SB policies for steering a system from one distribution to another.
We showcase the potential of this approach in problems such as image-to-image translation in the latent space of an autoencoder.
arXiv Detail & Related papers (2024-12-12T08:40:22Z) - Random Aggregate Beamforming for Over-the-Air Federated Learning in Large-Scale Networks [66.18765335695414]
We consider a joint device selection and aggregate beamforming design with the objectives of minimizing the aggregate error and maximizing the number of selected devices.
To tackle the problems in a cost-effective manner, we propose a random aggregate beamforming-based scheme.
We additionally analyze the obtained aggregate error and the number of selected devices as the number of devices becomes large.
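One way to read the "random aggregate beamforming-based scheme" is as a sample-and-pick procedure: draw candidate beamforming vectors at random and keep the best under a surrogate criterion. The sketch below is only an interpretation; the channel model, sample counts, and the worst-case inverse-gain surrogate are assumptions, not the paper's actual objective:

```python
import numpy as np

rng = np.random.default_rng(3)
n_antennas, n_devices, n_candidates = 8, 20, 200

# assumed Rayleigh-fading channels from each device to the receiver array
H = (rng.normal(size=(n_devices, n_antennas))
     + 1j * rng.normal(size=(n_devices, n_antennas))) / np.sqrt(2)

best_err, best_m = np.inf, None
for _ in range(n_candidates):
    # random unit-norm aggregate beamforming vector
    m = rng.normal(size=n_antennas) + 1j * rng.normal(size=n_antennas)
    m /= np.linalg.norm(m)
    # surrogate aggregation error: inverse of the worst effective channel gain
    gains = np.abs(H @ m) ** 2
    err = 1.0 / gains.min()
    if err < best_err:
        best_err, best_m = err, m
```

The appeal of such a scheme is that it avoids solving a nonconvex joint selection-and-beamforming problem: each candidate is trivially cheap to generate and evaluate.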
arXiv Detail & Related papers (2024-02-20T23:59:45Z) - Optimizing Solution-Samplers for Combinatorial Problems: The Landscape of Policy-Gradient Methods [52.0617030129699]
We introduce a novel theoretical framework for analyzing the effectiveness of Deep Matching Networks and Reinforcement Learning methods.
Our main contribution holds for a broad class of problems, including Max- and Min-Cut, Max-$k$-CSP, Maximum-Weight-Bipartite-Matching, and the Traveling Salesman Problem.
As a byproduct of our analysis, we introduce a novel regularization process over vanilla gradient descent and provide theoretical and experimental evidence that it helps address vanishing-gradient issues and escape bad stationary points.
arXiv Detail & Related papers (2023-10-08T23:39:38Z) - Sparsity-Aware Distributed Learning for Gaussian Processes with Linear Multiple Kernel [20.98449975854329]
This paper presents a novel GP linear multiple kernel (LMK) and a generic sparsity-aware distributed learning framework to optimize the hyperparameters.
The newly proposed grid spectral mixture product (GSMP) kernel is tailored for multi-dimensional data.
To exploit the inherent sparsity of the solutions, we introduce the Sparse LInear Multiple Kernel Learning (SLIM-KL) framework.
arXiv Detail & Related papers (2023-09-15T07:05:33Z) - An Online Multiple Kernel Parallelizable Learning Scheme [6.436174170552483]
We propose a learning scheme that scalably combines several single kernel-based online methods to reduce the kernel-selection bias.
The proposed learning scheme applies to any task formulated as a regularized empirical risk minimization convex problem.
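The idea of combining several single-kernel online learners to reduce kernel-selection bias, which also underlies the main paper's online Bayesian model averaging, can be sketched with a generic exponentially weighted ensemble. The experts, learning rate, and data stream below are illustrative assumptions, not the scheme from either paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# stand-ins for single-kernel online predictors (assumed, for illustration)
experts = [np.sin, lambda x: 0.5 * x, np.cos]
w = np.ones(len(experts)) / len(experts)
eta = 2.0  # learning rate (assumed)

for t in range(200):
    x = rng.uniform(-2, 2)
    y = np.sin(x) + rng.normal(scale=0.05)  # stream generated by the first expert
    preds = np.array([f(x) for f in experts])
    y_hat = w @ preds                 # ensemble prediction: weighted average
    # multiplicative update: down-weight experts with large squared error
    w *= np.exp(-eta * (preds - y) ** 2)
    w /= w.sum()
```

After a few rounds the weight mass concentrates on the expert (kernel) best matched to the data, which is exactly the bias reduction the scheme is after.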
arXiv Detail & Related papers (2023-08-19T20:15:02Z) - Fully Decentralized, Scalable Gaussian Processes for Multi-Agent Federated Learning [14.353574903736343]
We propose decentralized and scalable algorithms for GP training and prediction in multi-agent systems.
The efficacy of the proposed methods is illustrated with numerical experiments on synthetic and real data.
arXiv Detail & Related papers (2022-03-06T02:54:13Z) - Efficient Model-Based Multi-Agent Mean-Field Reinforcement Learning [89.31889875864599]
We propose an efficient model-based reinforcement learning algorithm for learning in multi-agent systems.
Our main theoretical contributions are the first general regret bounds for model-based reinforcement learning for MFC.
We provide a practical parametrization of the core optimization problem.
arXiv Detail & Related papers (2021-07-08T18:01:02Z) - Coded Stochastic ADMM for Decentralized Consensus Optimization with Edge Computing [113.52575069030192]
Big data, including data from applications with high security requirements, are often collected and stored on multiple heterogeneous devices, such as mobile devices, drones, and vehicles.
Due to the limitations of communication costs and security requirements, it is of paramount importance to extract information in a decentralized manner instead of aggregating data to a fusion center.
We consider the problem of learning model parameters in a multi-agent system with data locally processed via distributed edge nodes.
A class of mini-batch alternating direction method of multipliers (ADMM) algorithms is explored to develop the distributed learning model.
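The consensus structure that ADMM imposes on such distributed learning problems can be shown on a toy least-squares instance: each agent keeps a local copy of the parameters, and the algorithm alternates local solves, averaging, and dual updates. This is a minimal textbook consensus ADMM sketch, not the paper's coded stochastic variant; sizes and data are assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, d, rho = 4, 3, 1.0
x_true = rng.normal(size=d)
# each agent i holds a consistent local system A_i x = b_i
A = [rng.normal(size=(20, d)) for _ in range(N)]
b = [Ai @ x_true for Ai in A]

x = [np.zeros(d) for _ in range(N)]
u = [np.zeros(d) for _ in range(N)]  # scaled dual variables
z = np.zeros(d)                      # consensus variable

for _ in range(100):
    for i in range(N):
        # local x-update: (A_i^T A_i + rho I) x_i = A_i^T b_i + rho (z - u_i)
        lhs = A[i].T @ A[i] + rho * np.eye(d)
        rhs = A[i].T @ b[i] + rho * (z - u[i])
        x[i] = np.linalg.solve(lhs, rhs)
    # consensus step: average of local copies plus duals
    z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
    for i in range(N):
        u[i] += x[i] - z  # dual ascent on the consensus constraint
```

Because the local data are noiseless and consistent, the consensus iterate converges to the shared minimizer; the coded and mini-batch elements of the paper address stragglers and stochastic gradients on top of this skeleton.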
arXiv Detail & Related papers (2020-10-02T10:41:59Z) - Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We employ the structure of the problem and obtain a convex-concave saddle-point reformulation of this problem.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
arXiv Detail & Related papers (2020-06-11T19:40:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.