Byzantine-resilient federated online learning for Gaussian process regression
- URL: http://arxiv.org/abs/2507.14021v1
- Date: Fri, 18 Jul 2025 15:39:47 GMT
- Title: Byzantine-resilient federated online learning for Gaussian process regression
- Authors: Xu Zhang, Zhenyuan Yuan, Minghui Zhu
- Abstract summary: We develop a Byzantine-resilient federated GPR algorithm that allows a cloud and a group of agents to collaboratively learn a latent function. Agent-based fused GPR refines local predictions by fusing the received global model with that of the agent-based local GPR. We quantify the learning accuracy improvements of the agent-based fused GPR over the agent-based local GPR.
- Score: 10.8159638645264
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study Byzantine-resilient federated online learning for Gaussian process regression (GPR). We develop a Byzantine-resilient federated GPR algorithm that allows a cloud and a group of agents to collaboratively learn a latent function and improve learning performance when some agents exhibit Byzantine failures, i.e., arbitrary and potentially adversarial behavior. Each agent-based local GPR sends potentially compromised local predictions to the cloud, and the cloud-based aggregated GPR computes a global model by a Byzantine-resilient product-of-experts aggregation rule. The cloud then broadcasts the current global model to all the agents. Agent-based fused GPR refines local predictions by fusing the received global model with that of the agent-based local GPR. Moreover, we quantify the learning accuracy improvement of the agent-based fused GPR over the agent-based local GPR. Experiments on a toy example and two medium-scale real-world datasets demonstrate the performance of the proposed algorithm.
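The abstract describes the pipeline (local GPR predictions uploaded to the cloud, a Byzantine-resilient product-of-experts aggregation, and agent-side fusion of the broadcast global model) but does not spell out the aggregation rule. The sketch below illustrates one plausible instantiation: standard product-of-experts fusion of Gaussian predictions, preceded by a median-based trimming step that discards the most deviant local predictions at each test point. The trimming rule, the function names, and the `n_byzantine` parameter are illustrative assumptions, not the algorithm from the paper.

```python
import numpy as np


def poe_aggregate(means, variances):
    """Standard product-of-experts fusion of Gaussian predictions.

    means, variances: arrays of shape (n_agents, n_test).
    Returns the fused mean and variance at each test point.
    """
    precisions = 1.0 / variances                       # per-agent predictive precision
    fused_var = 1.0 / precisions.sum(axis=0)           # PoE: precisions add
    fused_mean = fused_var * (precisions * means).sum(axis=0)
    return fused_mean, fused_var


def byzantine_resilient_poe(means, variances, n_byzantine):
    """Trimmed product-of-experts aggregation (illustrative only).

    At each test point, drop the n_byzantine predictions whose means deviate
    most from the coordinate-wise median, then apply the PoE rule to the rest.
    This trimming rule is an assumption, not the rule from the paper.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    n_agents, n_test = means.shape

    median = np.median(means, axis=0)                  # robust reference prediction
    deviation = np.abs(means - median)                 # distance of each agent from it
    # Indices of the agents kept at each test point (smallest deviations first).
    keep = np.argsort(deviation, axis=0)[: n_agents - n_byzantine]

    cols = np.arange(n_test)
    kept_means = means[keep, cols]
    kept_vars = variances[keep, cols]
    return poe_aggregate(kept_means, kept_vars)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = rng.normal(1.0, 0.1, size=(4, 5))         # 4 honest agents, 5 test points
    byzantine = np.full((1, 5), 50.0)                  # 1 agent sending arbitrary values
    means = np.vstack([honest, byzantine])
    variances = np.full_like(means, 0.04)
    print(byzantine_resilient_poe(means, variances, n_byzantine=1))
```

In the toy usage at the bottom, four honest agents predict values near 1.0 while one Byzantine agent reports 50.0; trimming a single prediction per test point keeps the fused PoE mean close to the honest value.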
Related papers
- Linguistic Fuzzy Information Evolution with Random Leader Election Mechanism for Decision-Making Systems [58.67035332062508]
Linguistic fuzzy information evolution is crucial in understanding information exchange among agents.
Different agent weights may lead to different convergence results in the classic DeGroot model.
This paper proposes three new models of linguistic fuzzy information dynamics.
arXiv Detail & Related papers (2024-10-19T18:15:24Z)
- Beyond Local Views: Global State Inference with Diffusion Models for Cooperative Multi-Agent Reinforcement Learning [36.25611963252774]
State Inference with Diffusion Models (SIDIFF) is inspired by image outpainting.
SIDIFF reconstructs the original global state based solely on local observations.
It can be effortlessly incorporated into current multi-agent reinforcement learning algorithms.
arXiv Detail & Related papers (2024-08-18T14:49:53Z) - Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Federated Learning as Variational Inference: A Scalable Expectation Propagation Approach [66.9033666087719]
This paper extends the inference view and describes a variational inference formulation of federated learning.
We apply FedEP on standard federated learning benchmarks and find that it outperforms strong baselines in terms of both convergence speed and accuracy.
arXiv Detail & Related papers (2023-02-08T17:58:11Z)
- Beyond ADMM: A Unified Client-variance-reduced Adaptive Federated Learning Framework [82.36466358313025]
We propose a primal-dual FL algorithm, termed FedVRA, that allows one to adaptively control the variance-reduction level and bias of the global model.
Experiments based on (semi-supervised) image classification tasks demonstrate the superiority of FedVRA over existing schemes.
arXiv Detail & Related papers (2022-12-03T03:27:51Z)
- Locally Smoothed Gaussian Process Regression [11.45660271015251]
We develop a novel framework to accelerate Gaussian process regression (GPR).
In particular, we consider localization kernels at each data point to down-weight the contributions from other data points that are far away.
We demonstrate the competitive performance of the proposed approach compared to full GPR, other localized models, and deep Gaussian processes.
arXiv Detail & Related papers (2022-10-18T17:04:35Z)
- Federated Stochastic Approximation under Markov Noise and Heterogeneity: Applications in Reinforcement Learning [24.567125948995834]
Federated reinforcement learning is a framework in which $N$ agents collaboratively learn a global model.
We show that by careful collaboration of the agents in solving this joint fixed point problem, we can find the global model $N$ times faster.
arXiv Detail & Related papers (2022-06-21T08:39:12Z)
- Multi-model Ensemble Analysis with Neural Network Gaussian Processes [5.975698284186638]
Multi-model ensemble analysis integrates information from multiple climate models into a unified projection.
We propose a statistical approach, called NN-GPR, using a deep neural network based covariance function.
Experiments show that NN-GPR can be highly skillful at surface temperature and precipitation forecasting.
arXiv Detail & Related papers (2022-02-08T21:28:03Z)
- Lightweight Distributed Gaussian Process Regression for Online Machine Learning [2.0305676256390934]
A group of agents aims to collaboratively learn a common static latent function through streaming data.
We propose a lightweight distributed Gaussian process regression (GPR) algorithm that is cognizant of agents' limited capabilities in communication, computation and memory.
arXiv Detail & Related papers (2021-05-11T01:13:22Z)
- Distributed Q-Learning with State Tracking for Multi-agent Networked Control [61.63442612938345]
This paper studies distributed Q-learning for Linear Quadratic Regulator (LQR) in a multi-agent network.
We devise a state tracking (ST) based Q-learning algorithm to design optimal controllers for agents.
arXiv Detail & Related papers (2020-12-22T22:03:49Z)
- Model-based Reinforcement Learning for Decentralized Multiagent Rendezvous [66.6895109554163]
Underlying the human ability to align goals with other agents is the ability to predict the intentions of others and to actively update one's own plans.
We propose hierarchical predictive planning (HPP), a model-based reinforcement learning method for decentralized multiagent rendezvous.
arXiv Detail & Related papers (2020-03-15T19:49:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.