Whom to Trust? Elective Learning for Distributed Gaussian Process Regression
- URL: http://arxiv.org/abs/2402.03014v1
- Date: Mon, 5 Feb 2024 13:52:56 GMT
- Title: Whom to Trust? Elective Learning for Distributed Gaussian Process Regression
- Authors: Zewen Yang, Xiaobing Dai, Akshat Dubey, Sandra Hirche, Georges Hattab
- Abstract summary: We develop an elective learning algorithm, namely prior-aware elective distributed GP (Pri-GP).
Pri-GP empowers agents with the capability to selectively request predictions from neighboring agents based on their trustworthiness.
We establish a prediction error bound within the Pri-GP framework, ensuring the reliability of predictions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces an innovative approach to enhance distributed
cooperative learning using Gaussian process (GP) regression in multi-agent
systems (MASs). The key contribution of this work is the development of an
elective learning algorithm, namely prior-aware elective distributed GP
(Pri-GP), which empowers agents with the capability to selectively request
predictions from neighboring agents based on their trustworthiness. The
proposed Pri-GP effectively improves individual prediction accuracy, especially
in cases where the prior knowledge of an agent is incorrect. Moreover, it
eliminates the need for computationally intensive variance calculations for
determining aggregation weights in distributed GP. Furthermore, we establish a
prediction error bound within the Pri-GP framework, ensuring the reliability of
predictions, which is regarded as a crucial property in safety-critical MAS
applications.
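
To make the elective idea concrete, below is a minimal sketch of trust-weighted aggregation in the spirit of Pri-GP. The inverse-error trust score, the election threshold, and all names are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def agent_predict(x, bias):
    # Stand-in for a neighbor's local GP posterior mean; the bias
    # models incorrect prior knowledge on some agents (illustrative).
    return np.sin(x) + bias

# Hypothetical setup: agent 0 queries three neighbors whose local
# models carry different amounts of prior error.
biases = {1: 0.05, 2: 0.1, 3: 1.5}       # neighbor 3 has a bad prior
x_own = rng.uniform(0, 2 * np.pi, 20)     # agent 0's own training data
y_own = np.sin(x_own)

# Trust score: inverse of a neighbor's historical error on the
# querying agent's own data (one plausible proxy for the paper's
# prior-aware error measure, not its exact definition).
trust = {}
for j, b in biases.items():
    err = np.mean(np.abs(agent_predict(x_own, b) - y_own))
    trust[j] = 1.0 / (err + 1e-6)

# Elective step: request predictions only from sufficiently
# trustworthy neighbors, then aggregate with normalized weights.
threshold = 0.5 * max(trust.values())
elected = {j: t for j, t in trust.items() if t >= threshold}
weights = {j: t / sum(elected.values()) for j, t in elected.items()}

x_query = 1.0
y_hat = sum(weights[j] * agent_predict(x_query, biases[j]) for j in elected)
print(f"elected neighbors: {sorted(elected)}, prediction: {y_hat:.3f}")
```

Because the weights derive from accumulated prior prediction errors rather than posterior variances, no variance evaluation is needed at query time, which matches the computational argument in the abstract.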
Related papers
- Online scalable Gaussian processes with conformal prediction for guaranteed coverage
The consistency of the resulting uncertainty values hinges on the premise that the learning function conforms to the properties specified by the GP model.
We propose to wed the GP with the prevailing conformal prediction (CP), a distribution-free post-processing framework that produces prediction sets with provably valid coverage.
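
A minimal sketch of the split-conformal mechanics such a method builds on, shown offline rather than online and with a kernel smoother standing in for the GP posterior mean (all names and constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data; any fitted regressor could stand in for the GP mean here.
x = rng.uniform(-3, 3, 200)
y = np.sin(x) + 0.1 * rng.standard_normal(200)

def gp_mean(xq, x_tr, y_tr, ls=0.5):
    # Kernel-smoother stand-in for a GP posterior mean (illustrative).
    k = np.exp(-0.5 * ((xq[:, None] - x_tr[None, :]) / ls) ** 2)
    return (k @ y_tr) / k.sum(axis=1)

# Split conformal: calibrate a residual quantile on held-out data.
x_tr, y_tr = x[:100], y[:100]
x_cal, y_cal = x[100:], y[100:]
alpha = 0.1
scores = np.abs(y_cal - gp_mean(x_cal, x_tr, y_tr))
q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

x_new = np.array([0.7])
mu = gp_mean(x_new, x_tr, y_tr)
print(f"90% prediction interval: [{mu[0] - q:.3f}, {mu[0] + q:.3f}]")
```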
arXiv Detail & Related papers (2024-10-07T19:22:15Z)
- Aggregation Models with Optimal Weights for Distributed Gaussian Processes
We propose a novel approach for aggregated prediction in distributed GPs.
The proposed method incorporates correlations among experts, leading to better prediction accuracy with manageable computational requirements.
As demonstrated by empirical studies, the proposed approach results in more stable predictions in less time than state-of-the-art consistent aggregation models.
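
For context, the classical inverse-variance (product-of-experts style) aggregation baseline that correlation-aware optimal weights improve upon looks like this; the expert outputs below are made up for illustration:

```python
import numpy as np

# Each distributed GP expert returns a mean and a variance at the
# query point (invented values for illustration).
means = np.array([1.02, 0.97, 1.10])
variances = np.array([0.04, 0.02, 0.30])

# Precision (inverse-variance) weighting: the standard baseline that
# treats experts as independent; optimal-weight schemes additionally
# account for correlations among experts.
precisions = 1.0 / variances
weights = precisions / precisions.sum()
mu_agg = weights @ means
var_agg = 1.0 / precisions.sum()
print(f"aggregated mean {mu_agg:.3f}, variance {var_agg:.4f}")
```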
arXiv Detail & Related papers (2024-08-01T23:32:14Z)
- Guaranteed Coverage Prediction Intervals with Gaussian Process Regression
This paper introduces an extension of GPR based on a machine learning framework called Conformal Prediction (CP).
This extension guarantees the production of prediction intervals (PIs) with the required coverage even when the model is completely misspecified.
arXiv Detail & Related papers (2023-10-24T08:59:40Z)
- Distributionally Robust Machine Learning with Multi-source Data
We introduce a group distributionally robust prediction model that optimizes an adversarial reward based on explained variance over a class of target distributions.
Compared to classical empirical risk minimization, the proposed robust prediction model improves the prediction accuracy for target populations with distribution shifts.
We demonstrate the performance of our proposed group distributionally robust method on simulated and real data with random forests and neural networks as base-learning algorithms.
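
A toy contrast between empirical risk minimization and the worst-group (min-max) objective behind group distributionally robust prediction, using grid search on synthetic data as a crude stand-in for adversarial training (all data and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Two source populations with shifted linear relationships; the
# minority group (n=100) behaves differently (synthetic example).
def make_group(slope, n):
    x = rng.standard_normal(n)
    return x, slope * x + 0.1 * rng.standard_normal(n)

groups = [make_group(1.0, 300), make_group(1.6, 100)]
pooled_x = np.concatenate([g[0] for g in groups])
pooled_y = np.concatenate([g[1] for g in groups])

candidates = np.linspace(0.5, 2.0, 301)

# ERM: minimize the pooled (size-weighted) risk.
b_erm = min(candidates,
            key=lambda b: np.mean((pooled_y - b * pooled_x) ** 2))

# Group DRO: minimize the worst group's risk (min-max).
b_dro = min(candidates,
            key=lambda b: max(np.mean((y - b * x) ** 2) for x, y in groups))

print(f"ERM slope {b_erm:.2f} vs group-DRO slope {b_dro:.2f}")
```

The DRO fit moves toward the minority group, trading average accuracy for robustness under distribution shift.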
arXiv Detail & Related papers (2023-09-05T13:19:40Z)
- Conformal Prediction for Federated Uncertainty Quantification Under Label Shift
Federated Learning (FL) is a machine learning framework where many clients collaboratively train models.
We develop a new conformal prediction method based on quantile regression and take into account privacy constraints.
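
The conformalized quantile regression recipe at the heart of such methods, shown here in its basic centralized form; the federated, label-shift-weighted variant of the paper is not reproduced, and the model choice is an assumption:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(3)
x = rng.uniform(0, 4, 600)[:, None]
y = np.sin(x[:, 0]) + 0.2 * rng.standard_normal(600)

# Split: proper training set and calibration set.
x_tr, y_tr = x[:400], y[:400]
x_cal, y_cal = x[400:], y[400:]
alpha = 0.1

# Fit lower/upper quantile regressors (any quantile model works).
lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(x_tr, y_tr)
hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(x_tr, y_tr)

# CQR conformity scores: how much the raw interval missed by.
s = np.maximum(lo.predict(x_cal) - y_cal, y_cal - hi.predict(x_cal))
q = np.quantile(s, np.ceil((len(s) + 1) * (1 - alpha)) / len(s))

x_new = np.array([[2.0]])
print(f"90% interval: [{lo.predict(x_new)[0] - q:.3f}, "
      f"{hi.predict(x_new)[0] + q:.3f}]")
```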
arXiv Detail & Related papers (2023-06-08T11:54:58Z)
- Probable Domain Generalization via Quantile Risk Minimization
Domain generalization (DG) seeks predictors that perform well on unseen test distributions.
We propose a new probabilistic framework for DG where the goal is to learn predictors that perform well with high probability.
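
A sketch of the quantile-of-risks objective on synthetic domains; the grid search, data, and threshold are illustrative stand-ins for the paper's optimization:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic domains: same task, domain-specific slopes (illustrative).
slopes = rng.normal(1.0, 0.3, 20)
data = [(x := rng.standard_normal(100),
         s * x + 0.1 * rng.standard_normal(100)) for s in slopes]

def risks(b):
    # Per-domain risk of a single shared predictor with slope b.
    return np.array([np.mean((y - b * x) ** 2) for x, y in data])

candidates = np.linspace(0.0, 2.0, 201)
# QRM: minimize the alpha-quantile of the per-domain risk
# distribution, interpolating between average-case (ERM) and
# worst-case (DRO) behavior as alpha -> 1.
alpha = 0.9
b_qrm = min(candidates, key=lambda b: np.quantile(risks(b), alpha))
print(f"slope minimizing the {alpha:.0%} risk quantile: {b_qrm:.2f}")
```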
arXiv Detail & Related papers (2022-07-20T14:41:09Z)
- Incremental Ensemble Gaussian Processes
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging the random feature-based approximation to perform scalable online prediction and model updates, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
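
As a sketch of the random-feature approximation that makes each GP learner's online update cheap (the ensemble weighting and dynamics modeling of IE-GP are omitted; all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Random Fourier features approximating an RBF kernel (Rahimi & Recht);
# this is the scalable stand-in each GP learner uses.
D, ls, noise = 100, 0.5, 0.1
W = rng.standard_normal((D, 1)) / ls
b = rng.uniform(0, 2 * np.pi, D)

def phi(x):
    # x: (n, 1) -> features: (n, D)
    return np.sqrt(2.0 / D) * np.cos(x @ W.T + b)

# Online Bayesian linear regression in feature space: each update is
# O(D^2), independent of the number of points seen so far.
P = np.eye(D)          # posterior precision (prior: identity)
h = np.zeros(D)        # information vector

for _ in range(500):
    xt = rng.uniform(-3, 3, (1, 1))
    yt = np.sin(xt[0, 0]) + noise * rng.standard_normal()
    f = phi(xt)[0]
    P += np.outer(f, f) / noise**2
    h += f * yt / noise**2

m = np.linalg.solve(P, h)   # posterior mean of the feature weights
x_new = np.array([[1.0]])
pred = (phi(x_new) @ m)[0]
print(f"online RF-GP mean at x=1: {pred:.3f} (truth {np.sin(1.0):.3f})")
```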
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
- Gaussian Processes to speed up MCMC with automatic exploratory-exploitation effect
We present a two-stage Metropolis-Hastings algorithm for sampling from probabilistic models.
The key feature of the approach is the ability to learn the target distribution from scratch while sampling.
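
The two-stage (delayed-acceptance) Metropolis-Hastings skeleton such methods build on, with a fixed, slightly biased surrogate standing in for the GP that the paper learns while sampling:

```python
import numpy as np

rng = np.random.default_rng(5)

def log_target(x):
    # Stand-in for an expensive posterior evaluation.
    return -0.5 * x**2

def log_surrogate(x):
    # Cheap approximation (a GP learned online in the paper; here a
    # fixed, slightly biased stand-in).
    return -0.5 * (x - 0.1)**2

# Delayed-acceptance MH: screen proposals with the surrogate, and only
# evaluate the expensive target when the first stage passes. The
# second-stage ratio corrects for the surrogate, so the chain still
# targets the exact distribution.
x, chain, second_stage = 0.0, [], 0
for _ in range(5000):
    y = x + 0.8 * rng.standard_normal()
    if np.log(rng.uniform()) < log_surrogate(y) - log_surrogate(x):
        second_stage += 1
        a2 = (log_target(y) - log_target(x)) - \
             (log_surrogate(y) - log_surrogate(x))
        if np.log(rng.uniform()) < a2:
            x = y
    chain.append(x)

print(f"mean {np.mean(chain):.3f}, sd {np.std(chain):.3f}, "
      f"second-stage (expensive) evaluations: {second_stage}/5000")
```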
arXiv Detail & Related papers (2021-09-28T17:43:25Z)
- Predicting Deep Neural Network Generalization with Perturbation Response Curves
We propose a new framework for evaluating the generalization capabilities of trained networks.
Specifically, we introduce two new measures for accurately predicting generalization gaps.
We attain better predictive scores than the current state-of-the-art measures on a majority of tasks in the Predicting Generalization in Deep Learning (PGDL) NeurIPS 2020 competition.
arXiv Detail & Related papers (2021-06-09T01:37:36Z)
- Probabilistic electric load forecasting through Bayesian Mixture Density Networks
Probabilistic load forecasting (PLF) is a key component in the extended tool-chain required for efficient management of smart energy grids.
We propose a novel PLF approach based on Bayesian Mixture Density Networks.
To achieve reliable and computationally scalable estimators of the posterior distributions, both Mean Field variational inference and deep ensembles are integrated.
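
A sketch of how a mixture density network's output head turns raw network outputs into a probabilistic load forecast; the values below are invented placeholders, not outputs of the paper's model:

```python
import numpy as np

# An MDN's output layer parameterizes a Gaussian mixture per input:
# mixture weights (softmax), component means, and scales (softplus).
# These raw values stand in for a network's final-layer outputs.
raw_logits = np.array([0.2, 1.5, -0.3])
means = np.array([310.0, 405.0, 520.0])   # e.g. load in kW (made up)
raw_scales = np.array([2.0, 2.5, 3.0])

pi = np.exp(raw_logits) / np.exp(raw_logits).sum()   # softmax
sigma = np.log1p(np.exp(raw_scales))                 # softplus > 0

# Point forecast and predictive uncertainty from the mixture.
mean = pi @ means
var = pi @ (sigma**2 + means**2) - mean**2
print(f"predictive mean {mean:.1f} kW, std {np.sqrt(var):.1f} kW")

# Quantiles via sampling, the form PLF consumers typically need.
rng = np.random.default_rng(6)
comp = rng.choice(3, size=10000, p=pi)
samples = rng.normal(means[comp], sigma[comp])
print("10%/90% quantiles:", np.quantile(samples, [0.1, 0.9]).round(1))
```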
arXiv Detail & Related papers (2020-12-23T16:21:34Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)