Incremental Ensemble Gaussian Processes
- URL: http://arxiv.org/abs/2110.06777v1
- Date: Wed, 13 Oct 2021 15:11:25 GMT
- Title: Incremental Ensemble Gaussian Processes
- Authors: Qin Lu, Georgios V. Karanikolas, and Georgios B. Giannakis
- Abstract summary: We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging the random feature-based approximation to perform online prediction and model update with scalability, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
- Score: 53.3291389385672
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Belonging to the family of Bayesian nonparametrics, Gaussian process (GP)
based approaches have well-documented merits not only in learning over a rich
class of nonlinear functions, but also in quantifying the associated
uncertainty. However, most GP methods rely on a single preselected kernel
function, which may fall short in characterizing data samples that arrive
sequentially in time-critical applications. To enable online kernel
adaptation, the present work advocates an incremental ensemble (IE-) GP
framework, where an EGP meta-learner employs an ensemble of GP learners,
each having a unique kernel belonging to a prescribed kernel dictionary. With
each GP expert leveraging the random feature-based approximation to perform
online prediction and model update with scalability, the EGP meta-learner
capitalizes on data-adaptive weights to synthesize the per-expert predictions.
Further, the novel IE-GP is generalized to accommodate time-varying functions
by modeling structured dynamics at the EGP meta-learner and within each GP
learner. To benchmark the performance of IE-GP and its dynamic variant in the
adversarial setting where the modeling assumptions are violated, rigorous
performance analysis has been conducted via the notion of regret, as is the
norm in online convex optimization. Last but not least, online unsupervised
learning for dimensionality reduction is explored under the novel IE-GP
framework. Synthetic and real data tests demonstrate the effectiveness of the
proposed schemes.
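To make the mechanics above concrete, here is a minimal Python/NumPy sketch of one plausible realization: each expert approximates an RBF kernel with random Fourier features and performs a constant-time recursive Bayesian update, while the meta-learner reweights experts by their online predictive likelihoods. All class and parameter names (RFGPExpert, IEGP, n_features, etc.) are illustrative rather than taken from the authors' code, and the weight update follows standard Bayesian model averaging, not necessarily the paper's exact equations.

```python
import numpy as np

class RFGPExpert:
    """One GP expert: random Fourier feature (RFF) approximation of an RBF
    kernel with a per-sample recursive Bayesian (Kalman-style) update.
    Illustrative sketch, not the authors' implementation."""

    def __init__(self, dim, lengthscale, n_features=50, noise_var=0.1,
                 prior_var=1.0, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        # Spectral frequencies of the RBF kernel: omega ~ N(0, I / lengthscale^2).
        self.omega = rng.normal(0.0, 1.0 / lengthscale, size=(n_features, dim))
        self.noise_var = noise_var
        d = 2 * n_features                  # cos and sin features
        self.mu = np.zeros(d)               # posterior mean of RFF weights
        self.Sigma = prior_var * np.eye(d)  # posterior covariance

    def features(self, x):
        # phi(x)^T phi(x') approximates k(x, x') for the RBF kernel.
        z = self.omega @ x
        return np.concatenate([np.cos(z), np.sin(z)]) / np.sqrt(len(z))

    def predict(self, x):
        phi = self.features(x)
        return phi @ self.mu, phi @ self.Sigma @ phi + self.noise_var

    def update(self, x, y):
        # Rank-one update: O(d^2) per sample, independent of the stream length.
        phi = self.features(x)
        s = phi @ self.Sigma @ phi + self.noise_var
        k = self.Sigma @ phi / s
        self.mu += k * (y - phi @ self.mu)
        self.Sigma -= np.outer(k, phi @ self.Sigma)


class IEGP:
    """EGP meta-learner: combines experts with data-adaptive weights
    proportional to each expert's accumulated predictive likelihood."""

    def __init__(self, experts):
        self.experts = experts
        self.log_w = np.full(len(experts), -np.log(len(experts)))

    def predict(self, x):
        stats = [e.predict(x) for e in self.experts]
        w = np.exp(self.log_w - np.max(self.log_w))
        w /= w.sum()
        mean = sum(wi * m for wi, (m, _) in zip(w, stats))
        # Mixture variance: weighted variances plus spread of expert means.
        var = sum(wi * (v + (m - mean) ** 2) for wi, (m, v) in zip(w, stats))
        return mean, var

    def update(self, x, y):
        for i, e in enumerate(self.experts):
            m, v = e.predict(x)
            # Gaussian predictive log-likelihood of the newly arrived sample.
            self.log_w[i] += -0.5 * (np.log(2 * np.pi * v) + (y - m) ** 2 / v)
            e.update(x, y)
        self.log_w -= np.max(self.log_w)  # keep log-weights numerically stable

# Usage sketch: a grid of RBF lengthscales stands in for the kernel dictionary.
# experts = [RFGPExpert(dim=1, lengthscale=l) for l in (0.1, 1.0, 10.0)]
# model = IEGP(experts)
# for x, y in stream:   # predict-then-update keeps the evaluation purely online
#     mean, var = model.predict(x); model.update(x, y)
```

Calling predict before update at every step means the ensemble weights reflect strictly out-of-sample performance, which is what the paper's regret analysis benchmarks against.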
Related papers
- Domain Invariant Learning for Gaussian Processes and Bayesian Exploration [39.83530605880014]
We propose a domain invariant learning algorithm for Gaussian processes (DIL-GP) with a min-max optimization on the likelihood.
Numerical experiments demonstrate the superiority of DIL-GP for predictions on several synthetic and real-world datasets.
arXiv Detail & Related papers (2023-12-18T16:13:34Z)
- A Kronecker product accelerated efficient sparse Gaussian Process (E-SGP) for flow emulation [2.563626165548781]
This paper introduces an efficient sparse Gaussian process (E-SGP) for the surrogate modelling of fluid mechanics.
It is a further development of the approximated sparse GP algorithm, combining the concepts of efficient GP (E-GP) and variational energy free sparse Gaussian process (VEF-SGP).
arXiv Detail & Related papers (2023-12-13T11:29:40Z)
- Linear Time GPs for Inferring Latent Trajectories from Neural Spike Trains [7.936841911281107]
We propose cvHM, a general inference framework for latent GP models leveraging Hida-Matérn kernels and conjugate variational inference (CVI).
We are able to perform variational inference of latent neural trajectories with linear time complexity for arbitrary likelihoods.
arXiv Detail & Related papers (2023-06-01T16:31:36Z)
- Interactive Segmentation as Gaussian Process Classification [58.44673380545409]
Click-based interactive segmentation (IS) aims to extract the target objects under user interaction.
Most of the current deep learning (DL)-based methods mainly follow the general pipelines of semantic segmentation.
We propose to formulate the IS task as a Gaussian process (GP)-based pixel-wise binary classification model on each image.
arXiv Detail & Related papers (2023-02-28T14:01:01Z)
- Weighted Ensembles for Active Learning with Adaptivity [60.84896785303314]
This paper presents an ensemble of GP models with weights adapted to the labeled data collected incrementally.
Building on this novel EGP model, a suite of acquisition functions emerges based on the uncertainty and disagreement rules.
An adaptively weighted ensemble of EGP-based acquisition functions is also introduced to further robustify performance.
arXiv Detail & Related papers (2022-06-10T11:48:49Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, random feature-based kernel approximation is leveraged per GP model.
To further establish convergence of the proposed EGP-TS to the global optimum, analysis is conducted based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- Robust and Adaptive Temporal-Difference Learning Using An Ensemble of Gaussian Processes [70.80716221080118]
The paper takes a generative perspective on policy evaluation via temporal-difference (TD) learning.
The OS-GPTD approach is developed to estimate the value function for a given policy by observing a sequence of state-reward pairs.
To alleviate the limited expressiveness associated with a single fixed kernel, a weighted ensemble (E) of GP priors is employed to yield an alternative scheme.
arXiv Detail & Related papers (2021-12-01T23:15:09Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Sparse Gaussian Process Variational Autoencoders [24.86751422740643]
Existing approaches for performing inference in GP-DGMs do not support sparse GP approximations based on inducing points.
We develop the sparse Gaussian process variational autoencoder (SGP-VAE), characterised by the use of partial inference networks for parameterising sparse GP approximations.
arXiv Detail & Related papers (2020-10-20T10:19:56Z)
- Sequential Gaussian Processes for Online Learning of Nonstationary Functions [9.997259201098602]
We propose a sequential Monte Carlo algorithm to fit infinite mixtures of GPs that capture non-stationary behavior while allowing for online, distributed inference.
Our approach empirically improves performance over state-of-the-art methods for online GP estimation in the presence of non-stationarity in time-series data.
arXiv Detail & Related papers (2019-05-24T02:29:49Z)