Domain Invariant Learning for Gaussian Processes and Bayesian
Exploration
- URL: http://arxiv.org/abs/2312.11318v1
- Date: Mon, 18 Dec 2023 16:13:34 GMT
- Title: Domain Invariant Learning for Gaussian Processes and Bayesian
Exploration
- Authors: Xilong Zhao, Siyuan Bian, Yaoyun Zhang, Yuliang Zhang, Qinying Gu,
Xinbing Wang, Chenghu Zhou and Nanyang Ye
- Abstract summary: We propose a domain invariant learning algorithm for Gaussian processes (DIL-GP) with a min-max optimization on the likelihood.
Numerical experiments demonstrate the superiority of DIL-GP for predictions on several synthetic and real-world datasets.
- Score: 39.83530605880014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Out-of-distribution (OOD) generalization has long been a challenging
problem that remains largely unsolved. Gaussian processes (GP), a popular class
of probabilistic models especially in the small-data regime, are often presumed
to generalize well OOD. Surprisingly, their OOD generalization abilities have
been under-explored compared with other lines of GP research. In this paper, we
show that GPs are not immune to this problem and
propose a domain invariant learning algorithm for Gaussian processes (DIL-GP)
with a min-max optimization on the likelihood. DIL-GP discovers the
heterogeneity in the data and enforces invariance across the partitioned
subsets. We further extend DIL-GP to improve Bayesian optimization's
adaptability to changing environments. Numerical experiments demonstrate the
superiority of DIL-GP for prediction on several synthetic and real-world
datasets. We further demonstrate the effectiveness of the DIL-GP Bayesian
optimization method on a PID parameter tuning experiment for a quadrotor. The
full version and source code are available at:
https://github.com/Billzxl/DIL-GP.
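The core of DIL-GP is a min-max objective: GP hyperparameters are scored by their worst-performing data subset rather than by the pooled likelihood, so the fit cannot rely on patterns that hold in only one domain. The sketch below illustrates this worst-case training in Python, assuming the domain partition is already given; the toy data and all names are illustrative, not taken from the authors' repository.

```python
# Minimal worst-case GP hyperparameter fitting, assuming two known domains.
import numpy as np
from scipy.optimize import minimize

def rbf_kernel(X1, X2, lengthscale, variance):
    """Squared-exponential kernel."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_nll(log_params, X, y):
    """Negative log marginal likelihood of a zero-mean GP (constant dropped)."""
    lengthscale, variance, noise = np.exp(log_params)
    K = rbf_kernel(X, X, lengthscale, variance) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

def worst_case_nll(log_params, subsets):
    """Min-max surrogate: score hyperparameters by their worst subset."""
    return max(gp_nll(log_params, X, y) for X, y in subsets)

# Two toy "domains" with shifted inputs and different noise levels.
rng = np.random.default_rng(0)
X1 = rng.uniform(0, 1, (20, 1)); y1 = np.sin(6 * X1[:, 0]) + 0.1 * rng.standard_normal(20)
X2 = rng.uniform(2, 3, (20, 1)); y2 = np.sin(6 * X2[:, 0]) + 0.3 * rng.standard_normal(20)

# Nelder-Mead because max() makes the objective non-smooth.
res = minimize(worst_case_nll, x0=np.zeros(3),
               args=([(X1, y1), (X2, y2)],), method="Nelder-Mead")
print("fitted log-hyperparameters:", res.x)
```

In the paper the partition itself is discovered inside the min-max loop; here it is fixed up front purely to keep the example short.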
Related papers
- Deep Transformed Gaussian Processes [0.0]
Transformed Gaussian Processes (TGPs) are processes specified by transforming samples from the joint distribution of a prior process (typically a GP) using an invertible transformation.
We propose a generalization of TGPs named Deep Transformed Gaussian Processes (DTGPs), which follows the trend of concatenating layers of processes.
Experiments evaluate the proposed DTGPs on multiple regression datasets, achieving good scalability and performance.
arXiv Detail & Related papers (2023-10-27T16:09:39Z)
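As a quick illustration of the TGP construction above (a toy sketch; the sinh-arcsinh warp is one common choice of invertible map, not necessarily the paper's), one can draw a joint sample from a GP prior and push it through an invertible transformation:

```python
# Sample from a GP prior, then warp the sample with an invertible map.
import numpy as np

rng = np.random.default_rng(1)
X = np.linspace(0, 1, 50)[:, None]

# RBF-kernel GP prior; jitter keeps the Cholesky factorization stable.
K = np.exp(-0.5 * (X - X.T) ** 2 / 0.1**2) + 1e-8 * np.eye(50)
f = np.linalg.cholesky(K) @ rng.standard_normal(50)   # f ~ GP(0, K)

def warp(f, skew=0.5, tail=1.2):
    """Invertible sinh-arcsinh transformation."""
    return np.sinh(tail * np.arcsinh(f) + skew)

def warp_inverse(g, skew=0.5, tail=1.2):
    return np.sinh((np.arcsinh(g) - skew) / tail)

g = warp(f)                               # a draw from the transformed process
assert np.allclose(warp_inverse(g), f)    # invertibility check
```

The deep variant (DTGP) stacks several such transformed layers.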
- Shallow and Deep Nonparametric Convolutions for Gaussian Processes [0.0]
We introduce a nonparametric process convolution formulation for GPs that alleviates the weaknesses of existing formulations by using a functional sampling approach.
We propose a composition of these nonparametric convolutions that serves as an alternative to classic deep GP models.
arXiv Detail & Related papers (2022-06-17T19:03:04Z)
- Debiased Batch Normalization via Gaussian Process for Generalizable Person Re-Identification [84.32086702849338]
Generalizable person re-identification aims to learn, from only a few labeled source domains, a model that performs well on unseen domains.
We propose a novel Debiased Batch Normalization via Gaussian Process approach (GDNorm) for generalizable person re-identification.
arXiv Detail & Related papers (2022-03-03T14:14:51Z)
- Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times [119.41129787351092]
We show that sequential black-box optimization based on GPs can be made efficient by sticking to a candidate solution for multiple evaluation steps.
We modify two well-established GP-Opt algorithms, GP-UCB and GP-EI, to adapt rules from batched GP-Opt.
arXiv Detail & Related papers (2022-01-30T20:42:14Z)
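The "stick to a candidate" modification above is easy to picture (the sketch below is schematic, with an illustrative toy objective and hand-set constants, not the paper's algorithms): each point chosen by the acquisition rule is evaluated B times before the surrogate is refit, so the number of distinct candidates, and hence the cost of GP updates, shrinks by a factor of B.

```python
# GP-UCB where each selected candidate is evaluated B times in a row.
import numpy as np

def gp_posterior(Xtr, ytr, Xte, ls=0.2, noise=1e-2):
    """Posterior mean/std of a zero-mean RBF-kernel GP on a 1-D grid."""
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls**2)
    Kinv = np.linalg.inv(k(Xtr, Xtr) + noise * np.eye(len(Xtr)))
    Ks = k(Xte, Xtr)
    mu = Ks @ Kinv @ ytr
    var = np.diag(k(Xte, Xte) - Ks @ Kinv @ Ks.T)
    return mu, np.sqrt(np.maximum(var, 1e-12))

f = lambda x: -np.sin(3 * x) - x**2 + 0.7 * x      # toy black-box objective
rng = np.random.default_rng(2)
grid = np.linspace(-1, 2, 200)
X, y = np.array([0.0]), np.array([f(0.0)])
B = 3                                              # evaluations per candidate
for _ in range(5):                                 # only 5 surrogate refits
    mu, sd = gp_posterior(X, y, grid)
    x_next = grid[np.argmax(mu + 2.0 * sd)]        # UCB acquisition
    for _ in range(B):                             # stick to the candidate
        X = np.append(X, x_next)
        y = np.append(y, f(x_next) + 0.05 * rng.standard_normal())
print("best observed:", X[np.argmax(y)], y.max())
```

Repeated evaluations also average out observation noise at the chosen points.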
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random-variable vectors and shares its parameters across all of them.
The resulting Non-Gaussian Gaussian Processes (NGGPs) outperform competing state-of-the-art approaches on a diverse set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging a random-feature-based approximation to perform online prediction and model updates with scalability, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
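The ensemble idea above can be condensed to a few lines. This is a simplified offline sketch, not the incremental algorithm: here the data-adaptive expert weights come from leave-one-out predictive likelihoods, standing in for the online updates.

```python
# Several GP experts with different kernels; the meta-learner weights them
# by their accumulated predictive likelihoods.
import numpy as np
from scipy.stats import norm

def gp_predict(Xtr, ytr, xte, ls, noise=1e-2):
    """Predictive mean/std at a single test point for an RBF-kernel GP."""
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / ls**2)
    Kinv = np.linalg.inv(k(Xtr, Xtr) + noise * np.eye(len(Xtr)))
    Ks = k(np.atleast_1d(xte), Xtr)
    mu = (Ks @ Kinv @ ytr)[0]
    var = 1.0 + noise - (Ks @ Kinv @ Ks.T)[0, 0]
    return mu, np.sqrt(max(var, 1e-12))

rng = np.random.default_rng(3)
X = rng.uniform(-2, 2, 30); y = np.sin(2 * X) + 0.1 * rng.standard_normal(30)
lengthscales = [0.1, 0.5, 2.0]            # the "kernel dictionary"
log_w = np.zeros(len(lengthscales))       # log ensemble weights

# Score each expert by leave-one-out predictive likelihood.
for i in range(len(X)):
    mask = np.arange(len(X)) != i
    for j, ls in enumerate(lengthscales):
        mu, sd = gp_predict(X[mask], y[mask], X[i], ls)
        log_w[j] += norm.logpdf(y[i], mu, sd)

w = np.exp(log_w - log_w.max()); w /= w.sum()
print("data-adaptive expert weights:", dict(zip(lengthscales, w.round(3))))
```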
- Latent Map Gaussian Processes for Mixed Variable Metamodeling [0.0]
We introduce latent map Gaussian processes (LMGPs) that inherit the attractive properties of GPs but are also applicable to mixed data.
We show that LMGPs can handle variable-length inputs and provide insights into how qualitative inputs affect the response or interact with each other.
We also provide a neural network interpretation of LMGPs and study the effect of prior latent representations on their performance.
arXiv Detail & Related papers (2021-02-07T22:21:53Z)
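The gist of the latent-map idea above: each level of a qualitative input is assigned a point in a small latent space, so a standard continuous kernel can act on the concatenation of numeric features and latent coordinates. The sketch below uses fixed, made-up latent points for illustration; in LMGP these positions are estimated jointly with the other hyperparameters.

```python
# A mixed-variable kernel via latent embeddings of qualitative levels.
import numpy as np

latent = {"steel": np.array([0.0, 0.0]),   # illustrative 2-D latent points,
          "alloy": np.array([0.9, 0.1]),   # one per level of a hypothetical
          "pvc":   np.array([-0.2, 1.4])}  # qualitative variable "material"

def embed(x_num, material):
    """Concatenate numeric features with the level's latent coordinates."""
    return np.concatenate([np.atleast_1d(x_num), latent[material]])

def rbf(a, b, ls=1.0):
    return np.exp(-0.5 * np.sum((a - b) ** 2) / ls**2)

# The mixed-variable kernel is just an RBF on the embedded representation.
z1 = embed(0.3, "steel")
z2 = embed(0.5, "alloy")
print("k(x1, x2) =", rbf(z1, z2))
```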
- Sparse Gaussian Process Variational Autoencoders [24.86751422740643]
Existing approaches for performing inference in GP-DGMs do not support sparse GP approximations based on inducing points.
We develop the sparse Gaussian process variational autoencoder (SGP-VAE), characterised by the use of partial inference networks for parameterising sparse GP approximations.
arXiv Detail & Related papers (2020-10-20T10:19:56Z)
- Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain a comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z)