A Kronecker product accelerated efficient sparse Gaussian Process
(E-SGP) for flow emulation
- URL: http://arxiv.org/abs/2312.10023v1
- Date: Wed, 13 Dec 2023 11:29:40 GMT
- Title: A Kronecker product accelerated efficient sparse Gaussian Process
(E-SGP) for flow emulation
- Authors: Yu Duan, Matthew Eaton, Michael Bluck
- Abstract summary: This paper introduces an efficient sparse Gaussian process (E-SGP) for the surrogate modelling of fluid mechanics.
It is a further development of the approximated sparse GP algorithm, combining the concepts of the efficient GP (E-GP) and the variational energy free sparse Gaussian process (VEF-SGP).
- Score: 2.563626165548781
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce an efficient sparse Gaussian process (E-SGP) for
the surrogate modelling of fluid mechanics. This novel Bayesian machine
learning algorithm allows efficient model training using databases of different
structures. It is a further development of the approximated sparse GP
algorithm, combining the concepts of the efficient GP (E-GP) and the variational
energy free sparse Gaussian process (VEF-SGP). The developed E-SGP approach exploits
the arbitrariness of inducing points and the monotonically increasing nature of
the objective function with respect to the number of inducing points in
VEF-SGP. By specifying the inducing points on the orthogonal grid/input
subspace and using the Kronecker product, E-SGP significantly improves
computational efficiency without imposing any constraints on the covariance
matrix or increasing the number of parameters that need to be optimised during
training.
The E-SGP algorithm developed in this paper outperforms E-GP not only in
scalability but also in model quality in terms of mean standardized logarithmic
loss (MSLL). The computational complexity of E-GP grows cubically with the size
of the structured training database, whereas E-SGP maintains computational
efficiency as long as the model resolution (i.e., the number of inducing
points) remains fixed. The examples show that E-SGP produces more accurate
predictions than E-GP when the two models have similar resolutions. E-GP
benefits from more training data but comes with higher computational demands,
while E-SGP achieves a comparable level of accuracy at lower computational
cost, making E-SGP a potentially preferable choice for fluid mechanics
problems. Furthermore, E-SGP produces more reasonable estimates of model
uncertainty, whilst E-GP is more likely to produce over-confident predictions.
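The computational saving described in the abstract rests on the standard Kronecker identity for separable kernels evaluated on an orthogonal grid. Below is a minimal sketch of that idea, not the authors' implementation; the squared-exponential kernel, grid sizes, and all function names are illustrative assumptions.

```python
# A minimal sketch of the Kronecker trick for grid-structured inducing
# points. Assumes a separable (product) kernel; grid sizes, lengthscales
# and names are illustrative only, not taken from the paper's code.
import numpy as np

def rbf_kernel_1d(x, y, lengthscale=1.0):
    """Squared-exponential kernel evaluated on a single input dimension."""
    d = x[:, None] - y[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

# Inducing points on an orthogonal grid: one 1-D grid per input dimension.
grid_1 = np.linspace(0.0, 1.0, 20)   # 20 points along dimension 1
grid_2 = np.linspace(0.0, 1.0, 30)   # 30 points along dimension 2
A = rbf_kernel_1d(grid_1, grid_1)    # 20 x 20 covariance factor
B = rbf_kernel_1d(grid_2, grid_2)    # 30 x 30 covariance factor

def kron_matvec(A, B, v):
    """Compute (A kron B) @ v without forming the Kronecker product.

    Uses (A kron B) vec(X) = vec(A X B^T) for row-major vec, so the
    cost is two small matrix products instead of one 600 x 600 product.
    """
    X = v.reshape(A.shape[0], B.shape[0])
    return (A @ X @ B.T).reshape(-1)

# Sanity check against the explicitly formed 600 x 600 covariance.
v = np.random.default_rng(0).standard_normal(600)
assert np.allclose(np.kron(A, B) @ v, kron_matvec(A, B, v))
```

A matrix-vector product with the full covariance costs O((m1*m2)^2), while the factored form costs O(m1*m2*(m1+m2)); per-dimension eigendecompositions give similarly cheap solves and log-determinants. This is consistent with the abstract's claim that cost stays flat while the inducing grid is fixed, with no constraint imposed on the covariance matrix itself.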
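The model-quality metric quoted above, MSLL, is the mean negative log predictive density at the test points minus the same loss under a trivial Gaussian fitted to the training targets (Rasmussen and Williams, 2006); more negative is better. A minimal sketch, assuming Gaussian predictive marginals, with illustrative names:

```python
import numpy as np

def msll(y_test, mu_pred, var_pred, y_train):
    """Mean standardized log loss for Gaussian predictive marginals.

    Negative log predictive density at each test point, minus the same
    loss under a trivial Gaussian fitted to the training targets; more
    negative values indicate better (and better-calibrated) models.
    """
    nll_model = 0.5 * np.log(2 * np.pi * var_pred) \
        + (y_test - mu_pred) ** 2 / (2 * var_pred)
    mu0, var0 = y_train.mean(), y_train.var()
    nll_trivial = 0.5 * np.log(2 * np.pi * var0) \
        + (y_test - mu0) ** 2 / (2 * var0)
    return float(np.mean(nll_model - nll_trivial))
```

Because the quadratic term divides by the predictive variance, an over-confident model (variance too small where errors are large) is penalised heavily, which is why MSLL is a natural metric for comparing the uncertainty estimates of E-SGP and E-GP.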
Related papers
- Domain Invariant Learning for Gaussian Processes and Bayesian Exploration [39.83530605880014]
We propose a domain invariant learning algorithm for Gaussian processes (DIL-GP) with a min-max optimization on the likelihood.
Numerical experiments demonstrate the superiority of DIL-GP for predictions on several synthetic and real-world datasets.
arXiv Detail & Related papers (2023-12-18T16:13:34Z)
- Weighted Ensembles for Active Learning with Adaptivity
This paper presents an ensemble of GP models with weights adapted to the labeled data collected incrementally.
Building on this novel ensemble GP (EGP) model, a suite of acquisition functions emerges based on the uncertainty and disagreement rules.
An adaptively weighted ensemble of EGP-based acquisition functions is also introduced to further robustify performance.
arXiv Detail & Related papers (2022-06-10T11:48:49Z)
- Scaling Gaussian Process Optimization by Evaluating a Few Unique Candidates Multiple Times [119.41129787351092]
We show that sequential black-box optimization based on GPs can be made efficient by sticking to a candidate solution for multiple evaluation steps.
We modify two well-established GP-Opt algorithms, GP-UCB and GP-EI, to adapt rules from batched GP-Opt.
arXiv Detail & Related papers (2022-01-30T20:42:14Z)
- A Sparse Expansion For Deep Gaussian Processes [33.29293167413832]
We propose an efficient scheme for accurate inference and efficient training based on a range of Gaussian processes called tensor Markov Gaussian processes (TMGP).
Our numerical experiments on synthetic models and real datasets show the superior computational efficiency of the deep TMGP (DTMGP) over existing DGP models.
arXiv Detail & Related papers (2021-12-11T00:59:33Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Incremental Ensemble Gaussian Processes [53.3291389385672]
We propose an incremental ensemble (IE-) GP framework, where an EGP meta-learner employs an ensemble of GP learners, each having a unique kernel belonging to a prescribed kernel dictionary.
With each GP expert leveraging a random feature-based approximation to perform scalable online prediction and model updates, the EGP meta-learner capitalizes on data-adaptive weights to synthesize the per-expert predictions.
The novel IE-GP is generalized to accommodate time-varying functions by modeling structured dynamics at the EGP meta-learner and within each GP learner.
arXiv Detail & Related papers (2021-10-13T15:11:25Z)
- Deep Gaussian Process Emulation using Stochastic Imputation [0.0]
We propose a novel deep Gaussian process (DGP) inference method for computer model emulation using imputation.
By stochastically imputing the latent layers, the approach transforms the DGP into the linked GP, a state-of-the-art surrogate model formed by linking a system of feed-forward coupled GPs.
arXiv Detail & Related papers (2021-07-04T10:46:23Z)
- Sparse Gaussian Process Variational Autoencoders [24.86751422740643]
Existing approaches for performing inference in GP-DGMs do not support sparse GP approximations based on inducing points.
We develop the sparse Gaussian process variational autoencoder (SGP-VAE), characterised by the use of partial inference networks for parameterising sparse GP approximations.
arXiv Detail & Related papers (2020-10-20T10:19:56Z)
- Likelihood-Free Inference with Deep Gaussian Processes [70.74203794847344]
Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain a comparable performance in unimodal cases.
arXiv Detail & Related papers (2020-06-18T14:24:05Z)
- Improving Sampling Accuracy of Stochastic Gradient MCMC Methods via Non-uniform Subsampling of Gradients [54.90670513852325]
We propose a non-uniform subsampling scheme to improve the sampling accuracy.
The exponentially weighted stochastic gradient (EWSG) method is designed so that a non-uniform stochastic-gradient MCMC method mimics the statistical behavior of a batch-gradient MCMC method.
In our practical implementation of EWSG, the non-uniform subsampling is performed efficiently via a Metropolis-Hastings chain on the data index; a generic sketch of such an index chain follows this list.
arXiv Detail & Related papers (2020-02-20T18:56:18Z)
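The EWSG entry above describes running a Metropolis-Hastings chain over data indices to perform non-uniform subsampling. The following is a generic illustration of that device, not code from the EWSG paper: a chain on indices whose stationary distribution is proportional to arbitrary unnormalised per-datum weights.

```python
import numpy as np

def mh_index_chain(weights, n_steps, seed=0):
    """Visit data indices with stationary probability ~ weights[i].

    Uniform proposals plus a Metropolis-Hastings accept/reject step;
    the weights never need to be normalised over the full dataset.
    """
    rng = np.random.default_rng(seed)
    n = len(weights)
    i = int(rng.integers(n))            # arbitrary starting index
    visited = []
    for _ in range(n_steps):
        j = int(rng.integers(n))        # uniform proposal over indices
        if rng.random() < weights[j] / weights[i]:
            i = j                       # accept with prob min(1, w_j/w_i)
        visited.append(i)
    return visited

# Indices with larger weight are visited proportionally more often.
w = np.array([1.0, 2.0, 4.0, 8.0])
freq = np.bincount(mh_index_chain(w, 50_000), minlength=4) / 50_000
print(freq)   # approaches w / w.sum() = [0.067, 0.133, 0.267, 0.533]
```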