Likelihood-Free Inference with Deep Gaussian Processes
- URL: http://arxiv.org/abs/2006.10571v2
- Date: Tue, 5 Oct 2021 11:20:24 GMT
- Title: Likelihood-Free Inference with Deep Gaussian Processes
- Authors: Alexander Aushev, Henri Pesonen, Markus Heinonen, Jukka Corander,
Samuel Kaski
- Abstract summary: Surrogate models have been successfully used in likelihood-free inference to decrease the number of simulator evaluations.
We propose a Deep Gaussian Process (DGP) surrogate model that can handle more irregularly behaved target distributions.
Our experiments show how DGPs can outperform GPs on objective functions with multimodal distributions and maintain a comparable performance in unimodal cases.
- Score: 70.74203794847344
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, surrogate models have been successfully used in
likelihood-free inference to decrease the number of simulator evaluations. The
current state-of-the-art performance for this task has been achieved by
Bayesian Optimization with Gaussian Processes (GPs). While this combination
works well for unimodal target distributions, it restricts the flexibility
and applicability of Bayesian Optimization for accelerating likelihood-free
inference more generally. We address this problem by proposing a Deep Gaussian
Process (DGP) surrogate model that can handle more irregularly behaved target
distributions. Our experiments show how DGPs can outperform GPs on objective
functions with multimodal distributions and maintain a comparable performance
in unimodal cases. This confirms that DGPs as surrogate models can extend the
applicability of Bayesian Optimization for likelihood-free inference (BOLFI),
while adding computational overhead that remains negligible for computationally
intensive simulators.
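As a hedged illustration of the surrogate-modelling loop described above, the sketch below runs a minimal BOLFI-style acquisition loop with an ordinary GP surrogate (standing in for the paper's DGP) on a hypothetical one-dimensional toy simulator; the simulator, acquisition constants, and bandwidth are illustrative assumptions, not the authors' code.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
observed = 2.0  # observed summary statistic

def simulate(theta):
    # Hypothetical cheap stand-in for an expensive simulator.
    return theta + rng.normal(scale=0.5)

def discrepancy(theta):
    # Distance between simulated and observed summaries.
    return (simulate(theta) - observed) ** 2

# Initial design over the parameter space.
thetas = rng.uniform(-5.0, 5.0, size=10)
discs = np.array([discrepancy(t) for t in thetas])
grid = np.linspace(-5.0, 5.0, 400).reshape(-1, 1)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
for _ in range(25):
    gp.fit(thetas.reshape(-1, 1), discs)
    mean, std = gp.predict(grid, return_std=True)
    # Lower-confidence-bound acquisition: simulate where low discrepancy is plausible.
    theta_next = grid[np.argmin(mean - 2.0 * std), 0]
    thetas = np.append(thetas, theta_next)
    discs = np.append(discs, discrepancy(theta_next))

# Unnormalised posterior proxy from the surrogate's predicted discrepancy.
gp.fit(thetas.reshape(-1, 1), discs)
mean, _ = gp.predict(grid, return_std=True)
eps = np.quantile(discs, 0.1)  # illustrative bandwidth
post = np.exp(-mean / (2.0 * eps + 1e-12))
post /= post.sum()
```

Replacing the GP with a deep GP, as the paper proposes, targets exactly the cases where the discrepancy surface is multimodal and a single stationary kernel fits it poorly.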
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of kernels that encode these known invariances.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
- Neural Operator Variational Inference based on Regularized Stein Discrepancy for Deep Gaussian Processes [23.87733307119697]
We introduce Neural Operator Variational Inference (NOVI) for Deep Gaussian Processes.
NOVI uses a neural generator to obtain a sampler and minimizes the Regularized Stein Discrepancy in L2 space between the generated distribution and the true posterior.
We demonstrate that the bias introduced by our method can be controlled by multiplying the divergence with a constant, which leads to robust error control and ensures the stability and precision of the algorithm.
arXiv Detail & Related papers (2023-09-22T06:56:35Z)
- Fantasizing with Dual GPs in Bayesian Optimization and Active Learning [14.050425158209826]
We focus on 'fantasizing' batch acquisition functions that need the ability to condition on new fantasized data.
By using a sparse Dual GP parameterization, we gain linear scaling with batch size as well as one-step updates for non-Gaussian likelihoods.
arXiv Detail & Related papers (2022-11-02T11:37:06Z)
- Surrogate modeling for Bayesian optimization beyond a single Gaussian process [62.294228304646516]
We propose a novel Bayesian surrogate model to balance exploration with exploitation of the search space.
To endow function sampling with scalability, a random feature-based kernel approximation is leveraged per GP model (a random-feature sketch follows this list).
To establish convergence of the proposed EGP-TS (ensemble-GP Thompson sampling) to the global optimum, an analysis is conducted based on the notion of Bayesian regret.
arXiv Detail & Related papers (2022-05-27T16:43:10Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Gaussian Processes to speed up MCMC with automatic exploratory-exploitation effect [1.0742675209112622]
We present a two-stage Metropolis-Hastings algorithm for sampling from probabilistic models.
The key feature of the approach is the ability to learn the target distribution from scratch while sampling; a delayed-acceptance sketch is given after this list.
arXiv Detail & Related papers (2021-09-28T17:43:25Z)
- Randomised Gaussian Process Upper Confidence Bound for Bayesian Optimisation [60.93091603232817]
We develop a modified Gaussian process upper confidence bound (GP-UCB) acquisition function.
This is done by sampling the exploration-exploitation trade-off parameter from a distribution (see the randomised-UCB sketch after this list).
We prove that this allows the expected trade-off parameter to be altered to better suit the problem without compromising a bound on the function's Bayesian regret.
arXiv Detail & Related papers (2020-06-08T00:28:41Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper subproblems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)
- Sparse Gaussian Processes Revisited: Bayesian Approaches to Inducing-Variable Approximations [27.43948386608]
Variational inference techniques based on inducing variables provide an elegant framework for scalable estimation in Gaussian process (GP) models.
In this work we challenge the common wisdom that optimizing the inducing inputs in the variational framework yields optimal performance.
arXiv Detail & Related papers (2020-03-06T08:53:18Z)
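The sketches below illustrate three of the techniques summarised in the list above; all objectives, priors, and constants are illustrative assumptions, not the papers' code.

First, the random feature-based kernel approximation used for scalable function sampling in EGP-TS, here in its standard random Fourier feature form for an RBF kernel:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_feat = 2, 200
lengthscale = 1.0

# Spectral samples for an RBF kernel: cosine features give
# k(x, x') ~= phi(x) @ phi(x')  (random Fourier features).
W = rng.normal(scale=1.0 / lengthscale, size=(n_feat, dim))
b = rng.uniform(0.0, 2.0 * np.pi, size=n_feat)

def phi(X):
    return np.sqrt(2.0 / n_feat) * np.cos(X @ W.T + b)

# A weight draw turns the approximate GP prior into a cheap linear model,
# so a Thompson-sampling step can maximise one sampled function per GP.
w = rng.normal(size=n_feat)

def f_sample(X):
    return phi(X) @ w

X = rng.uniform(-1.0, 1.0, size=(5, dim))
print(f_sample(X))  # one sampled function, evaluated at five points
```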
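Second, a delayed-acceptance reading of the two-stage Metropolis-Hastings idea: a GP surrogate of the log-target screens proposals cheaply, and only survivors pay for the expensive target, with a correction that keeps the chain exact. The Gaussian log-target is a toy stand-in:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(2)

def log_target(theta):
    # Expensive model in practice; a standard normal stands in here.
    return -0.5 * theta ** 2

# Surrogate trained on the points evaluated so far.
X = rng.uniform(-3.0, 3.0, size=(15, 1))
y = np.array([log_target(x[0]) for x in X])
gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)

theta, lt = 0.0, log_target(0.0)
samples = []
for _ in range(1000):
    prop = theta + rng.normal(scale=1.0)
    s_prop, s_cur = gp.predict(np.array([[prop], [theta]]))
    # Stage 1: cheap screen with the surrogate mean.
    if np.log(rng.uniform()) < s_prop - s_cur:
        # Stage 2: correct with the true target so the chain stays exact.
        lt_prop = log_target(prop)
        if np.log(rng.uniform()) < (lt_prop - lt) - (s_prop - s_cur):
            theta, lt = prop, lt_prop
    samples.append(theta)
```

The paper's learning of the target from scratch while sampling would additionally refit the surrogate on second-stage evaluations; the sketch keeps it fixed for brevity.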
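Third, the randomised GP-UCB acquisition: instead of a fixed exploration-exploitation parameter, a fresh trade-off parameter is drawn each iteration (a Gamma draw here stands in for whatever distribution the paper prescribes):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(3)

def objective(x):
    # Toy noisy 1-D objective, purely illustrative.
    return np.sin(3.0 * x) + 0.1 * rng.normal()

X = rng.uniform(0.0, 3.0, size=(5, 1))
y = np.array([objective(x[0]) for x in X])
grid = np.linspace(0.0, 3.0, 300).reshape(-1, 1)

for _ in range(20):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    mean, std = gp.predict(grid, return_std=True)
    beta = rng.gamma(shape=2.0, scale=1.0)  # sampled trade-off parameter
    x_next = grid[np.argmax(mean + np.sqrt(beta) * std)]
    X = np.vstack([X, x_next])
    y = np.append(y, objective(x_next[0]))
```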