Bayesian Optimization through Gaussian Cox Process Models for
Spatio-temporal Data
- URL: http://arxiv.org/abs/2401.14544v1
- Date: Thu, 25 Jan 2024 22:26:15 GMT
- Title: Bayesian Optimization through Gaussian Cox Process Models for
Spatio-temporal Data
- Authors: Yongsheng Mei, Mahdi Imani, Tian Lan
- Abstract summary: We propose a novel maximum a posteriori inference of Gaussian Cox processes.
We further develop a Nyström approximation for efficient computation.
- Score: 27.922624489449017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian optimization (BO) has established itself as a leading strategy for
efficiently optimizing expensive-to-evaluate functions. Existing BO methods
mostly rely on Gaussian process (GP) surrogate models and are not applicable to
(doubly-stochastic) Gaussian Cox processes, where the observation process is
modulated by a latent intensity function modeled as a GP. In this paper, we
propose a novel maximum a posteriori inference of Gaussian Cox processes. It
leverages the Laplace approximation and change of kernel technique to transform
the problem into a new reproducing kernel Hilbert space, where it becomes more
tractable computationally. It enables us to obtain both a functional posterior
of the latent intensity function and the covariance of the posterior, thus
extending existing works that often focus on specific link functions or
estimating the posterior mean. Using the result, we propose a BO framework
based on the Gaussian Cox process model and further develop a Nyström
approximation for efficient computation. Extensive evaluations on various
synthetic and real-world datasets demonstrate significant improvement over
state-of-the-art inference solutions for Gaussian Cox processes, as well as
effective BO with a wide range of acquisition functions designed through the
underlying Gaussian Cox process model.
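As a rough illustration of the two computational ingredients the abstract names, the sketch below pairs a Nyström low-rank approximation of a GP kernel matrix with a UCB-style acquisition computed over an estimated intensity surface. It is a minimal toy example, not the authors' implementation: the kernel-smoothed intensity stands in for the paper's MAP/Laplace posterior, and all function and variable names are hypothetical.
```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel between row vectors of A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def nystrom_features(X, Z, lengthscale=0.2):
    """Nystrom feature map K(X, Z) K(Z, Z)^{-1/2}, so that
    Phi @ Phi.T approximates the full kernel matrix K(X, X)."""
    Kzz = rbf_kernel(Z, Z, lengthscale) + 1e-8 * np.eye(len(Z))
    Kxz = rbf_kernel(X, Z, lengthscale)
    U, s, _ = np.linalg.svd(Kzz)
    Kzz_inv_sqrt = U @ np.diag(1.0 / np.sqrt(s)) @ U.T
    return Kxz @ Kzz_inv_sqrt

# Toy 1-D event data from an inhomogeneous point process.
rng = np.random.default_rng(0)
events = rng.uniform(0.0, 1.0, size=60)[:, None]    # observed event locations
grid = np.linspace(0.0, 1.0, 200)[:, None]          # candidate query points
inducing = np.linspace(0.0, 1.0, 15)[:, None]       # Nystrom landmark points

# Crude intensity estimate on the grid (kernel smoothing stands in for the
# paper's MAP / Laplace posterior of the latent intensity function).
lam_mean = rbf_kernel(grid, events).sum(axis=1)

# Low-rank variance proxy: diag(Kxx) - diag(Kxz Kzz^{-1} Kzx), the usual
# Nystrom / sparse-GP variance reduction term.
Phi = nystrom_features(grid, inducing)
lam_var = np.maximum(rbf_kernel(grid, grid).diagonal() - np.sum(Phi**2, axis=1), 0.0)

# UCB-style acquisition over the intensity surrogate.
beta = 2.0
acquisition = lam_mean + beta * np.sqrt(lam_var)
next_query = grid[np.argmax(acquisition)]
print("next query point:", next_query)
```
For a fixed number of landmark points, the Nyström features keep the cost of the variance term linear in the number of candidate points, which is the kind of saving the abstract's Nyström approximation targets.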
Related papers
- Sample-efficient Bayesian Optimisation Using Known Invariances [56.34916328814857]
We show that vanilla and constrained BO algorithms are inefficient when optimising invariant objectives.
We derive a bound on the maximum information gain of these invariant kernels.
We use our method to design a current drive system for a nuclear fusion reactor, finding a high-performance solution.
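For a concrete picture of what an invariant kernel looks like, here is a minimal sketch (my own illustration, not code from the paper) using the standard group-averaging construction, assuming the objective is invariant to sign flips of the input:
```python
import numpy as np

def base_kernel(x, y, lengthscale=1.0):
    # Standard squared-exponential kernel on vectors x, y.
    return np.exp(-0.5 * np.sum((x - y) ** 2) / lengthscale**2)

def invariant_kernel(x, y, group):
    # Average the base kernel over all transformations in the symmetry group;
    # for an isotropic base kernel and orthogonal group elements, GP samples
    # drawn with this kernel are invariant under the group.
    return np.mean([base_kernel(x, g(y)) for g in group])

# Example group: identity and sign flip (objective assumed symmetric about 0).
group = [lambda v: v, lambda v: -v]
x = np.array([0.3, -0.7])
y = np.array([-0.3, 0.7])
print(invariant_kernel(x, y, group))   # treats y and -y identically
```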
arXiv Detail & Related papers (2024-10-22T12:51:46Z)
- Exact Bayesian Gaussian Cox Processes Using Random Integral [0.0]
Posterior inference of an intensity function involves an intractable integral in the likelihood, resulting in a doubly intractable posterior distribution.
We propose a nonparametric Bayesian approach for estimating the intensity function of an inhomogeneous Poisson process without reliance on large data augmentation or approximations of the likelihood function.
We demonstrate the utility of our method in three real-world scenarios including temporal and spatial event data, as well as aggregated time count data collected at multiple resolutions.
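The intractable integral mentioned above is the compensator term in the inhomogeneous Poisson likelihood, log p(events | lambda) = sum_i log lambda(t_i) - integral of lambda(t) over the observation window. A minimal sketch of that likelihood with the integral handled by plain Monte Carlo (an illustrative stand-in, not the paper's random-integral construction; names are hypothetical):
```python
import numpy as np

def poisson_process_loglik(intensity, events, domain, rng, n_mc=2000):
    """Log-likelihood of an inhomogeneous Poisson process:
    sum_i log lambda(t_i) - integral of lambda over the domain,
    with the integral estimated by plain Monte Carlo."""
    lo, hi = domain
    term_events = np.sum(np.log(intensity(events)))
    u = rng.uniform(lo, hi, size=n_mc)
    term_integral = (hi - lo) * np.mean(intensity(u))
    return term_events - term_integral

rng = np.random.default_rng(1)
intensity = lambda t: 5.0 + 4.0 * np.sin(2 * np.pi * t)   # example intensity > 0
events = np.array([0.05, 0.1, 0.2, 0.22, 0.6, 0.9])
print(poisson_process_loglik(intensity, events, (0.0, 1.0), rng))
```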
arXiv Detail & Related papers (2024-06-28T08:11:33Z)
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- Enhancing Gaussian Process Surrogates for Optimization and Posterior Approximation via Random Exploration [2.984929040246293]
We propose novel noise-free Bayesian optimization strategies that rely on a random exploration step to enhance the accuracy of Gaussian process surrogate models.
The new algorithms retain the ease of implementation of the classical GP-UCB algorithm, while the additional exploration step facilitates their convergence.
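A minimal sketch of the kind of modification described above: a standard GP-UCB loop in which, with some probability, the query is replaced by a uniformly random exploration point. This is illustrative only; the exploration schedule, constants, and convergence guarantees in the paper may differ.
```python
import numpy as np

def gp_posterior(X, y, Xs, lengthscale=0.2, noise=1e-6):
    """Exact GP posterior mean/std on test points Xs given data (X, y)."""
    k = lambda A, B: np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / lengthscale**2)
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(Xs, X)
    mean = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mean, np.sqrt(np.maximum(var, 0.0))

f = lambda x: np.sin(3 * x) + 0.5 * x                  # toy objective
rng = np.random.default_rng(2)
X = rng.uniform(0, 2, 3); y = f(X)
grid = np.linspace(0, 2, 400)

for t in range(10):
    if rng.random() < 0.2:                             # random exploration step
        x_next = rng.uniform(0, 2)
    else:                                              # classical GP-UCB step
        mean, std = gp_posterior(X, y, grid)
        x_next = grid[np.argmax(mean + 2.0 * std)]
    X = np.append(X, x_next); y = np.append(y, f(x_next))

print("best value found:", y.max())
```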
arXiv Detail & Related papers (2024-01-30T14:16:06Z)
- Promises and Pitfalls of the Linearized Laplace in Bayesian Optimization [73.80101701431103]
The linearized-Laplace approximation (LLA) has been shown to be effective and efficient in constructing Bayesian neural networks.
We study the usefulness of the LLA in Bayesian optimization and highlight its strong performance and flexibility.
arXiv Detail & Related papers (2023-04-17T14:23:43Z)
- Extrinsic Bayesian Optimizations on Manifolds [1.3477333339913569]
We propose an extrinsic Bayesian optimization (eBO) framework for general optimization problems on manifolds.
Our approach is to employ extrinsic Gaussian processes by first embedding the manifold into some higher dimensional Euclidean space.
This leads to efficient and scalable algorithms for optimization over complex manifolds.
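A minimal sketch of the embedding idea (a toy example on the unit sphere, not the paper's eBO code): represent manifold points by their coordinates in an ambient Euclidean space and apply an ordinary kernel to those coordinates.
```python
import numpy as np

def embed_sphere(theta_phi):
    """Embed points on the unit sphere (given as angles) into R^3."""
    theta, phi = theta_phi[:, 0], theta_phi[:, 1]
    return np.column_stack([np.sin(theta) * np.cos(phi),
                            np.sin(theta) * np.sin(phi),
                            np.cos(theta)])

def extrinsic_kernel(A, B, lengthscale=0.5):
    """Ordinary RBF kernel applied to the embedded (ambient-space) coordinates."""
    EA, EB = embed_sphere(A), embed_sphere(B)
    d2 = np.sum(EA**2, 1)[:, None] + np.sum(EB**2, 1)[None, :] - 2 * EA @ EB.T
    return np.exp(-0.5 * d2 / lengthscale**2)

pts = np.array([[0.1, 0.0], [1.5, 2.0], [3.0, 1.0]])   # (theta, phi) pairs
print(extrinsic_kernel(pts, pts))                       # usable as a GP covariance
```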
arXiv Detail & Related papers (2022-12-21T06:10:12Z)
- Gaussian Processes and Statistical Decision-making in Non-Euclidean Spaces [96.53463532832939]
We develop techniques for broadening the applicability of Gaussian processes.
We introduce a wide class of efficient approximations built from this viewpoint.
We develop a collection of Gaussian process models over non-Euclidean spaces.
arXiv Detail & Related papers (2022-02-22T01:42:57Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Directed particle swarm optimization with Gaussian-process-based function forecasting [15.733136147164032]
Particle swarm optimization (PSO) is an iterative search method that moves a set of candidate solutions around a search space towards the best known global and local solutions with randomized step lengths.
We show that our algorithm attains desirable properties for exploratory and exploitative behavior.
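A minimal sketch of the baseline PSO update the summary describes, i.e. moving each particle toward its personal best and the swarm's global best with randomized step lengths; the GP-based function forecasting the paper adds on top is not shown here.
```python
import numpy as np

def pso_minimize(f, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=3):
    """Plain particle swarm optimization: each particle moves toward its own best
    and the swarm's best position with randomized step lengths."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))          # particle positions
    v = np.zeros_like(x)                                # particle velocities
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[np.argmin(pbest_val)]
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))      # randomized step lengths
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[np.argmin(pbest_val)]
    return gbest, pbest_val.min()

print(pso_minimize(lambda p: np.sum(p**2)))             # toy quadratic objective
```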
arXiv Detail & Related papers (2021-02-08T13:02:57Z)
- Global Optimization of Gaussian processes [52.77024349608834]
We propose a reduced-space formulation with Gaussian processes trained on few data points.
The approach also leads to significantly smaller and computationally cheaper sub-problems for lower bounding.
In total, the proposed method reduces the time to convergence by orders of magnitude.
arXiv Detail & Related papers (2020-05-21T20:59:11Z)