Consistency of some sequential experimental design strategies for
excursion set estimation based on vector-valued Gaussian processes
- URL: http://arxiv.org/abs/2310.07315v1
- Date: Wed, 11 Oct 2023 09:02:03 GMT
- Title: Consistency of some sequential experimental design strategies for
excursion set estimation based on vector-valued Gaussian processes
- Authors: Philip Stange and David Ginsbourger
- Abstract summary: We tackle the extension to the vector-valued case of consistency results for Stepwise Uncertainty Reduction sequential experimental design strategies.
We apply these results to the Integrated Bernoulli Variance and the Expected Measure Variance uncertainty functionals employed in [Fossum et al., Learning excursion sets of vector-valued Gaussian random fields for autonomous ocean sampling, The Annals of Applied Statistics 15, 2021] for the estimation of excursion sets of vector-valued functions.
- Score: 0.32634122554914
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We tackle the extension to the vector-valued case of consistency results for
Stepwise Uncertainty Reduction sequential experimental design strategies
established in [Bect et al., A supermartingale approach to Gaussian process
based sequential design of experiments, Bernoulli 25, 2019]. This lead us in
the first place to clarify, assuming a compact index set, how the connection
between continuous Gaussian processes and Gaussian measures on the Banach space
of continuous functions carries over to vector-valued settings. From there, a
number of concepts and properties from the aforementioned paper can be readily
extended. However, vector-valued settings do complicate matters for some
results, mainly due to the lack of continuity of the pseudo-inverse mapping
that affects the conditional mean and covariance function given finitely many
pointwise observations. We apply the obtained results to the Integrated Bernoulli
Variance and the Expected Measure Variance uncertainty functionals employed in
[Fossum et al., Learning excursion sets of vector-valued Gaussian random fields
for autonomous ocean sampling, The Annals of Applied Statistics 15, 2021] for
the estimation of excursion sets of vector-valued functions.
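For readers who want the mechanics in concrete form, below is a minimal sketch of a SUR loop driven by the Integrated Bernoulli Variance, reduced to a scalar noise-free GP on [0, 1] (the paper treats vector-valued processes). The kernel, the helper names (`posterior`, `ibv`, `sur_step`), the Monte Carlo approximation of the one-step-ahead expectation, and the toy black box `f` are all illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def k(a, b, ell=0.3):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

def posterior(X, y, grid, noise=1e-6):
    """GP posterior mean and standard deviation on a grid of points."""
    K = k(X, X) + noise * np.eye(len(X))
    Ks = k(grid, X)
    m = Ks @ np.linalg.solve(K, y)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return m, np.sqrt(np.clip(var, 1e-12, None))

def ibv(m, s, t):
    """Integrated Bernoulli Variance of the excursion event {f > t},
    discretized as a sum over the grid: sum_x p(x) (1 - p(x))."""
    p = norm.sf((t - m) / s)
    return np.sum(p * (1.0 - p))

def sur_step(X, y, grid, t, n_mc=20, rng=None):
    """One SUR step: choose the candidate minimizing the expected IBV
    after a (simulated) evaluation there; the expectation over the
    unknown response is approximated by Monte Carlo."""
    rng = np.random.default_rng() if rng is None else rng
    m, s = posterior(X, y, grid)
    scores = []
    for j, u in enumerate(grid):
        ys = m[j] + s[j] * rng.standard_normal(n_mc)  # plausible outcomes at u
        vals = [ibv(*posterior(np.append(X, u), np.append(y, yu), grid), t)
                for yu in ys]
        scores.append(np.mean(vals))
    return grid[int(np.argmin(scores))]

# toy run: estimate {x : f(x) > 0.5} for a hypothetical black box f
f = lambda x: np.sin(6 * x)
grid = np.linspace(0.0, 1.0, 60)
X, y = np.array([0.1, 0.9]), f(np.array([0.1, 0.9]))
for _ in range(6):
    u = sur_step(X, y, grid, t=0.5, rng=np.random.default_rng(0))
    X, y = np.append(X, u), np.append(y, f(u))
m, s = posterior(X, y, grid)
estimated_set = grid[norm.sf((0.5 - m) / s) > 0.5]  # plug-in excursion set
```

Consistency, in the sense studied here and in [Bect et al., 2019], concerns this residual uncertainty functional: roughly, under suitable conditions the IBV driven down by such a sequential design vanishes almost surely as the number of evaluations grows, and the paper establishes when this carries over to vector-valued processes.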
Related papers
- Refined Risk Bounds for Unbounded Losses via Transductive Priors [58.967816314671296]
We revisit the sequential variants of linear regression with the squared loss, classification problems with hinge loss, and logistic regression.
Our key tools are based on the exponential weights algorithm with carefully chosen transductive priors.
arXiv Detail & Related papers (2024-10-29T00:01:04Z)
- Generalized Variational Inference in Function Spaces: Gaussian Measures meet Bayesian Deep Learning [9.106412307976067]
We develop a framework for generalized variational inference in infinite-dimensional function spaces.
We use it to construct a method termed Gaussian Wasserstein inference (GWI).
An exciting application of GWI is the ability to use deep neural networks in its variational parametrisation.
arXiv Detail & Related papers (2022-05-12T20:10:31Z)
- Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z)
- How Good are Low-Rank Approximations in Gaussian Process Regression? [28.392890577684657]
We provide guarantees for approximate Gaussian Process (GP) regression resulting from two common low-rank kernel approximations.
We provide experiments on both simulated data and standard benchmarks to evaluate the tightness of our theoretical bounds (a Nyström-style code sketch of such low-rank approximations follows the list below).
arXiv Detail & Related papers (2021-12-13T04:04:08Z)
- Sensing Cox Processes via Posterior Sampling and Positive Bases [56.82162768921196]
We study adaptive sensing of point processes, a widely used model from spatial statistics.
We model the intensity function as a sample from a truncated Gaussian process, represented in a specially constructed positive basis.
Our adaptive sensing algorithms use Langevin dynamics and are based on posterior sampling (Cox-Thompson) and top-two posterior sampling (Top2) principles.
arXiv Detail & Related papers (2021-10-21T14:47:06Z)
- Estimation of Riemannian distances between covariance operators and Gaussian processes [0.7360807642941712]
We study two distances between infinite-dimensional positive definite Hilbert-Schmidt operators.
Results show that both distances converge in the Hilbert-Schmidt norm.
arXiv Detail & Related papers (2021-08-26T09:57:47Z)
- Multivariate Probabilistic Regression with Natural Gradient Boosting [63.58097881421937]
We propose a Natural Gradient Boosting (NGBoost) approach based on nonparametrically modeling the conditional parameters of the multivariate predictive distribution.
Our method is robust, works out-of-the-box without extensive tuning, is modular with respect to the assumed target distribution, and performs competitively in comparison to existing approaches.
arXiv Detail & Related papers (2021-06-07T17:44:49Z)
- Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning, via Matheron's update rule, gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors (a minimal code sketch follows the list below).
arXiv Detail & Related papers (2020-11-08T17:09:37Z)
- Gaussianization Flows [113.79542218282282]
We propose a new type of normalizing flow model that enables both efficient computation of likelihoods and efficient inversion for sample generation.
The model family comes with an expressivity guarantee; because of this, it can capture multimodal target distributions without compromising the efficiency of sample generation.
arXiv Detail & Related papers (2020-03-04T08:15:06Z)
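Two of the entries above lend themselves to short code sketches. First, for "How Good are Low-Rank Approximations in Gaussian Process Regression?", a minimal Nyström-style (subset-of-regressors) construction of the kind those guarantees target; the toy data, landmark selection, and kernel are illustrative assumptions, not the paper's code.

```python
import numpy as np

def k(a, b, ell=0.3):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(1)
X = rng.uniform(0, 1, 500)                           # training inputs
y = np.sin(6 * X) + 0.1 * rng.standard_normal(500)   # noisy observations
Z = X[rng.choice(len(X), size=30, replace=False)]    # m landmark points
noise = 0.1 ** 2

C = k(X, Z)                              # n x m cross-kernel K_nm
W = k(Z, Z) + 1e-9 * np.eye(len(Z))      # m x m landmark kernel K_mm
# Woodbury identity applied to the Nystrom approximation K ~ C W^{-1} C^T
# gives alpha = (K + noise*I)^{-1} y without ever forming the n x n kernel:
alpha = (y - C @ np.linalg.solve(noise * W + C.T @ C, C.T @ y)) / noise

grid = np.linspace(0, 1, 100)
mean = k(grid, X) @ alpha                # approximate posterior mean
```

The low-rank structure replaces the O(n^3) solve with an O(n m^2) one; the cited paper bounds the statistical price of approximations of this kind.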
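Second, for "Pathwise Conditioning of Gaussian Processes", the core identity (Matheron's update rule) in code: draw one joint prior sample, then shift it through the observations. The exact Cholesky prior draw below is for clarity only; the efficiency gains discussed in the paper come from substituting cheap approximate prior samples (e.g. random-feature expansions), while the update step itself is unchanged. The toy data are hypothetical.

```python
import numpy as np

def k(a, b, ell=0.2):
    """Squared-exponential kernel on 1-D inputs."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 200)            # where we want the sample
X = np.array([0.2, 0.5, 0.8])            # observed inputs
y = np.array([0.0, 1.0, -0.5])           # observed noise-free values

# one joint prior draw over the grid and the observation points
pts = np.concatenate([grid, X])
L = np.linalg.cholesky(k(pts, pts) + 1e-9 * np.eye(len(pts)))
f = L @ rng.standard_normal(len(pts))
f_grid, f_X = f[:len(grid)], f[len(grid):]

# Matheron's rule: posterior sample = prior sample + kernel-weighted
# correction pulling the sample through the observed values
Kxx = k(X, X) + 1e-9 * np.eye(len(X))
posterior_sample = f_grid + k(grid, X) @ np.linalg.solve(Kxx, y - f_X)
```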