Practical Global and Local Bounds in Gaussian Process Regression via Chaining
- URL: http://arxiv.org/abs/2511.09144v2
- Date: Mon, 17 Nov 2025 13:26:40 GMT
- Title: Practical Global and Local Bounds in Gaussian Process Regression via Chaining
- Authors: Junyi Liu, Stanley Kok
- Abstract summary: We propose a chaining-based framework for estimating upper and lower bounds on the expected extreme values over unseen data. We also develop a novel method for local uncertainty quantification at specified inputs.
- Score: 4.500208956289746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gaussian process regression (GPR) is a popular nonparametric Bayesian method that provides predictive uncertainty estimates and is widely used in safety-critical applications. While prior research has introduced various uncertainty bounds, most existing approaches require access to specific input features and rely on posterior mean and variance estimates or the tuning of hyperparameters. These limitations hinder robustness and fail to capture the model's global behavior in expectation. To address them, we propose a chaining-based framework for estimating upper and lower bounds on the expected extreme values over unseen data, without requiring access to specific input features. We provide kernel-specific refinements for commonly used kernels such as RBF and Matérn, for which our bounds are tighter than generic constructions. We further improve numerical tightness by avoiding analytical relaxations. In addition to global estimation, we develop a novel method for local uncertainty quantification at specified inputs. This approach leverages chaining geometry through partition diameters, adapting to local structures without relying on posterior variance scaling. Our experimental results validate the theoretical findings and demonstrate that our method outperforms existing approaches on both synthetic and real-world datasets.
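The global bound described in the abstract builds on classical chaining arguments (Dudley-type entropy sums) over the GP's canonical metric. The following is a minimal numerical sketch of that idea, not the paper's method: it assumes a 1-D RBF GP on a grid, a greedy covering-number estimate, and a generic constant C = 12 in the dyadic chaining sum (the paper's kernel-specific refinements would tighten such a bound).

```python
import numpy as np

rng = np.random.default_rng(0)

# Finite index set: a grid on [0, 1] with an RBF kernel.
# Grid size and lengthscale are illustrative choices.
x = np.linspace(0.0, 1.0, 200)
lengthscale = 0.2
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lengthscale**2)

# Canonical GP pseudometric: d(s, t) = sqrt(E[(f_s - f_t)^2]).
v = np.diag(K)
D = np.sqrt(np.maximum(v[:, None] + v[None, :] - 2.0 * K, 0.0))

def covering_number(D, eps):
    """Size of a greedy eps-net, an upper bound on N(T, d, eps)."""
    remaining = np.arange(D.shape[0])
    n = 0
    while remaining.size:
        center = remaining[0]
        remaining = remaining[D[center, remaining] > eps]
        n += 1
    return n

# Dudley-style chaining sum over dyadic scales eps_k = diam / 2^k:
#   E[sup_t f(t)] <= C * sum_k eps_k * sqrt(log N(T, d, eps_k)).
diam = D.max()
bound, eps = 0.0, diam / 2.0
while eps > 1e-3:
    N = covering_number(D, eps)
    if N > 1:
        bound += eps * np.sqrt(np.log(N))
    eps /= 2.0
bound *= 12.0  # generic chaining constant; kernel-specific analysis tightens it

# Monte Carlo estimate of E[max_t f(t)] for comparison.
L = np.linalg.cholesky(K + 1e-6 * np.eye(len(x)))  # jitter for stability
samples = rng.standard_normal((4000, len(x))) @ L.T
emp_mean = samples.max(axis=1).mean()

print(f"chaining upper bound: {bound:.2f}, Monte Carlo E[sup]: {emp_mean:.2f}")
```

The generic constant makes the bound loose relative to the Monte Carlo estimate; the paper's contribution is precisely to tighten such constructions for specific kernels and to avoid analytical relaxations.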
Related papers
- Mind the Jumps: A Scalable Robust Local Gaussian Process for Multidimensional Response Surfaces with Discontinuities [1.1458853556386799]
Robust Local Gaussian Process is a framework that integrates adaptive nearest-neighbor selection with a sparsity-driven robustification mechanism. It consistently delivers high predictive accuracy and maintains competitive computational efficiency. These results establish RLGP as an effective and practical solution for modeling nonstationary and discontinuous response surfaces.
arXiv Detail & Related papers (2025-12-14T06:52:17Z) - Conformal and kNN Predictive Uncertainty Quantification Algorithms in Metric Spaces [3.637162892228131]
We develop a conformal prediction algorithm that offers finite-sample coverage guarantees and fast convergence rates of the oracle estimator. In heteroscedastic settings, we forgo these non-asymptotic guarantees to gain statistical efficiency. We demonstrate the practical utility of our approach in personalized-medicine applications involving random response objects.
arXiv Detail & Related papers (2025-07-21T15:54:13Z) - Model-Free Kernel Conformal Depth Measures Algorithm for Uncertainty Quantification in Regression Models in Separable Hilbert Spaces [9.504740492278003]
We propose a model-free uncertainty quantification algorithm based on conditional depth measures and an integrated depth measure. The new algorithms can be used to define prediction and tolerance regions when predictors and responses are defined in separable Hilbert spaces. We demonstrate the practical relevance of our approach through a digital health application related to physical activity.
arXiv Detail & Related papers (2025-06-10T01:25:37Z) - Principled Input-Output-Conditioned Post-Hoc Uncertainty Estimation for Regression Networks [1.4671424999873808]
Uncertainty is critical in safety-sensitive applications but is often omitted from off-the-shelf neural networks due to adverse effects on predictive performance. We propose a theoretically grounded framework for post-hoc uncertainty estimation in regression tasks by fitting an auxiliary model to both original inputs and frozen model outputs.
arXiv Detail & Related papers (2025-06-01T09:13:27Z) - Fixed-Mean Gaussian Processes for Post-hoc Bayesian Deep Learning [11.22428369342346]
We introduce a novel family of sparse variational Gaussian processes (GPs), where the posterior mean is fixed to any continuous function when using a universal kernel. Specifically, we fix the mean of this GP to the output of the pre-trained DNN, allowing our approach to effectively fit the GP's predictive variances to estimate the prediction uncertainty. Experimental results demonstrate that FMGP improves both uncertainty estimation and computational efficiency when compared to state-of-the-art methods.
arXiv Detail & Related papers (2024-12-05T14:17:16Z) - Average Causal Effect Estimation in DAGs with Hidden Variables: Beyond Back-Door and Front-Door Criteria [0.8880611506199766]
We introduce novel one-step corrected plug-in and targeted minimum loss-based estimators of causal effects for a class of hidden-variable DAGs. These estimators leverage data-adaptive machine learning algorithms to minimize modeling assumptions.
arXiv Detail & Related papers (2024-09-06T01:07:29Z) - Information-Theoretic Safe Exploration with Gaussian Processes [89.31922008981735]
We consider a sequential decision making task where we are not allowed to evaluate parameters that violate an unknown (safety) constraint.
Most current methods rely on a discretization of the domain and cannot be directly extended to the continuous case.
We propose an information-theoretic safe exploration criterion that directly exploits the GP posterior to identify the most informative safe parameters to evaluate.
arXiv Detail & Related papers (2022-12-09T15:23:58Z) - Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z) - Posterior and Computational Uncertainty in Gaussian Processes [52.26904059556759]
Gaussian processes scale prohibitively with the size of the dataset.
Many approximation methods have been developed, which inevitably introduce approximation error.
This additional source of uncertainty, due to limited computation, is entirely ignored when using the approximate posterior.
We develop a new class of methods that provides consistent estimation of the combined uncertainty arising from both the finite number of data observed and the finite amount of computation expended.
arXiv Detail & Related papers (2022-05-30T22:16:25Z) - Simple Calibration via Geodesic Kernels [9.999034479498889]
Deep discriminative approaches, such as decision forests and deep neural networks, have recently found applications in many important real-world scenarios. However, deploying these learning algorithms in safety-critical applications raises concerns, particularly when it comes to ensuring calibration for both in-distribution and out-of-distribution regions.
arXiv Detail & Related papers (2022-01-31T05:07:16Z) - Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least-squares estimator is the linear minimum-variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints (BCE).
A second motivation for BCE is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z) - Gaussian Process Uniform Error Bounds with Unknown Hyperparameters for Safety-Critical Applications [71.23286211775084]
We introduce robust Gaussian process uniform error bounds in settings with unknown hyperparameters.
Our approach computes a confidence region in the space of hyperparameters, which enables us to obtain a probabilistic upper bound for the model error.
Experiments show that the bound performs significantly better than vanilla and fully Bayesian Gaussian processes.
arXiv Detail & Related papers (2021-09-06T17:10:01Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.