Trustworthy Prediction with Gaussian Process Knowledge Scores
- URL: http://arxiv.org/abs/2506.18630v1
- Date: Mon, 23 Jun 2025 13:36:06 GMT
- Title: Trustworthy Prediction with Gaussian Process Knowledge Scores
- Authors: Kurt Butler, Guanchao Feng, Tong Chen, Petar Djuric
- Abstract summary: Probabilistic models are often used to make predictions in regions of the data space where no observations are available. We propose a knowledge score for predictions that quantifies the extent to which observing data have reduced our uncertainty about a prediction. We demonstrate in several experiments that the knowledge score can anticipate when predictions from a GPR model are accurate.
- Score: 7.090362431002478
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Probabilistic models are often used to make predictions in regions of the data space where no observations are available, but it is not always clear whether such predictions are well-informed by previously seen data. In this paper, we propose a knowledge score for predictions from Gaussian process regression (GPR) models that quantifies the extent to which observing data have reduced our uncertainty about a prediction. The knowledge score is interpretable and naturally bounded between 0 and 1. We demonstrate in several experiments that the knowledge score can anticipate when predictions from a GPR model are accurate, and that this anticipation improves performance in tasks such as anomaly detection, extrapolation, and missing data imputation. Source code for this project is available online at https://github.com/KurtButler/GP-knowledge.
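As a rough, hedged illustration of how such a score could be computed (not the authors' implementation; see the linked repository for that), the sketch below assumes the knowledge score is the relative reduction of the GP predictive variance from prior to posterior, which is naturally bounded between 0 and 1. All names and constants here are illustrative.

```python
# Hedged sketch (not the authors' code): a knowledge score taken as the relative
# reduction in GP predictive variance after observing the training data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_train = rng.uniform(-3.0, 3.0, size=(30, 1))
y_train = np.sin(X_train).ravel() + 0.1 * rng.standard_normal(30)

gp = GaussianProcessRegressor(kernel=1.0 * RBF(length_scale=1.0), alpha=1e-2)
gp.fit(X_train, y_train)

X_test = np.linspace(-6.0, 6.0, 200).reshape(-1, 1)
_, post_std = gp.predict(X_test, return_std=True)  # posterior predictive std
prior_var = gp.kernel_.diag(X_test)                 # prior variance k(x, x)

# Near 1: the prediction is well-informed by data; near 0: extrapolation.
knowledge = np.clip(1.0 - post_std**2 / prior_var, 0.0, 1.0)
```

Low knowledge scores can then be used to flag predictions (e.g., imputed values or extrapolations) that should not be trusted.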
Related papers
- Augmented prediction of a true class for Positive Unlabeled data under selection bias [0.8594140167290099]
We introduce a new observational setting for Positive Unlabeled (PU) data where the observations at prediction time are also labeled.
We argue that the additional information is important for prediction, and call this task "augmented PU prediction".
We introduce several variants of the empirical Bayes rule in such scenario and investigate their performance.
arXiv Detail & Related papers (2024-07-14T19:58:01Z)
- Selective Temporal Knowledge Graph Reasoning [70.11788354442218]
Temporal Knowledge Graph (TKG) aims to predict future facts based on given historical ones.
Existing TKG reasoning models are unable to abstain from predictions about which they are uncertain.
We propose an abstention mechanism for TKG reasoning, which helps the existing models make selective, instead of indiscriminate, predictions.
arXiv Detail & Related papers (2024-04-02T06:56:21Z)
- Best of Many in Both Worlds: Online Resource Allocation with Predictions under Unknown Arrival Model [16.466711636334587]
Online decision-makers often obtain predictions on future variables, such as arrivals, demands, and so on.
Prediction accuracy is unknown to decision-makers a priori, hence blindly following the predictions can be harmful.
We develop algorithms that utilize predictions in a manner that is robust to the unknown prediction accuracy.
arXiv Detail & Related papers (2024-02-21T04:57:32Z)
- Efficient learning of nonlinear prediction models with time-series privileged information [11.679648862014655]
We show that for prediction in linear-Gaussian dynamical systems, a LuPI learner with access to intermediate time series data is never worse than any unbiased classical learner.
We propose algorithms based on random features and representation learning for the case when this map is unknown.
arXiv Detail & Related papers (2022-09-15T05:56:36Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify forecasting uncertainty using Bayesian approximation, capturing uncertainty that deterministic approaches fail to represent.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Robust uncertainty estimates with out-of-distribution pseudo-inputs training [0.0]
We propose to explicitly train the uncertainty predictor in regions where no data are available, in order to make it reliable.
As one cannot train without data, we provide mechanisms for generating pseudo-inputs in informative low-density regions of the input space.
With a holistic evaluation, we demonstrate that this yields robust and interpretable predictions of uncertainty while retaining state-of-the-art performance on diverse tasks.
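As a loose sketch of the pseudo-input idea above (not the paper's actual procedure), one way to obtain candidate inputs in informative low-density regions is to sample from an enlarged bounding box around the training inputs and keep only points whose estimated density falls below that of the training data; the bandwidth, expansion factor, and quantile below are illustrative assumptions.

```python
# Hedged sketch: generate pseudo-inputs in low-density regions of the input space
# by rejection against a kernel density estimate fitted to the training inputs.
import numpy as np
from sklearn.neighbors import KernelDensity

def sample_pseudo_inputs(X_train, n_samples=100, expand=1.5, quantile=0.05, seed=0):
    """Sample points from an enlarged box around X_train and keep those whose
    estimated log-density is below the `quantile` of the training log-densities."""
    rng = np.random.default_rng(seed)
    lo, hi = X_train.min(axis=0), X_train.max(axis=0)
    center, half = (lo + hi) / 2.0, (hi - lo) / 2.0 * expand
    kde = KernelDensity(bandwidth=0.5).fit(X_train)
    threshold = np.quantile(kde.score_samples(X_train), quantile)
    candidates = rng.uniform(center - half, center + half,
                             size=(20 * n_samples, X_train.shape[1]))
    low_density = kde.score_samples(candidates) <= threshold
    return candidates[low_density][:n_samples]
```

The uncertainty predictor can then be trained to output high uncertainty on these pseudo-inputs while fitting the observed data as usual.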
arXiv Detail & Related papers (2022-01-15T17:15:07Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We study two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to perform inference from inputs containing missing values, without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Incorporating Causal Graphical Prior Knowledge into Predictive Modeling via Simple Data Augmentation [92.96204497841032]
Causal graphs (CGs) are compact representations of the knowledge of the data generating processes behind the data distributions.
We propose a model-agnostic data augmentation method that allows us to exploit the prior knowledge of the conditional independence (CI) relations.
We experimentally show that the proposed method is effective in improving the prediction accuracy, especially in the small-data regime.
arXiv Detail & Related papers (2021-02-27T06:13:59Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
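A minimal numeric illustration of this idea (not the paper's exact procedure): predicted class probabilities in regions flagged as unjustifiably overconfident are blended with the label prior, which raises their entropy toward that of the prior. The mask and mixing weight below are assumed inputs.

```python
# Hedged sketch: raise the entropy of flagged predictions toward the label prior
# by mixing the predicted probabilities with the prior distribution.
import numpy as np

def raise_entropy_toward_prior(probs, prior, overconfident_mask, lam=0.5):
    """Blend the rows of `probs` selected by `overconfident_mask` with `prior`."""
    probs = probs.copy()
    probs[overconfident_mask] = (1.0 - lam) * probs[overconfident_mask] + lam * prior
    return probs

probs = np.array([[0.98, 0.01, 0.01],   # overconfident prediction far from data
                  [0.40, 0.35, 0.25]])  # prediction near the data, left untouched
prior = np.full(3, 1.0 / 3.0)           # label prior (assumed uniform here)
mask = np.array([True, False])          # which predictions to adjust
print(raise_entropy_toward_prior(probs, prior, mask))
```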
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Accounting for Input Noise in Gaussian Process Parameter Retrieval [9.563129471152058]
We show how one can account for input noise estimates using a GP model formulation which propagates the error terms using the derivative of the predictive mean function.
We analyze the resulting predictive variance term and show how it more accurately represents the model error in a temperature prediction problem from infrared sounding data.
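A hedged, one-dimensional sketch of this derivative-based propagation (illustrative only, not the paper's code): the predictive variance is inflated by the squared slope of the predictive mean times the assumed input-noise variance, using a numerical derivative.

```python
# Hedged sketch: first-order propagation of input noise through a GP,
# sigma_tilde^2(x) ~= sigma^2(x) + (dmu/dx)^2 * sigma_x^2.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 5.0, size=(40, 1))
y = np.cos(X).ravel() + 0.05 * rng.standard_normal(40)
gp = GaussianProcessRegressor(kernel=1.0 * RBF(1.0), alpha=1e-2).fit(X, y)

def noise_aware_std(x, input_var, eps=1e-4):
    """Predictive std at scalar input x, inflated for input-noise variance input_var."""
    x = np.atleast_2d(x)
    _, std = gp.predict(x, return_std=True)
    dmu_dx = (gp.predict(x + eps) - gp.predict(x - eps)) / (2.0 * eps)  # finite difference
    return np.sqrt(std**2 + dmu_dx**2 * input_var)

print(noise_aware_std(2.5, input_var=0.1))
```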
arXiv Detail & Related papers (2020-05-20T08:23:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.