Disentangling Derivatives, Uncertainty and Error in Gaussian Process Models
- URL: http://arxiv.org/abs/2012.04947v1
- Date: Wed, 9 Dec 2020 10:03:13 GMT
- Title: Disentangling Derivatives, Uncertainty and Error in Gaussian Process Models
- Authors: Juan Emmanuel Johnson and Valero Laparra and Gustau Camps-Valls
- Abstract summary: We showcase how the derivative of a GP model can be used to provide an analytical error propagation formulation.
We analyze the predictive variance and the propagated error terms in a temperature prediction problem from infrared sounding data.
- Score: 12.229461458053809
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Gaussian Processes (GPs) are a class of kernel methods that have
proven very useful in geoscience applications. They are widely used because
they are simple and flexible and provide very accurate estimates for nonlinear
problems, especially in parameter retrieval. In addition to a predictive mean
function, GPs come equipped with a useful property: the predictive variance
function, which provides confidence intervals for the predictions. The GP
formulation usually assumes that there is no input noise in the training and
testing points, only in the observations. However, this is often not the case
in Earth observation problems, where an accurate assessment of the instrument
error is usually available. In this paper, we showcase how the derivative of a
GP model can be used to provide an analytical error propagation formulation,
and we analyze the predictive variance and the propagated error terms in a
temperature prediction problem from infrared sounding data.
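As a concrete sketch of the error-propagation idea in the abstract: to first
order, an input covariance Sigma_x can be pushed through the derivative of the
GP predictive mean and added to the usual predictive variance, i.e.
var_total ~ var_GP(x*) + grad_mu(x*)^T Sigma_x grad_mu(x*). The minimal NumPy
example below assumes an RBF kernel; the function names and toy data are ours
for illustration, not the paper's code.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel k(x, x') = s^2 * exp(-||x - x'||^2 / (2 l^2))."""
    d2 = (np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :]
          - 2.0 * X1 @ X2.T)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict_with_error_prop(X, y, x_star, Sigma_x,
                               lengthscale=1.0, variance=1.0, noise=1e-2):
    """GP mean/variance at x_star plus a first-order propagated input-error term."""
    K = rbf_kernel(X, X, lengthscale, variance) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)                                    # K^{-1} y
    k_star = rbf_kernel(X, x_star[None, :], lengthscale, variance)  # (n, 1)

    mu = float(k_star.T @ alpha)                                    # predictive mean
    var_gp = variance - float(k_star.T @ np.linalg.solve(K, k_star))

    # Analytic derivative of the predictive mean for the RBF kernel:
    # d mu / d x_* = sum_i alpha_i * k(x_*, x_i) * (x_i - x_*) / l^2
    grad_mu = ((X - x_star) / lengthscale**2 * k_star).T @ alpha

    # Delta-method propagation of the input covariance Sigma_x.
    var_input = float(grad_mu @ Sigma_x @ grad_mu)
    return mu, var_gp, var_gp + var_input

# Toy usage: 1-D inputs with a known input variance at the test point.
X = np.linspace(0.0, 5.0, 40)[:, None]
y = np.sin(X).ravel()
mu, var_gp, var_total = gp_predict_with_error_prop(
    X, y, x_star=np.array([2.5]), Sigma_x=np.array([[0.05]]))
print(mu, var_gp, var_total)
```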
Related papers
- Stationarity without mean reversion in improper Gaussian processes [6.4322891559626125]
We show that it is possible to use improper GP priors with infinite variance to define processes that are stationary but not mean reverting.
By analyzing both synthetic and real data, we demonstrate that these non-positive kernels solve some known pathologies of mean reverting GP regression.
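A quick way to see the mean-reversion pathology this paper targets: a standard
RBF GP posterior mean reverts to the prior mean far from the data, while adding
a constant (bias) kernel with very large variance, used here as a crude stand-in
for an improper infinite-variance prior, keeps far-field predictions near the
data's level. This sketch is our illustration, not the paper's construction.

```python
import numpy as np

def k(X1, X2, bias=0.0, l=1.0):
    """RBF kernel plus an optional constant (bias) term."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / l**2) + bias

X = np.array([0.0, 1.0, 2.0])
y = np.array([5.0, 5.5, 6.0])
x_far = np.array([50.0])            # far outside the training range

for bias in (0.0, 1e4):
    K = k(X, X, bias) + 1e-6 * np.eye(len(X))
    mu = k(x_far, X, bias) @ np.linalg.solve(K, y)
    print(f"bias={bias:g}: prediction at x=50 is {mu[0]:.2f}")
# bias=0 reverts to the prior mean 0; a huge bias stays near the data's level.
```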
arXiv Detail & Related papers (2023-10-04T15:11:26Z)
- Episodic Gaussian Process-Based Learning Control with Vanishing Tracking Errors [10.627020714408445]
We develop an episodic approach for learning GP models, such that an arbitrary tracking accuracy can be guaranteed.
The effectiveness of the derived theory is demonstrated in several simulations.
arXiv Detail & Related papers (2023-07-10T08:43:28Z)
- Leveraging Locality and Robustness to Achieve Massively Scalable Gaussian Process Regression [1.3518297878940662]
We introduce a new perspective by exploring robustness properties and limiting behaviour of GP nearest-neighbour (GPnn) prediction.
As the data size n increases, the accuracy of the estimated parameters and the validity of the GP model assumptions become increasingly irrelevant to GPnn predictive accuracy.
We show that this source of inaccuracy can be corrected for, thereby achieving both well-calibrated uncertainty measures and accurate predictions at remarkably low computational cost.
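A minimal sketch of the nearest-neighbour GP idea that GPnn builds on: run an
exact GP, but condition each test prediction only on the k training points
nearest to it, so the cost is O(k^3) per prediction rather than O(n^3). This is
our simplified rendering with an RBF kernel, not the authors' implementation.

```python
import numpy as np

def gpnn_predict(X, y, x_star, k=50, l=1.0, variance=1.0, noise=1e-2):
    """Exact GP prediction conditioned only on the k nearest neighbours of x_star."""
    idx = np.argsort(np.sum((X - x_star) ** 2, axis=1))[:k]   # k nearest points
    Xn, yn = X[idx], y[idx]
    d2 = np.sum((Xn[:, None, :] - Xn[None, :, :]) ** 2, axis=-1)
    K = variance * np.exp(-0.5 * d2 / l**2) + noise * np.eye(len(idx))
    k_star = variance * np.exp(-0.5 * np.sum((Xn - x_star) ** 2, axis=1) / l**2)
    alpha = np.linalg.solve(K, yn)
    mu = k_star @ alpha
    var = variance + noise - k_star @ np.linalg.solve(K, k_star)
    return mu, var
```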
arXiv Detail & Related papers (2023-06-26T14:32:46Z)
- Light curve completion and forecasting using fast and scalable Gaussian processes (MuyGPs) [0.0]
Ground-based observations from commercial off-the-shelf (COTS) cameras remain inexpensive compared to higher-precision instruments.
However, limited sensor availability combined with noisier observations can produce gappy time-series data.
Deep Neural Networks (DNNs) have become the tool of choice due to their empirical success at learning complex nonlinear embeddings.
arXiv Detail & Related papers (2022-08-31T01:52:00Z)
- Experimental Design for Linear Functionals in Reproducing Kernel Hilbert Spaces [102.08678737900541]
We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
arXiv Detail & Related papers (2022-05-26T20:56:25Z)
- Discovering Invariant Rationales for Graph Neural Networks [104.61908788639052]
Intrinsic interpretability of graph neural networks (GNNs) means finding a small subset of the input graph's features that guides the model prediction.
We propose a new strategy of discovering invariant rationale (DIR) to construct intrinsically interpretable GNNs.
arXiv Detail & Related papers (2022-01-30T16:43:40Z)
- Non-Gaussian Gaussian Processes for Few-Shot Regression [71.33730039795921]
We propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them.
NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
arXiv Detail & Related papers (2021-10-26T10:45:25Z)
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models to infer directly from inputs containing missing values, without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Uncertainty quantification using martingales for misspecified Gaussian processes [52.22233158357913]
We address uncertainty quantification for Gaussian processes (GPs) under misspecified priors.
We construct a confidence sequence (CS) for the unknown function using martingale techniques.
Our CS is statistically valid and empirically outperforms standard GP methods.
arXiv Detail & Related papers (2020-06-12T17:58:59Z)
- Accounting for Input Noise in Gaussian Process Parameter Retrieval [9.563129471152058]
We show how one can account for input noise estimates using a GP model formulation which propagates the error terms using the derivative of the predictive mean function.
We analyze the resulting predictive variance term and show how it more accurately represents the model error in a temperature prediction problem from infrared sounding data.
arXiv Detail & Related papers (2020-05-20T08:23:48Z)
- SLEIPNIR: Deterministic and Provably Accurate Feature Expansion for Gaussian Process Regression with Derivatives [86.01677297601624]
We propose a novel approach for scaling GP regression with derivatives based on quadrature Fourier features.
We prove deterministic, non-asymptotic and exponentially fast decaying error bounds which apply for both the approximated kernel as well as the approximated posterior.
arXiv Detail & Related papers (2020-03-05T14:33:20Z)
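As a rough illustration of the feature-expansion idea above (using plain Monte
Carlo random Fourier features rather than the paper's quadrature scheme),
derivative observations can be folded into an approximate GP by stacking the
feature map and its analytic derivative into one linear regression. The toy
setup below is our own example, not the SLEIPNIR implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo random Fourier features for a 1-D RBF kernel with lengthscale l;
# the paper uses deterministic quadrature features instead.
m, l = 200, 0.5
w = rng.normal(0.0, 1.0 / l, size=m)       # spectral frequencies
b = rng.uniform(0.0, 2.0 * np.pi, size=m)  # random phases

def phi(x):
    """Feature map with phi(x) @ phi(x') ~= k(x, x')."""
    return np.sqrt(2.0 / m) * np.cos(np.outer(x, w) + b)

def dphi(x):
    """Analytic derivative of the feature map with respect to x."""
    return -np.sqrt(2.0 / m) * np.sin(np.outer(x, w) + b) * w

# Toy data: values and derivatives of f(x) = sin(x).
x = np.linspace(0.0, 4.0, 30)
A = np.vstack([phi(x), dphi(x)])             # stacked design matrix
t = np.concatenate([np.sin(x), np.cos(x)])   # [y; dy] observations

# Regularized linear regression in feature space ~= approximate GP posterior mean.
noise = 1e-4
theta = np.linalg.solve(A.T @ A + noise * np.eye(m), A.T @ t)
x_test = np.linspace(0.0, 4.0, 5)
print(phi(x_test) @ theta)   # should be close to np.sin(x_test)
```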
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.