Calibrated Adaptive Probabilistic ODE Solvers
- URL: http://arxiv.org/abs/2012.08202v2
- Date: Mon, 22 Feb 2021 10:48:28 GMT
- Title: Calibrated Adaptive Probabilistic ODE Solvers
- Authors: Nathanael Bosch, Philipp Hennig, Filip Tronarp
- Abstract summary: We introduce, discuss, and assess several probabilistically motivated ways to calibrate the uncertainty estimate.
We demonstrate the efficiency of the methodology by benchmarking against the classic, widely used Dormand-Prince 4/5 Runge-Kutta method.
- Score: 31.442275669185626
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Probabilistic solvers for ordinary differential equations assign a posterior
measure to the solution of an initial value problem. The joint covariance of
this distribution provides an estimate of the (global) approximation error. The
contraction rate of this error estimate as a function of the solver's step size
identifies it as a well-calibrated worst-case error, but its explicit numerical
value for a certain step size is not automatically a good estimate of the
explicit error. Addressing this issue, we introduce, discuss, and assess
several probabilistically motivated ways to calibrate the uncertainty estimate.
Numerical experiments demonstrate that these calibration methods interact
efficiently with adaptive step-size selection, resulting in descriptive and
efficiently computable posteriors. We demonstrate the efficiency of the
methodology by benchmarking against the classic, widely used Dormand-Prince 4/5
Runge-Kutta method.
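To make the approach concrete, below is a minimal sketch (not the authors' implementation) of a solver in this family: a zeroth-order extended Kalman filter (EK0) with a once-integrated Wiener process prior for a scalar IVP y' = f(t, y), calibrated by the global quasi-MLE estimate of the prior's diffusion, sigma2_hat = (1/N) * sum_n z_n^2 / S_n. The function name ek0_solve and the fixed-step loop are illustrative assumptions; the paper additionally treats adaptive step sizes and several alternative calibration strategies.
```python
import numpy as np

def ek0_solve(f, y0, t0, t1, h):
    """Sketch: EK0 filter for y' = f(t, y) with global quasi-MLE calibration."""
    # Once-integrated Wiener process prior with unit diffusion; state x = (y, y').
    A = np.array([[1.0, h], [0.0, 1.0]])                 # transition over step h
    Q = np.array([[h**3 / 3, h**2 / 2], [h**2 / 2, h]])  # process-noise covariance

    m = np.array([y0, f(t0, y0)])   # initial mean: exact value and slope
    P = np.zeros((2, 2))            # no uncertainty at t0
    ts, means, variances, ratios = [t0], [y0], [0.0], []

    t = t0
    while t < t1 - 1e-12:
        m_pred = A @ m                     # predict
        P_pred = A @ P @ A.T + Q
        t += h
        # EK0 "measurement": predicted slope minus vector field at predicted value.
        z = m_pred[1] - f(t, m_pred[0])
        S = P_pred[1, 1]                   # innovation variance
        K = P_pred[:, 1] / S               # Kalman gain
        m = m_pred - K * z                 # update
        P = P_pred - np.outer(K, P_pred[1, :])
        ratios.append(z**2 / S)            # one term of the quasi-MLE sum
        ts.append(t); means.append(m[0]); variances.append(P[0, 0])

    sigma2_hat = np.mean(ratios)  # global quasi-MLE estimate of the diffusion
    # Rescaling all covariances by sigma2_hat is what "calibrates" the posterior.
    return np.array(ts), np.array(means), np.sqrt(sigma2_hat * np.array(variances))

# Example: logistic growth; the calibrated standard deviations should roughly
# track the actual error against the known closed-form solution.
ts, ys, stds = ek0_solve(lambda t, y: y * (1.0 - y), y0=0.1, t0=0.0, t1=10.0, h=0.1)
```
In the spirit of the paper's benchmark, such a solver would be compared against scipy.integrate.solve_ivp with method="RK45", SciPy's implementation of the Dormand-Prince 4/5 pair mentioned in the abstract.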
Related papers
- Adaptive Ensemble Q-learning: Minimizing Estimation Bias via Error Feedback [31.115084475673793]
The ensemble method is a promising way to mitigate the overestimation issue in Q-learning.
It is known that the estimation bias hinges heavily on the ensemble size.
We devise an ensemble method with two key steps: (a) approximation-error characterization, which serves as feedback for flexibly controlling the ensemble size, and (b) ensemble-size adaptation tailored towards minimizing the estimation bias.
arXiv Detail & Related papers (2023-06-20T22:06:14Z)
- Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss in order to quantify the uncertainty of downstream evaluations.
We show that the change in the evaluation due to regularization is a consistent estimate of the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- A Consistent and Differentiable Lp Canonical Calibration Error Estimator [21.67616079217758]
Deep neural networks are poorly calibrated and tend to output overconfident predictions.
We propose a low-bias, trainable calibration error estimator based on Dirichlet kernel density estimates.
Our method has a natural choice of kernel, and can be used to generate consistent estimates of other quantities.
arXiv Detail & Related papers (2022-10-13T15:11:11Z)
- Parametric and Multivariate Uncertainty Calibration for Regression and Object Detection [4.630093015127541]
We show that common detection models overestimate the spatial uncertainty in comparison to the observed error.
Our experiments show that the simple isotonic-regression recalibration method is sufficient to achieve well-calibrated uncertainty (a minimal sketch of this kind of recalibration follows the list below).
In contrast, if normal distributions are required for subsequent processes, our GP-Normal recalibration method yields the best results.
arXiv Detail & Related papers (2022-07-04T08:00:20Z)
- Scalable Personalised Item Ranking through Parametric Density Estimation [53.44830012414444]
Learning from implicit feedback is challenging because of the one-class nature of the problem.
Most conventional methods use a pairwise ranking approach and negative samplers to cope with the one-class problem.
We propose a learning-to-rank approach, which achieves convergence speed comparable to the pointwise counterpart.
arXiv Detail & Related papers (2021-05-11T03:38:16Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- Mean-squared-error-based adaptive estimation of pure quantum states and unitary transformations [0.0]
We propose a method to estimate pure quantum states of a single qudit with high accuracy.
Our method is based on the minimization of the squared error between the complex probability amplitudes of the unknown state and its estimate.
We show that our estimation procedure can be easily extended to estimate unknown unitary transformations acting on a single qudit.
arXiv Detail & Related papers (2020-08-23T00:32:10Z)
- Calibration of Neural Networks using Splines [51.42640515410253]
Measuring calibration error amounts to comparing two empirical distributions.
We introduce a binning-free calibration measure inspired by the classical Kolmogorov-Smirnov (KS) statistical test.
Our method consistently outperforms existing methods on KS error as well as other commonly used calibration measures.
arXiv Detail & Related papers (2020-06-23T07:18:05Z)
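For the isotonic-regression recalibration mentioned in the "Parametric and Multivariate Uncertainty Calibration" entry above, a minimal sketch is shown below. The synthetic data, and the choice to regress squared residuals on predicted variances with scikit-learn's IsotonicRegression, are illustrative assumptions, not that paper's exact procedure (which targets spatial uncertainty in object detection).
```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)

# Hypothetical held-out calibration split: true targets, predicted means,
# and predicted variances that systematically overestimate the error.
y_true = rng.normal(size=500)
mu_pred = y_true + rng.normal(scale=0.3, size=500)   # true error variance: 0.09
var_pred = rng.uniform(0.7, 1.1, size=500)           # overconfident, too large

# Fit a monotone map from predicted variance to observed squared residual.
iso = IsotonicRegression(y_min=1e-8, out_of_bounds="clip")
iso.fit(var_pred, (y_true - mu_pred) ** 2)

# At test time, replace each predicted variance by its recalibrated value.
var_recal = iso.predict(var_pred)
print(var_recal.mean())  # deflated toward the empirical MSE of about 0.09
```
Isotonic regression is attractive here because it preserves the ranking of the predicted uncertainties while correcting their scale.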