Mathematical Properties of Continuous Ranked Probability Score Forecasting
- URL: http://arxiv.org/abs/2205.04360v1
- Date: Mon, 9 May 2022 15:01:13 GMT
- Title: Mathematical Properties of Continuous Ranked Probability Score Forecasting
- Authors: Romain Pic, Clément Dombry, Philippe Naveau and Maxime Taillardat
- Abstract summary: We study the rate of convergence in terms of CRPS of distributional regression methods.
We show that the k-nearest neighbor method and the kernel method for distributional regression reach the optimal rate of convergence in dimension $d\geq 2$ and in any dimension, respectively.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The theoretical advances on the properties of scoring rules over the past
decades have broadened the use of scoring rules in probabilistic forecasting. In
meteorological forecasting, statistical postprocessing techniques are essential
to improve the forecasts made by deterministic physical models. Numerous
state-of-the-art statistical postprocessing techniques are based on
distributional regression evaluated with the Continuous Ranked Probability
Score (CRPS). However, theoretical properties of such minimization of the CRPS
have mostly been studied in the unconditional framework (i.e. without covariates)
and with infinite sample sizes. We circumvent these limitations and study the rate
of convergence, in terms of CRPS, of distributional regression methods. We find
the optimal minimax rate of convergence for a given class of distributions.
Moreover, we show that the k-nearest neighbor method and the kernel method for
distributional regression reach the optimal rate of convergence in
dimension $d\geq 2$ and in any dimension, respectively.
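For context, the CRPS of a forecast distribution $F$ at an observation $y$ is $\mathrm{CRPS}(F, y) = \int_{\mathbb{R}} (F(x) - \mathbf{1}\{y \le x\})^2 \, dx$, which, for independent $X, X' \sim F$ with finite first moment, equals $\mathbb{E}|X - y| - \tfrac{1}{2}\mathbb{E}|X - X'|$. The sketch below is not the authors' code; the data and function names are illustrative. It shows the kind of estimator the abstract refers to: a k-nearest-neighbor distributional regression whose empirical forecast distribution is scored with the plug-in CRPS.

```python
import numpy as np

def crps_empirical(samples, y):
    """Plug-in CRPS of an empirical (ensemble) forecast at observation y,
    using CRPS(F, y) = E|X - y| - 0.5 * E|X - X'|."""
    samples = np.asarray(samples, dtype=float)
    term1 = np.mean(np.abs(samples - y))
    term2 = 0.5 * np.mean(np.abs(samples[:, None] - samples[None, :]))
    return term1 - term2

def knn_forecast(X_train, y_train, x, k=25):
    """k-nearest-neighbor distributional regression: the forecast at x is the
    empirical distribution of the responses of the k closest training points."""
    dist = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argsort(dist)[:k]]

# Toy usage on synthetic data with d = 2 covariates.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))
y_train = X_train[:, 0] + rng.normal(scale=0.5, size=500)
x_new, y_obs = np.array([0.3, -0.1]), 0.25
print(crps_empirical(knn_forecast(X_train, y_train, x_new), y_obs))
```

Averaging such CRPS values over a test set gives the sample analogue of the expected CRPS whose rate of convergence the paper studies.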
Related papers
- Statistical Inference for Temporal Difference Learning with Linear Function Approximation [62.69448336714418]
Temporal Difference (TD) learning, arguably the most widely used method for policy evaluation, serves as a natural framework for this purpose.
In this paper, we study the consistency properties of TD learning with Polyak-Ruppert averaging and linear function approximation, and obtain three significant improvements over existing results.
arXiv Detail & Related papers (2024-10-21T15:34:44Z) - Conformal Thresholded Intervals for Efficient Regression [9.559062601251464]
Conformal Thresholded Intervals (CTI) is a novel conformal regression method that aims to produce the smallest possible prediction set with guaranteed coverage.
CTI constructs prediction sets by thresholding the estimated conditional interquantile intervals based on their length.
arXiv Detail & Related papers (2024-07-19T17:47:08Z) - Probabilistic Conformal Prediction with Approximate Conditional Validity [81.30551968980143]
We develop a new method for generating prediction sets that combines the flexibility of conformal methods with an estimate of the conditional distribution.
Our method consistently outperforms existing approaches in terms of conditional coverage.
arXiv Detail & Related papers (2024-07-01T20:44:48Z) - Statistical Optimality of Divide and Conquer Kernel-based Functional Linear Regression [1.7227952883644062]
This paper studies the convergence performance of divide-and-conquer estimators in the scenario that the target function does not reside in the underlying kernel space.
As a decomposition-based scalable approach, the divide-and-conquer estimators of functional linear regression can substantially reduce the algorithmic complexities in time and memory.
arXiv Detail & Related papers (2022-11-20T12:29:06Z) - Distributional Gradient Boosting Machines [77.34726150561087]
Our framework is based on XGBoost and LightGBM.
We show that our framework achieves state-of-the-art forecast accuracy.
arXiv Detail & Related papers (2022-04-02T06:32:19Z) - Efficient CDF Approximations for Normalizing Flows [64.60846767084877]
We build upon the diffeomorphic properties of normalizing flows to estimate the cumulative distribution function (CDF) over a closed region.
Our experiments on popular flow architectures and UCI datasets show a marked improvement in sample efficiency as compared to traditional estimators.
arXiv Detail & Related papers (2022-02-23T06:11:49Z) - Probabilistic learning inference of boundary value problem with uncertainties based on Kullback-Leibler divergence under implicit constraints [0.0]
We present a general methodology of probabilistic learning inference for estimating a posterior probability model of a boundary value problem from a prior probability model.
A statistical surrogate model of the implicit mapping, which represents the constraints, is introduced.
A second part presents an application that illustrates the proposed theory and, as such, also contributes to the three-dimensional homogenization of heterogeneous linear elastic media.
arXiv Detail & Related papers (2022-02-10T16:00:10Z) - Probabilistic Forecasting with Generative Networks via Scoring Rule Minimization [5.5643498845134545]
We use generative neural networks to parametrize distributions on high-dimensional spaces by transforming draws from a latent variable.
We train generative networks to minimize a predictive-sequential (or prequential) scoring rule on a recorded temporal sequence of the phenomenon of interest.
Our method outperforms state-of-the-art adversarial approaches, especially in probabilistic calibration.
arXiv Detail & Related papers (2021-12-15T15:51:12Z) - Heavy-tailed Streaming Statistical Estimation [58.70341336199497]
We consider the task of heavy-tailed statistical estimation given streaming $p$-dimensional samples.
We design a clipped gradient descent and provide an improved analysis under a more nuanced condition on the noise of gradients.
arXiv Detail & Related papers (2021-08-25T21:30:27Z) - Evaluating probabilistic classifiers: Reliability diagrams and score decompositions revisited [68.8204255655161]
We introduce the CORP approach, which generates provably statistically Consistent, Optimally binned, and Reproducible reliability diagrams in an automated way.
CORP is based on non-parametric isotonic regression and is implemented via the pool-adjacent-violators (PAV) algorithm (see the sketch after this list).
arXiv Detail & Related papers (2020-08-07T08:22:26Z)
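As a rough illustration of the CORP idea (a sketch under assumptions, not the authors' implementation; the synthetic data and function name are invented), the recalibrated curve of a reliability diagram can be obtained by isotonic regression of binary outcomes on forecast probabilities, which scikit-learn fits with the pool-adjacent-violators algorithm:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def corp_reliability(prob_forecast, outcome):
    """Return forecast probabilities (sorted) and their isotonically
    recalibrated event frequencies, i.e. a CORP-style reliability curve."""
    order = np.argsort(prob_forecast)
    p = np.asarray(prob_forecast, dtype=float)[order]
    y = np.asarray(outcome, dtype=float)[order]
    iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
    return p, iso.fit_transform(p, y)

# Toy usage with slightly miscalibrated synthetic forecasts.
rng = np.random.default_rng(1)
p = rng.uniform(size=2000)
y = rng.binomial(1, p**1.3)          # true event probability is a distortion of p
forecast_prob, recalibrated = corp_reliability(p, y)
# Plotting recalibrated against forecast_prob gives the reliability diagram.
```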
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.