Forecast Evaluation in Large Cross-Sections of Realized Volatility
- URL: http://arxiv.org/abs/2112.04887v1
- Date: Thu, 9 Dec 2021 13:19:09 GMT
- Title: Forecast Evaluation in Large Cross-Sections of Realized Volatility
- Authors: Christis Katsouris
- Abstract summary: We evaluate the predictive accuracy of the model based on the augmented cross-section when forecasting Realized Volatility.
We study the sensitivity of forecasts to the model specification by incorporating a measurement error correction as well as cross-sectional jump component measures.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we consider the forecast evaluation of realized volatility
measures under cross-section dependence using equal predictive accuracy testing
procedures. We evaluate the predictive accuracy of the model based on the
augmented cross-section when forecasting Realized Volatility. Under the null
hypothesis of equal predictive accuracy, the benchmark model is a standard HAR
(heterogeneous autoregressive) model, while under the alternative of unequal
predictive accuracy the forecast model is an augmented HAR model estimated via
LASSO shrinkage.
We study the sensitivity of forecasts to the model specification by
incorporating a measurement error correction as well as cross-sectional jump
component measures. The out-of-sample forecast performance of the models is
assessed through numerical implementations.
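As a rough illustration of the comparison described in the abstract, the following is a minimal Python sketch (not the authors' code): a standard HAR benchmark is fitted by least squares, the augmented model adds lagged cross-sectional RV measures and is estimated via LASSO, and a simple Diebold-Mariano-type statistic is computed on the squared-error loss differential. The simulated panel, lag windows, augmentation set, and the absence of HAC and nested-model corrections are all simplifying assumptions relative to the paper's actual testing procedure.

```python
# Minimal sketch, not the authors' implementation: HAR benchmark vs. a
# LASSO-augmented HAR model, compared with a simple Diebold-Mariano-type
# statistic on the squared-error loss differential. All data are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression, LassoCV

rng = np.random.default_rng(0)
T, N = 1000, 20
rv = np.abs(rng.standard_normal((T, N))) * 0.01   # toy realized-volatility panel
target = rv[:, 0]                                  # series whose RV is forecast

def har_features(x):
    """Daily, weekly (5-day) and monthly (22-day) averages of lagged RV."""
    daily = x[21:-1]
    weekly = np.array([x[i - 4:i + 1].mean() for i in range(21, len(x) - 1)])
    monthly = np.array([x[i - 21:i + 1].mean() for i in range(21, len(x) - 1)])
    return np.column_stack([daily, weekly, monthly])

X_har = har_features(target)                       # benchmark HAR regressors
y = target[22:]                                    # one-step-ahead RV target
X_aug = np.column_stack([X_har, rv[21:-1, 1:]])    # add lagged cross-sectional RVs

split = int(0.8 * len(y))                          # in-sample / out-of-sample split
bench = LinearRegression().fit(X_har[:split], y[:split])
aug = LassoCV(cv=5).fit(X_aug[:split], y[:split])  # shrinkage level chosen by CV

e_bench = y[split:] - bench.predict(X_har[split:])
e_aug = y[split:] - aug.predict(X_aug[split:])

loss_diff = e_bench**2 - e_aug**2                  # positive values favour the augmented model
dm_stat = loss_diff.mean() / (loss_diff.std(ddof=1) / np.sqrt(len(loss_diff)))
print(f"DM-type statistic: {dm_stat:.3f}")
```

A full implementation would use a HAC (long-run variance) estimator in the denominator and account for the models being nested, in line with the equal predictive accuracy testing procedures the paper builds on.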
Related papers
- Enforcing tail calibration when training probabilistic forecast models [0.0]
We study how the loss function used to train probabilistic forecast models can be adapted to improve the reliability of forecasts made for extreme events.
We demonstrate that state-of-the-art models do not issue calibrated forecasts for extreme wind speeds, and that the calibration of forecasts for extreme events can be improved by suitable adaptations to the loss function during model training.
arXiv Detail & Related papers (2025-06-16T16:51:06Z)
- Pre-validation Revisited [79.92204034170092]
We show properties and benefits of pre-validation in prediction, inference and error estimation by simulations and applications.
We propose not only an analytical distribution of the test statistic for the pre-validated predictor under certain models, but also a generic bootstrap procedure to conduct inference.
arXiv Detail & Related papers (2025-05-21T00:20:14Z)
- Robustness investigation of cross-validation based quality measures for model assessment [0.0]
The prediction quality of a machine learning model is evaluated based on a cross-validation approach.
The presented measures quantify the amount of explained variation in the model prediction.
arXiv Detail & Related papers (2024-08-08T11:51:34Z)
- Predictability Analysis of Regression Problems via Conditional Entropy Estimations [1.8913544072080544]
Conditional entropy estimators are developed to assess predictability in regression problems.
Experiments on synthesized and real-world datasets demonstrate the robustness and utility of these estimators.
arXiv Detail & Related papers (2024-06-06T07:59:19Z)
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
- Prediction model for rare events in longitudinal follow-up and resampling methods [0.0]
We consider the problem of model building for rare events prediction in longitudinal follow-up studies.
We compare several resampling methods to improve standard regression models on a real-life example.
arXiv Detail & Related papers (2023-06-19T14:36:52Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Calibration tests beyond classification [30.616624345970973]
Most supervised machine learning tasks are subject to irreducible prediction errors.
Probabilistic predictive models address this limitation by providing probability distributions that represent a belief over plausible targets.
Calibrated models guarantee that the predictions are neither over- nor under-confident (a minimal calibration-check sketch appears after this list).
arXiv Detail & Related papers (2022-10-21T09:49:57Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify the forecasting uncertainty, which deterministic approaches fail to capture, using Bayesian approximation.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- CovarianceNet: Conditional Generative Model for Correct Covariance Prediction in Human Motion Prediction [71.31516599226606]
We present a new method to correctly predict the uncertainty associated with the predicted distribution of future trajectories.
Our approach, CovarianceNet, is based on a Conditional Generative Model with Gaussian latent variables.
arXiv Detail & Related papers (2021-09-07T09:38:24Z)
- Learning Prediction Intervals for Model Performance [1.433758865948252]
We propose a method to compute prediction intervals for model performance.
We evaluate our approach across a wide range of drift conditions and show substantial improvement over competitive baselines.
arXiv Detail & Related papers (2020-12-15T21:32:03Z)
- Performance metrics for intervention-triggering prediction models do not reflect an expected reduction in outcomes from using the model [71.9860741092209]
Clinical researchers often select among and evaluate risk prediction models.
Standard metrics calculated from retrospective data are only related to model utility under certain assumptions.
When predictions are delivered repeatedly throughout time, the relationship between standard metrics and utility is further complicated.
arXiv Detail & Related papers (2020-06-02T16:26:49Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
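The calibration-related entries above state that calibrated probabilistic models are neither over- nor under-confident. As a minimal illustration of what such a check can look like (not the procedure of either paper), the sketch below computes probability integral transform (PIT) values for a deliberately too-narrow Gaussian forecast and tests them for uniformity; the data, forecast parameters, and the choice of a KS test are illustrative assumptions.

```python
# Illustrative calibration check via the probability integral transform (PIT):
# for a well-calibrated Gaussian forecast the PIT values are uniform on [0, 1].
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y_obs = rng.normal(0.0, 1.5, size=5000)           # outcomes with true std 1.5
mu, sigma = 0.0, 1.0                              # forecast is too narrow (over-confident)
pit = stats.norm.cdf(y_obs, loc=mu, scale=sigma)  # PIT values of the forecast
print(stats.kstest(pit, "uniform"))               # small p-value flags miscalibration
```

With sigma set to the true value of 1.5, the PIT values would be approximately uniform and the test would not reject.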
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.