Generalized Autoregressive Score Trees and Forests
- URL: http://arxiv.org/abs/2305.18991v1
- Date: Tue, 30 May 2023 12:41:52 GMT
- Title: Generalized Autoregressive Score Trees and Forests
- Authors: Andrew J. Patton and Yasin Simsek
- Abstract summary: We propose methods to improve the forecasts from generalized autoregressive score (GAS) models by localizing their parameters using decision trees and random forests.
In our applications to stock return volatility and density prediction, the optimal GAS tree model reveals a leverage effect and a variance risk premium effect.
Our study of stock-bond dependence finds evidence of a flight-to-quality effect in the optimal GAS forest forecasts, while our analysis of high-frequency trade durations uncovers a volume-volatility effect.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose methods to improve the forecasts from generalized autoregressive
score (GAS) models (Creal et al., 2013; Harvey, 2013) by localizing their
parameters using decision trees and random forests. These methods avoid the
curse of dimensionality faced by kernel-based approaches, and allow one to draw
on information from multiple state variables simultaneously. We apply the new
models to four distinct empirical analyses, and in all applications the
proposed new methods significantly outperform the baseline GAS model. In our
applications to stock return volatility and density prediction, the optimal GAS
tree model reveals a leverage effect and a variance risk premium effect. Our
study of stock-bond dependence finds evidence of a flight-to-quality effect in
the optimal GAS forest forecasts, while our analysis of high-frequency trade
durations uncovers a volume-volatility effect.
Related papers
- Bayesian Models for Joint Selection of Features and Auto-Regressive Lags: Theory and Applications in Environmental and Financial Forecasting [0.9208007322096533]
We develop a Bayesian framework for variable selection in linear regression with autocorrelated errors.
Compared to existing methods, our framework achieves lower MSPE, improved true model component identification, and greater consistency with autocorrelated noise.
arXiv Detail & Related papers (2025-08-12T18:44:36Z) - Self-Boost via Optimal Retraining: An Analysis via Approximate Message Passing [58.52119063742121]
Retraining a model using its own predictions together with the original, potentially noisy labels is a well-known strategy for improving the model performance.
This paper addresses the question of how to optimally combine the model's predictions and the provided labels.
Our main contribution is the derivation of the Bayes optimal aggregator function to combine the current model's predictions and the given labels.
arXiv Detail & Related papers (2025-05-21T07:16:44Z) - Diffusion models for probabilistic precipitation generation from atmospheric variables [1.6099193327384094]
In Earth system models (ESMs), precipitation is not resolved explicitly, but represented by parameterizations.
We present a novel approach, based on generative machine learning, which integrates a conditional diffusion model with a UNet architecture.
Unlike traditional parameterizations, our framework efficiently produces ensemble predictions, capturing uncertainties in precipitation, and does not require fine-tuning by hand.
arXiv Detail & Related papers (2025-04-01T00:21:31Z) - Towards Hybrid Embedded Feature Selection and Classification Approach with Slim-TSF [0.0]
This study aims to uncover hidden relationships and the evolutionary characteristics of solar flares and their source regions.
Preliminary findings indicate a notable improvement, with an average increase of 5% in both the True Skill Statistic (TSS) and Heidke Skill Score (HSS).
arXiv Detail & Related papers (2024-09-06T18:12:05Z) - Ensembles of Probabilistic Regression Trees [46.53457774230618]
Tree-based ensemble methods have been successfully used for regression problems in many applications and research studies.
We study ensemble versions of probabilistic regression trees that provide smooth approximations of the objective function by assigning each observation to each region with respect to a probability distribution.
arXiv Detail & Related papers (2024-06-20T06:51:51Z) - A Natural Gas Consumption Forecasting System for Continual Learning Scenarios based on Hoeffding Trees with Change Point Detection Mechanism [3.664183482252307]
This article introduces a novel multistep ahead forecasting of natural gas consumption with change point detection integration.
The performance of the forecasting models based on the proposed approach is evaluated in a complex real-world use case.
arXiv Detail & Related papers (2023-09-07T13:52:20Z) - Model-based Causal Bayesian Optimization [74.78486244786083]
We introduce the first algorithm for Causal Bayesian Optimization with Multiplicative Weights (CBO-MW).
We derive regret bounds for CBO-MW that naturally depend on graph-related quantities.
Our experiments include a realistic demonstration of how CBO-MW can be used to learn users' demand patterns in a shared mobility system.
arXiv Detail & Related papers (2023-07-31T13:02:36Z) - Ensemble Modeling for Time Series Forecasting: an Adaptive Robust
Optimization Approach [3.7565501074323224]
This paper proposes a new methodology for building robust ensembles of time series forecasting models.
We demonstrate the effectiveness of our method through a series of synthetic experiments and real-world applications.
arXiv Detail & Related papers (2023-04-09T20:30:10Z) - On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters shared across different tasks.
We show that all of the methods are actually sparse fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded by our theory, it still remains an open problem of how to choose the tunable parameters.
arXiv Detail & Related papers (2022-11-28T17:41:48Z) - Adaptive LASSO estimation for functional hidden dynamic geostatistical
model [69.10717733870575]
We propose a novel model selection algorithm based on a penalized maximum likelihood estimator (PMLE) for functional hidden dynamic geostatistical models (f-HD).
The algorithm is based on iterative optimisation and uses an adaptive least absolute shrinkage and selector operator (GMSOLAS) penalty function, wherein the weights are obtained by the unpenalised f-HD maximum-likelihood estimators.
arXiv Detail & Related papers (2022-08-10T19:17:45Z) - Distributional Gradient Boosting Machines [77.34726150561087]
Our framework is based on XGBoost and LightGBM.
We show that our framework achieves state-of-the-art forecast accuracy.
arXiv Detail & Related papers (2022-04-02T06:32:19Z) - On Uncertainty Estimation by Tree-based Surrogate Models in Sequential
Model-based Optimization [13.52611859628841]
We revisit various ensembles of randomized trees to investigate their behavior in the perspective of prediction uncertainty estimation.
We propose a new way of constructing an ensemble of randomized trees, referred to as BwO forest, where bagging with oversampling is employed to construct bootstrapped samples.
Experimental results demonstrate the validity and good performance of BwO forest over existing tree-based models in various circumstances.
arXiv Detail & Related papers (2022-02-22T04:50:37Z) - Back2Future: Leveraging Backfill Dynamics for Improving Real-time
Predictions in Future [73.03458424369657]
In real-time forecasting in public health, data collection is a non-trivial and demanding task.
The 'backfill' phenomenon and its effect on model performance have barely been studied in the prior literature.
We formulate a novel problem and neural framework Back2Future that aims to refine a given model's predictions in real-time.
arXiv Detail & Related papers (2021-06-08T14:48:20Z) - Sparse Bayesian Causal Forests for Heterogeneous Treatment Effects
Estimation [0.0]
This paper develops a sparsity-inducing version of Bayesian Causal Forests.
It is designed to estimate heterogeneous treatment effects using observational data.
arXiv Detail & Related papers (2021-02-12T15:24:50Z) - Learnable Bernoulli Dropout for Bayesian Deep Learning [53.79615543862426]
Learnable Bernoulli dropout (LBD) is a new model-agnostic dropout scheme that considers the dropout rates as parameters jointly optimized with other model parameters.
LBD leads to improved accuracy and uncertainty estimates in image classification and semantic segmentation.
arXiv Detail & Related papers (2020-02-12T18:57:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.