Graph Neural Networks for Forecasting Multivariate Realized Volatility
with Spillover Effects
- URL: http://arxiv.org/abs/2308.01419v1
- Date: Tue, 1 Aug 2023 14:39:03 GMT
- Title: Graph Neural Networks for Forecasting Multivariate Realized Volatility
with Spillover Effects
- Authors: Chao Zhang, Xingyue Pu, Mihai Cucuringu, Xiaowen Dong
- Abstract summary: The proposed model offers the benefits of incorporating spillover effects from multi-hop neighbors, capturing nonlinear relationships, and flexible training with different loss functions.
Our results consistently indicate that training with the Quasi-likelihood loss leads to substantial improvements in model performance.
- Score: 16.260673340556135
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We present a novel methodology for modeling and forecasting multivariate
realized volatilities using customized graph neural networks to incorporate
spillover effects across stocks. The proposed model offers the benefits of
incorporating spillover effects from multi-hop neighbors, capturing nonlinear
relationships, and flexible training with different loss functions. Our
empirical findings provide compelling evidence that incorporating spillover
effects from multi-hop neighbors alone does not yield a clear advantage in
terms of predictive accuracy. However, modeling nonlinear spillover effects
enhances the forecasting accuracy of realized volatilities, particularly for
short-term horizons of up to one week. Moreover, our results consistently
indicate that training with the Quasi-likelihood loss leads to substantial
improvements in model performance compared to the commonly-used mean squared
error. A comprehensive series of empirical evaluations in alternative settings
confirm the robustness of our results.
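The Quasi-likelihood (QLIKE) loss highlighted in the abstract is a standard loss for variance forecasts that, unlike MSE, penalizes under-prediction of volatility more heavily than over-prediction. As a hedged illustration (function names are ours and the paper's exact formulation may differ), a minimal NumPy sketch:

```python
import numpy as np

def qlike_loss(rv_true, rv_pred, eps=1e-12):
    """Quasi-likelihood (QLIKE) loss for realized-variance forecasts.

    QLIKE(y, h) = y/h - log(y/h) - 1, which is zero iff h == y.
    The loss is asymmetric: under-predicting variance costs more
    than over-predicting it by the same ratio.
    """
    ratio = rv_true / np.maximum(rv_pred, eps)  # guard against h -> 0
    return np.mean(ratio - np.log(ratio) - 1.0)

def mse_loss(rv_true, rv_pred):
    """Mean squared error, the commonly-used baseline loss."""
    return np.mean((rv_true - rv_pred) ** 2)
```

This asymmetry is one intuition for why QLIKE-trained models can outperform MSE-trained ones on volatility targets, where large under-predictions are economically costly.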
Related papers
- Same Error, Different Function: The Optimizer as an Implicit Prior in Financial Time Series [0.5405981353784005]
We show that different model-training-pipeline pairs with identical test loss learn qualitatively different functions. We conclude that in underspecified settings, optimization acts as a consequential source of inductive bias.

arXiv Detail & Related papers (2026-03-03T05:47:19Z) - Abstain Mask Retain Core: Time Series Prediction by Adaptive Masking Loss with Representation Consistency [4.047219770183742]
Time series forecasting plays a pivotal role in critical domains such as energy management and financial markets. This study reveals a counterintuitive phenomenon: appropriately truncating historical data can enhance prediction accuracy. We propose an innovative solution termed Adaptive Masking Loss with Representation Consistency.
arXiv Detail & Related papers (2025-10-22T19:23:53Z) - Interpretable Deep Regression Models with Interval-Censored Failure Time Data [1.2993568435938014]
Deep learning methods for interval-censored data remain underexplored and limited to specific data type or model.
This work proposes a general regression framework for interval-censored data with a broad class of partially linear transformation models.
Applying our method to the Alzheimer's Disease Neuroimaging Initiative dataset yields novel insights and improved predictive performance compared to traditional approaches.
arXiv Detail & Related papers (2025-03-25T15:27:32Z) - Dissecting Representation Misalignment in Contrastive Learning via Influence Function [15.28417468377201]
We introduce the Extended Influence Function for Contrastive Loss (ECIF), an influence function crafted for contrastive loss.
ECIF considers both positive and negative samples and provides a closed-form approximation of contrastive learning models.
Building upon ECIF, we develop a series of algorithms for data evaluation, misalignment detection, and misprediction trace-back tasks.
arXiv Detail & Related papers (2024-11-18T15:45:41Z) - Learning Latent Graph Structures and their Uncertainty [63.95971478893842]
Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy.
As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task.
arXiv Detail & Related papers (2024-05-30T10:49:22Z) - From Reactive to Proactive Volatility Modeling with Hemisphere Neural Networks [0.0]
We reinvigorate maximum likelihood estimation (MLE) for macroeconomic density forecasting through a novel neural network architecture with dedicated mean and variance hemispheres.
Our Hemisphere Neural Network (HNN) provides proactive volatility forecasts based on leading indicators when it can, and reactive volatility based on the magnitude of previous prediction errors when it must.
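A common way to realize dedicated mean and variance heads trained by MLE is a Gaussian negative log-likelihood with the variance head parameterized in log space. This is a minimal sketch of that general idea, not the HNN's actual architecture or likelihood:

```python
import numpy as np

def gaussian_nll(y, mu, log_var):
    """Average Gaussian negative log-likelihood of y under N(mu, exp(log_var)).

    In a two-headed network, `mu` would come from the mean hemisphere
    and `log_var` from the variance hemisphere; predicting the
    log-variance keeps the variance positive without constraints.
    """
    var = np.exp(log_var)
    return np.mean(0.5 * (np.log(2.0 * np.pi) + log_var + (y - mu) ** 2 / var))
```

Minimizing this loss in `log_var` at a fixed residual drives the predicted variance toward the squared residual, which is what lets the variance head act as a volatility forecast.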
arXiv Detail & Related papers (2023-11-27T21:37:50Z) - Spurious Feature Diversification Improves Out-of-distribution Generalization [43.84284578270031]
Generalization to out-of-distribution (OOD) data is a critical challenge in machine learning.
We study WiSE-FT, a popular weight space ensemble method that interpolates between a pre-trained and a fine-tuned model.
We observe an unexpected "FalseFalseTrue" phenomenon, in which WiSE-FT successfully corrects many cases where each individual model makes incorrect predictions.
arXiv Detail & Related papers (2023-09-29T13:29:22Z) - Structured Radial Basis Function Network: Modelling Diversity for
Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important in forecasting nonstationary processes or with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate this tessellation and approximate the multiple hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z) - On the Efficacy of Generalization Error Prediction Scoring Functions [33.24980750651318]
Generalization error predictors (GEPs) aim to predict model performance on unseen distributions by deriving dataset-level error estimates from sample-level scores.
We rigorously study the effectiveness of popular scoring functions (confidence, local manifold smoothness, model agreement) independent of mechanism choice.
arXiv Detail & Related papers (2023-03-23T18:08:44Z) - Toward Robust Uncertainty Estimation with Random Activation Functions [3.0586855806896045]
We propose a novel approach for uncertainty quantification via ensembles, called Random Activation Functions (RAFs) Ensemble.
RAFs Ensemble outperforms state-of-the-art ensemble uncertainty quantification methods on both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-02-28T13:17:56Z) - Churn Reduction via Distillation [54.5952282395487]
We show an equivalence between training with distillation using the base model as the teacher and training with an explicit constraint on the predictive churn.
We then show that distillation performs strongly for low churn training against a number of recent baselines.
arXiv Detail & Related papers (2021-06-04T18:03:31Z) - Accurate and Robust Feature Importance Estimation under Distribution
Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z) - Influence Functions in Deep Learning Are Fragile [52.31375893260445]
Influence functions approximate the effect of training samples on test-time predictions.
Influence estimates are fairly accurate for shallow networks.
Hessian regularization is important to get high-quality influence estimates.
arXiv Detail & Related papers (2020-06-25T18:25:59Z) - On the Benefits of Invariance in Neural Networks [56.362579457990094]
We show that training with data augmentation leads to better estimates of risk and thereof gradients, and we provide a PAC-Bayes generalization bound for models trained with data augmentation.
We also show that compared to data augmentation, feature averaging reduces generalization error when used with convex losses, and tightens PAC-Bayes bounds.
arXiv Detail & Related papers (2020-05-01T02:08:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.