Estimating value at risk: LSTM vs. GARCH
- URL: http://arxiv.org/abs/2207.10539v1
- Date: Thu, 21 Jul 2022 15:26:07 GMT
- Title: Estimating value at risk: LSTM vs. GARCH
- Authors: Weronika Ormaniec, Marcin Pitera, Sajad Safarveisi, Thorsten Schmidt
- Abstract summary: We propose a novel value-at-risk estimator using a long short-term memory (LSTM) neural network.
Our results indicate that even for a relatively short time series, the LSTM could be used to refine or monitor risk estimation processes.
We evaluate the estimator on both simulated and market data with a focus on heteroscedasticity, finding that LSTM exhibits a similar performance to GARCH estimators on simulated data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating value-at-risk on time series data with possibly heteroscedastic
dynamics is a highly challenging task. Typically, we face a small data problem
in combination with a high degree of non-linearity, causing difficulties for
both classical and machine-learning estimation algorithms. In this paper, we
propose a novel value-at-risk estimator using a long short-term memory (LSTM)
neural network and compare its performance to benchmark GARCH estimators.
Our results indicate that even for a relatively short time series, the LSTM
could be used to refine or monitor risk estimation processes and correctly
identify the underlying risk dynamics in a non-parametric fashion. We evaluate
the estimator on both simulated and market data with a focus on
heteroscedasticity, finding that LSTM exhibits a similar performance to GARCH
estimators on simulated data, whereas on real market data it is more sensitive
towards increasing or decreasing volatility and outperforms all existing
estimators of value-at-risk in terms of exception rate and mean quantile score.
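The abstract compares estimators by exception rate and mean quantile score. As a rough illustration of these two backtesting metrics (not the paper's implementation), a minimal numpy sketch, where VaR forecasts are expressed as negative return thresholds:

```python
import numpy as np

def exception_rate(returns, var_forecasts):
    """Fraction of days on which the realized return breaches (falls below)
    the VaR forecast."""
    returns = np.asarray(returns, dtype=float)
    var_forecasts = np.asarray(var_forecasts, dtype=float)
    return float(np.mean(returns < var_forecasts))

def mean_quantile_score(returns, var_forecasts, alpha=0.05):
    """Mean pinball (quantile) loss at level alpha; lower is better."""
    returns = np.asarray(returns, dtype=float)
    var_forecasts = np.asarray(var_forecasts, dtype=float)
    diff = returns - var_forecasts
    return float(np.mean(np.where(diff >= 0, alpha * diff, (alpha - 1) * diff)))

# Toy check: a constant 5% VaR forecast against four daily returns
r = np.array([-0.02, 0.01, -0.05, 0.00])
v = np.full(4, -0.03)
er = exception_rate(r, v)   # only -0.05 breaches the -0.03 threshold
qs = mean_quantile_score(r, v)
```

For a well-calibrated 5% VaR estimator, the exception rate should be close to 0.05 over a long backtest, while the mean quantile score ranks competing estimators at the same level.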
Related papers
- Survival Models: Proper Scoring Rule and Stochastic Optimization with Competing Risks [6.9648613217501705]
SurvivalBoost outperforms 12 state-of-the-art models on 4 real-life datasets.
It provides great calibration, the ability to predict across any time horizon, and computation times faster than existing methods.
arXiv Detail & Related papers (2024-10-22T07:33:34Z)
- Risk and cross validation in ridge regression with correlated samples [72.59731158970894]
We characterize the in- and out-of-sample risks of ridge regression when the data points have arbitrary correlations.
We further extend our analysis to the case where the test point has non-trivial correlations with the training set, a setting often encountered in time series forecasting.
We validate our theory across a variety of high dimensional data.
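As a concrete anchor for the quantities this entry studies (the closed-form ridge estimator and its empirical out-of-sample risk), a minimal numpy sketch; this is a generic illustration, not the paper's asymptotic analysis:

```python
import numpy as np

def ridge(X, y, lam):
    """Closed-form ridge estimator: (X^T X + lam * I)^{-1} X^T y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def out_of_sample_risk(X_test, y_test, beta):
    """Empirical mean squared prediction error on held-out samples."""
    resid = y_test - X_test @ beta
    return float(np.mean(resid ** 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + 0.1 * rng.normal(size=50)

# Sanity check: as lam -> 0 the ridge solution approaches ordinary least squares
b_ols = np.linalg.lstsq(X, y, rcond=None)[0]
b_ridge = ridge(X, y, 1e-10)
risk = out_of_sample_risk(X, y, b_ridge)
```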
arXiv Detail & Related papers (2024-08-08T17:27:29Z)
- On the Performance of Empirical Risk Minimization with Smoothed Data [59.3428024282545]
We show that Empirical Risk Minimization (ERM) is able to achieve sublinear error whenever a class is learnable with iid data.
arXiv Detail & Related papers (2024-02-22T21:55:41Z)
- Uncertainty-Aware Deep Attention Recurrent Neural Network for Heterogeneous Time Series Imputation [0.25112747242081457]
Missingness is ubiquitous in multivariate time series and poses an obstacle to reliable downstream analysis.
We propose DEep Attention Recurrent Imputation (DEARI), which jointly estimates missing values and their associated uncertainty.
Experiments show that DEARI surpasses the SOTA in diverse imputation tasks using real-world datasets.
arXiv Detail & Related papers (2024-01-04T13:21:11Z)
- Just-In-Time Learning for Operational Risk Assessment in Power Grids [12.939739997360016]
In a grid with a significant share of renewable generation, operators will need additional tools to evaluate the operational risk.
This paper proposes a Just-In-Time Risk Assessment Learning Framework (JITRALF) as an alternative.
JITRALF trains risk surrogates, one for each hour in the day, using Machine Learning (ML) to predict the quantities needed to estimate risk.
arXiv Detail & Related papers (2022-09-26T15:11:27Z)
- DeepVol: Volatility Forecasting from High-Frequency Data with Dilated Causal Convolutions [53.37679435230207]
We propose DeepVol, a model based on Dilated Causal Convolutions that uses high-frequency data to forecast day-ahead volatility.
Our empirical results suggest that the proposed deep learning-based approach effectively learns global features from high-frequency data.
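DeepVol's building block, the dilated causal convolution, constrains each output to depend only on present and past inputs while the dilation widens the receptive field. A minimal numpy sketch of the operation itself (not the DeepVol architecture):

```python
import numpy as np

def dilated_causal_conv(x, w, dilation):
    """1-D causal convolution with dilation:
        y[t] = sum_k w[k] * x[t - k * dilation]
    Out-of-range inputs are treated as zero (implicit left padding), so y[t]
    never depends on future samples.
    """
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    for t in range(len(x)):
        for k, wk in enumerate(w):
            idx = t - k * dilation
            if idx >= 0:
                y[t] += wk * x[idx]
    return y

x = np.arange(8, dtype=float)
w = [1.0, 1.0]                          # kernel of size 2
y = dilated_causal_conv(x, w, dilation=2)

# Causality check: perturbing the last input must leave earlier outputs unchanged
x2 = x.copy()
x2[7] += 100.0
y2 = dilated_causal_conv(x2, w, dilation=2)
```

Stacking such layers with dilations 1, 2, 4, ... grows the receptive field exponentially with depth, which is what lets convolutional models cover long high-frequency windows.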
arXiv Detail & Related papers (2022-09-23T16:13:47Z)
- Long Short-Term Memory Neural Network for Financial Time Series [0.0]
We present an ensemble of independent and parallel long short-term memory neural networks for the prediction of stock price movement.
With a straightforward trading strategy, comparisons with a randomly chosen portfolio and a portfolio containing all the stocks in the index show that the portfolio resulting from the LSTM ensemble provides better average daily returns and higher cumulative returns over time.
arXiv Detail & Related papers (2022-01-20T15:17:26Z)
- Doing Great at Estimating CATE? On the Neglected Assumptions in Benchmark Comparisons of Treatment Effect Estimators [91.3755431537592]
We show that even in arguably the simplest setting, estimation under ignorability assumptions can be misleading.
We consider two popular machine learning benchmark datasets for evaluation of heterogeneous treatment effect estimators.
We highlight that the inherent characteristics of the benchmark datasets favor some algorithms over others.
arXiv Detail & Related papers (2021-07-28T13:21:27Z)
- Risk Minimization from Adaptively Collected Data: Guarantees for Supervised and Policy Learning [57.88785630755165]
Empirical risk minimization (ERM) is the workhorse of machine learning, but its model-agnostic guarantees can fail when we use adaptively collected data.
We study a generic importance sampling weighted ERM algorithm for using adaptively collected data to minimize the average of a loss function over a hypothesis class.
For policy learning, we provide rate-optimal regret guarantees that close an open gap in the existing literature whenever exploration decays to zero.
arXiv Detail & Related papers (2021-06-03T09:50:13Z)
- Prediction of financial time series using LSTM and data denoising methods [0.29923891863939933]
This paper proposes an ensemble method based on data denoising methods, including the wavelet transform (WT) and singular spectrum analysis (SSA).
As WT and SSA can extract useful information from the original sequence and avoid overfitting, the hybrid model can better grasp the sequence pattern of the closing price of the DJIA.
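For reference, basic SSA denoising works by embedding the series into a Hankel trajectory matrix, truncating its SVD, and reconstructing by diagonal averaging. A minimal numpy sketch of this generic procedure (window length and rank chosen here for illustration, not taken from the paper):

```python
import numpy as np

def ssa_denoise(series, window, rank):
    """Basic singular spectrum analysis: embed, truncate the SVD to `rank`
    components, then reconstruct via diagonal averaging (Hankelization)."""
    x = np.asarray(series, dtype=float)
    n = len(x)
    k = n - window + 1
    # Trajectory (Hankel) matrix: column j holds x[j : j + window]
    H = np.column_stack([x[j:j + window] for j in range(k)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    H_r = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Average over the anti-diagonals to map the rank-reduced matrix
    # back to a 1-D series
    out = np.zeros(n)
    counts = np.zeros(n)
    for j in range(k):
        out[j:j + window] += H_r[:, j]
        counts[j:j + window] += 1
    return out / counts

# A noisy sinusoid has a (near) rank-2 trajectory matrix, so rank=2 recovers it
t = np.linspace(0, 4 * np.pi, 200)
rng = np.random.default_rng(1)
noisy = np.sin(t) + 0.3 * rng.normal(size=t.size)
clean = ssa_denoise(noisy, window=40, rank=2)
```

The denoised output can then be fed to an LSTM in place of the raw series, which is the role SSA plays in the hybrid model described above.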
arXiv Detail & Related papers (2021-03-05T07:32:36Z)
- Learning from Similarity-Confidence Data [94.94650350944377]
We investigate a novel weakly supervised learning problem of learning from similarity-confidence (Sconf) data.
We propose an unbiased estimator of the classification risk that can be calculated from only Sconf data and show that the estimation error bound achieves the optimal convergence rate.
arXiv Detail & Related papers (2021-02-13T07:31:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.