Estimating the Value-at-Risk by Temporal VAE
- URL: http://arxiv.org/abs/2112.01896v1
- Date: Fri, 3 Dec 2021 13:20:41 GMT
- Title: Estimating the Value-at-Risk by Temporal VAE
- Authors: Robert Sicks, Stefanie Grimm, Ralf Korn, Ivo Richert
- Abstract summary: Estimation of the value-at-risk (VaR) of a large portfolio of assets is an important task for financial institutions.
We use a temporal VAE (TempVAE) that avoids an auto-regressive structure for the observation variables.
As a result, the auto-pruning of the TempVAE works properly which also results in excellent estimation results for the VaR that beats classical GARCH-type and historical simulation approaches.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Estimation of the value-at-risk (VaR) of a large portfolio of assets is an
important task for financial institutions. As the joint log-returns of asset
prices can often be projected to a latent space of a much smaller dimension,
the use of a variational autoencoder (VAE) for estimating the VaR is a natural
suggestion. To ensure the bottleneck structure of autoencoders when learning
sequential data, we use a temporal VAE (TempVAE) that avoids an auto-regressive
structure for the observation variables. However, the low signal-to-noise
ratio of financial data in combination with the auto-pruning property of a VAE
typically makes the use of a VAE prone to posterior collapse. Therefore, we
propose to use annealing of the regularization to mitigate this effect. As a
result, the auto-pruning of the TempVAE works properly which also results in
excellent estimation results for the VaR that beats classical GARCH-type and
historical simulation approaches when applied to real data.
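Two of the ingredients above admit small illustrative sketches: the historical-simulation baseline the TempVAE is compared against, and a linear annealing ("warm-up") schedule for the KL regularization weight of the kind used to mitigate posterior collapse. All names, the 99% level, and the linear ramp are assumptions for illustration, not details taken from the paper.

```python
import math

def historical_var(returns, level=0.99):
    """Historical-simulation VaR: the empirical loss quantile at `level`.
    Losses are negated returns, so the result is a positive loss number."""
    losses = sorted(-r for r in returns)
    idx = min(math.ceil(level * len(losses)) - 1, len(losses) - 1)
    return losses[idx]

def kl_weight(step, warmup_steps=1000):
    """Linear KL annealing: the regularization weight ramps from 0 to 1
    over `warmup_steps` training steps, then stays at 1."""
    return min(1.0, step / warmup_steps)
```

During training, the VAE loss would be `reconstruction + kl_weight(step) * kl_divergence`, so the bottleneck (auto-pruning) pressure is applied only gradually.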
Related papers
- $V_0$: A Generalist Value Model for Any Policy at State Zero [80.7505802128501]
Policy methods rely on a baseline to measure the relative advantage of an action. This baseline is typically estimated by a Value Model (Critic), often as large as the policy model itself. We propose a Generalist Value Model capable of estimating the expected performance of any model on unseen prompts.
arXiv Detail & Related papers (2026-02-03T14:35:23Z) - OPV: Outcome-based Process Verifier for Efficient Long Chain-of-Thought Verification [91.15649744496834]
We propose the Outcome-based Process Verifier (OPV), which verifies the rationale process of summarized outcomes from long chains of thought. OPV achieves new state-of-the-art results on our held-out OPV-Bench, outperforming much larger open-source models such as Qwen3-Max-Preview with an F1 score of 83.1 compared to 76.3.
arXiv Detail & Related papers (2025-12-11T15:47:38Z) - Auto-bidding under Return-on-Spend Constraints with Uncertainty Quantification [11.402112814133034]
Auto-bidding systems are widely used in advertising to automatically determine bid values under constraints such as total budget and Return-on-Spend (RoS) targets. This paper considers the more realistic scenario where the true value is unknown. We propose a novel method that uses conformal prediction to quantify the uncertainty of these values based on machine learning methods trained on historical bidding data with contextual features.
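The split-conformal idea behind such uncertainty quantification can be sketched as follows (the 90% coverage level and all names are illustrative assumptions, not the paper's method):

```python
import math

def conformal_interval(cal_residuals, point_pred, alpha=0.1):
    """Split-conformal interval: point prediction +/- the conservative
    (1 - alpha) empirical quantile of absolute calibration residuals.
    With exchangeable data this covers the truth with prob >= 1 - alpha."""
    scores = sorted(abs(r) for r in cal_residuals)
    n = len(scores)
    # conservative rank ceil((n + 1)(1 - alpha)), clipped to n
    k = min(math.ceil((n + 1) * (1 - alpha)), n)
    q = scores[k - 1]
    return (point_pred - q, point_pred + q)
```

The calibration residuals would come from a held-out slice of the historical bidding data; the interval width then quantifies value uncertainty for the bidding policy.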
arXiv Detail & Related papers (2025-09-19T18:09:23Z) - Efficient Estimation of Regularized Tyler's M-Estimator Using Approximate LOOCV [0.0]
We consider the problem of estimating a regularization parameter, or a shrinkage coefficient $\alpha \in (0,1)$, for the Regularized Tyler's M-estimator (RTME). We propose to estimate an optimal shrinkage coefficient by setting $\alpha$ as the solution to a suitably chosen objective function. Our experiments show that the proposed approach is efficient and consistently more accurate than other methods in the literature for shrinkage coefficient estimation.
arXiv Detail & Related papers (2025-05-30T16:43:14Z) - Variational Rank Reduction Autoencoder [1.3980986259786223]
We present Variational Rank Reduction Autoencoders (VRRAEs), a model that leverages the advantages of both RRAEs and VAEs. Our results include a small synthetic dataset that showcases the robustness of VRRAEs against collapse, and three real-world datasets.
arXiv Detail & Related papers (2025-05-14T15:08:28Z) - DUPRE: Data Utility Prediction for Efficient Data Valuation [49.60564885180563]
Cooperative game theory-based data valuation, such as Data Shapley, requires evaluating the data utility and retraining the ML model for multiple data subsets.
Our framework, DUPRE, takes an alternative yet complementary approach that reduces the cost per subset evaluation by predicting data utilities instead of evaluating them by model retraining.
Specifically, given the evaluated data utilities of some data subsets, DUPRE fits a Gaussian process (GP) regression model to predict the utility of every other data subset.
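The GP-regression step can be sketched in a toy one-dimensional form (pure-Python RBF kernel and linear solver; the kernel choice, lengthscale, and noise level are illustrative assumptions, not DUPRE's actual configuration):

```python
import math

def rbf_kernel(a, b, lengthscale=1.0):
    """Squared-exponential kernel for scalar inputs."""
    return math.exp(-((a - b) ** 2) / (2.0 * lengthscale ** 2))

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def gp_posterior_mean(x_train, y_train, x_query, noise=1e-6):
    """GP posterior mean at x_query: k(x_query, X) K^{-1} y."""
    n = len(x_train)
    K = [[rbf_kernel(x_train[i], x_train[j]) + (noise if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    alpha = solve(K, y_train)  # alpha = K^{-1} y
    return sum(rbf_kernel(x_query, x_train[i]) * alpha[i] for i in range(n))
```

In DUPRE's setting, the training inputs would be (featurized) data subsets with evaluated utilities, and the posterior mean predicts the utility of an unseen subset without retraining the model.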
arXiv Detail & Related papers (2025-02-22T08:53:39Z) - Data value estimation on private gradients [84.966853523107]
For gradient-based machine learning (ML) methods, the de facto differential privacy technique is perturbing the gradients with random noise.
Data valuation attributes the ML performance to the training data and is widely used in privacy-aware applications that require enforcing DP.
We show that the answer is no with the default approach of injecting i.i.d. random noise into the gradients, because the uncertainty of the data value estimates paradoxically scales linearly with the estimation budget.
We propose to instead inject carefully correlated noise to provably remove the linear scaling of estimation uncertainty w.r.t. the budget.
arXiv Detail & Related papers (2024-12-22T13:15:51Z) - Time-Series Foundation Model for Value-at-Risk [9.090616417812306]
Foundation models, pre-trained on vast and varied datasets, can be used in a zero-shot setting with relatively minimal data.
We compare the performance of Google's model, called TimesFM, against conventional parametric and non-parametric models.
arXiv Detail & Related papers (2024-10-15T16:53:44Z) - Stock Volume Forecasting with Advanced Information by Conditional Variational Auto-Encoder [49.97673761305336]
We demonstrate the use of a Conditional Variational Auto-Encoder (CVAE) to improve the forecasts of daily stock volume time series in both short- and long-term forecasting tasks.
CVAE generates non-linear time series as out-of-sample forecasts, which have better accuracy and closer fit of correlation to the actual data.
arXiv Detail & Related papers (2024-06-19T13:13:06Z) - Causal Contrastive Learning for Counterfactual Regression Over Time [3.3523758554338734]
This paper introduces a unique approach to counterfactual regression over time, emphasizing long-term predictions.
Distinguishing itself from existing models like Causal Transformer, our approach highlights the efficacy of employing RNNs for long-term forecasting.
Our method achieves state-of-the-art counterfactual estimation results using both synthetic and real-world data.
arXiv Detail & Related papers (2024-06-01T19:07:25Z) - Contextual Linear Optimization with Bandit Feedback [35.692428244561626]
Contextual linear optimization (CLO) uses predictive contextual features to reduce uncertainty in random cost coefficients.
We study a class of offline learning algorithms for CLO with bandit feedback.
We show a fast-rate regret bound for IERM that allows for misspecified model classes and flexible choices of the optimization estimate.
arXiv Detail & Related papers (2024-05-26T13:27:27Z) - Matching aggregate posteriors in the variational autoencoder [0.5759862457142761]
The variational autoencoder (VAE) is a well-studied, deep, latent-variable model (DLVM).
This paper addresses shortcomings in VAEs by reformulating the objective function associated with VAEs in order to match the aggregate/marginal posterior distribution to the prior.
The proposed method is named the aggregate variational autoencoder (AVAE) and is built on the theoretical framework of the VAE.
arXiv Detail & Related papers (2023-11-13T19:22:37Z) - Consensus-Adaptive RANSAC [104.87576373187426]
We propose a new RANSAC framework that learns to explore the parameter space by considering the residuals seen so far via a novel attention layer.
The attention mechanism operates on a batch of point-to-model residuals, and updates a per-point estimation state to take into account the consensus found through a lightweight one-step transformer.
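For context, a minimal sketch of the classical RANSAC loop that such learned variants build on (toy 2D line fitting; the sample size, threshold, and names are illustrative, not the paper's method):

```python
import random

def ransac_line(points, iters=50, threshold=0.1, seed=0):
    """Classical RANSAC for 2D line fitting y = m*x + b.
    Repeatedly fits a line to a random 2-point sample and keeps the
    model with the largest consensus set (residual < threshold)."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue  # degenerate vertical sample, skip
        m = (y2 - y1) / (x2 - x1)
        b = y1 - m * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (m * x + b)) < threshold]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (m, b), inliers
    return best_model, best_inliers
```

The paper's contribution replaces the fixed random exploration above with an attention mechanism over the residuals seen so far; the consensus-scoring skeleton stays the same.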
arXiv Detail & Related papers (2023-07-26T08:25:46Z) - Value function estimation using conditional diffusion models for control [62.27184818047923]
We propose a simple algorithm called Diffused Value Function (DVF).
It learns a joint multi-step model of the environment-robot interaction dynamics using a diffusion model.
We show how DVF can be used to efficiently capture the state visitation measure for multiple controllers.
arXiv Detail & Related papers (2023-06-09T18:40:55Z) - DeepVol: Volatility Forecasting from High-Frequency Data with Dilated Causal Convolutions [53.37679435230207]
We propose DeepVol, a model based on Dilated Causal Convolutions that uses high-frequency data to forecast day-ahead volatility.
Our empirical results suggest that the proposed deep learning-based approach effectively learns global features from high-frequency data.
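The core primitive can be sketched as a single toy layer (DeepVol itself stacks many such layers with learned weights; the names and zero-padding choice here are illustrative assumptions):

```python
def dilated_causal_conv(x, weights, dilation=1):
    """1D dilated causal convolution: each output depends only on the
    current and past inputs, spaced `dilation` steps apart, so no
    future information leaks into the forecast."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(weights):
            idx = t - k * dilation
            acc += w * (x[idx] if idx >= 0 else 0.0)  # zero-pad the past
        out.append(acc)
    return out
```

Stacking layers with dilations 1, 2, 4, ... lets the receptive field grow exponentially, which is what makes high-frequency intraday data tractable for day-ahead volatility forecasting.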
arXiv Detail & Related papers (2022-09-23T16:13:47Z) - Learning Conditional Variational Autoencoders with Missing Covariates [0.8563354084119061]
Conditional variational autoencoders (CVAEs) are versatile deep generative models.
We develop computationally efficient methods to learn CVAEs and GP prior VAEs.
Our experiments on simulated datasets as well as on a clinical trial study show that the proposed method outperforms previous methods.
arXiv Detail & Related papers (2022-03-02T16:22:09Z) - Provably Efficient Causal Reinforcement Learning with Confounded Observational Data [135.64775986546505]
We study how to incorporate the dataset (observational data) collected offline, which is often abundantly available in practice, to improve the sample efficiency in the online setting.
We propose the deconfounded optimistic value iteration (DOVI) algorithm, which incorporates the confounded observational data in a provably efficient manner.
arXiv Detail & Related papers (2020-06-22T14:49:33Z) - Momentum Improves Normalized SGD [51.27183254738711]
We show that adding momentum provably removes the need for large batch sizes on objectives.
We show that our method is effective when employed on popular large scale tasks such as ResNet-50 and BERT pretraining.
arXiv Detail & Related papers (2020-02-09T07:00:54Z) - Detecting Changes in Asset Co-Movement Using the Autoencoder Reconstruction Ratio [5.5616364225463055]
We propose a real-time indicator to detect temporary increases in asset co-movements.
The Autoencoder Reconstruction Ratio measures how well a basket of asset returns can be modelled using a lower-dimensional set of latent variables.
arXiv Detail & Related papers (2020-01-23T22:33:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.