The Limits of Conditional Volatility: Assessing Cryptocurrency VaR under EWMA and IGARCH Models
- URL: http://arxiv.org/abs/2601.13757v1
- Date: Tue, 20 Jan 2026 09:11:24 GMT
- Title: The Limits of Conditional Volatility: Assessing Cryptocurrency VaR under EWMA and IGARCH Models
- Authors: Ekleen Kaur
- Abstract summary: The application of the standard static Geometric Brownian Motion (GBM) model for cryptocurrency risk management resulted in a systemic failure. This study addresses a critical literature gap by comparatively testing three conditional volatility models: the EWMA/IGARCH baseline, an IGARCH model augmented with explicit mean reversion (IGARCH + MR), and a modified EGARCH-style asymmetric shock model, within a correlated Monte Carlo VaR framework. Our results demonstrate that imposing stationarity drastically underestimates downside risk (5 percent value-at-risk reduced by 50%), while the asymmetric model (Model 3) leads to severe over-penalization.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The application of the standard static Geometric Brownian Motion (GBM) model for cryptocurrency risk management resulted in a systemic failure, evidenced by an 80.67% chance of loss in the 5% value-at-risk benchmark. This study addresses a critical literature gap by comparatively testing three conditional volatility models: the EWMA/IGARCH baseline, an IGARCH model augmented with explicit mean reversion (IGARCH + MR), and a modified EGARCH-style asymmetric shock model, within a correlated Monte Carlo VaR framework. Crucially, the analysis is applied specifically to high-beta altcoins (XRP, SOL, ADA), an asset class largely neglected by mainstream GARCH literature. Our results demonstrate that imposing stationarity (IGARCH + MR) drastically underestimates downside risk (5 percent value-at-risk reduced by 50%), while the asymmetric model (Model 3) leads to severe over-penalization. The EWMA/IGARCH baseline, characterized by infinite volatility persistence (alpha + beta = 1), provided the only robust conditional volatility estimate. This finding constitutes a formal rejection of the conventional financial hypotheses of volatility mean reversion and the asymmetric leverage effect in the altcoin asset class, establishing that non-stationary frameworks are a prerequisite for regulatory-grade risk modeling in this domain.
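The EWMA/IGARCH baseline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the decay factor lambda = 0.94 (the common RiskMetrics convention), the Gaussian return assumption, and all simulation sizes are assumptions of this sketch. With alpha = 1 - lambda and beta = lambda, the recursion satisfies alpha + beta = 1, the "infinite volatility persistence" property the abstract highlights.

```python
import numpy as np

def ewma_variance(returns, lam=0.94):
    """EWMA/IGARCH(1,1) recursion: sigma2_t = lam*sigma2_{t-1} + (1-lam)*r_{t-1}^2.

    Since alpha = 1 - lam and beta = lam sum to one, shocks never decay
    (no mean reversion in variance).
    """
    sigma2 = np.empty(len(returns))
    sigma2[0] = np.var(returns)  # seed the recursion with the sample variance
    for t in range(1, len(returns)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * returns[t - 1] ** 2
    return sigma2

def monte_carlo_var(sigma, alpha=0.05, n_paths=100_000, seed=0):
    """One-day VaR at level alpha from simulated Gaussian returns
    with conditional volatility sigma; returned as a positive loss."""
    rng = np.random.default_rng(seed)
    simulated = rng.normal(0.0, sigma, size=n_paths)
    return -np.quantile(simulated, alpha)

# Toy usage on synthetic daily returns (illustrative only)
r = np.random.default_rng(1).normal(0.0, 0.05, size=500)
sigma2 = ewma_variance(r)
var_5 = monte_carlo_var(np.sqrt(sigma2[-1]))
```

In the paper's multi-asset setting the Monte Carlo step would additionally draw correlated shocks across the altcoins; the single-asset draw above is the simplest version of the same idea.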
Related papers
- Mitigating Reward Hacking in RLHF via Bayesian Non-negative Reward Modeling [49.41422138354821]
We propose a principled reward modeling framework that integrates non-negative factor analysis into the Bradley-Terry preference model. BNRM represents rewards through a sparse, non-negative latent factor generative process. We show that BNRM substantially mitigates reward over-optimization, improves robustness under distribution shifts, and yields more interpretable reward decompositions than strong baselines.
arXiv Detail & Related papers (2026-02-11T08:14:11Z) - The Limits of Lognormal: Assessing Cryptocurrency Volatility and VaR using Geometric Brownian Motion [0.0]
This study is part of a series of works to fine-tune model risk analysis for cryptocurrencies. We establish a foundational benchmark by applying the traditional industry-standard Geometric Brownian Motion (GBM) model. Results reveal limitations of the lognormal assumption in the calculated Value-at-Risk at the 5% confidence level over the one-year horizon.
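The GBM benchmark referenced above can be sketched as below; the parameter values (drift, an 80% annualized volatility typical of altcoins, path count) are illustrative assumptions of this sketch, not figures from either paper. Under GBM, the terminal price is S_T = S_0 * exp((mu - sigma^2/2) * T + sigma * sqrt(T) * Z) with Z standard normal.

```python
import numpy as np

def gbm_var(s0, mu, sigma, horizon=1.0, alpha=0.05, n_paths=100_000, seed=0):
    """VaR at level alpha under static GBM: simulate terminal prices over the
    horizon (in years) and take the (1 - alpha) quantile of the loss distribution."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    s_t = s0 * np.exp((mu - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)
    losses = s0 - s_t  # positive values are losses
    return np.quantile(losses, 1 - alpha)

# Illustrative call: one-year 5% VaR for a high-volatility asset
var_1y = gbm_var(s0=100.0, mu=0.0, sigma=0.8)
```

Because the volatility parameter here is static, this benchmark cannot react to volatility clustering, which is the limitation the conditional models in the main paper are designed to address.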
arXiv Detail & Related papers (2026-01-09T05:14:16Z) - Robust Reinforcement Learning in Finance: Modeling Market Impact with Elliptic Uncertainty Sets [57.179679246370114]
In financial applications, reinforcement learning (RL) agents are commonly trained on historical data, where their actions do not influence prices. During deployment, these agents trade in live markets where their own transactions can shift asset prices, a phenomenon known as market impact. Traditional robust RL approaches address this model misspecification by optimizing the worst-case performance over a set of uncertainties. We develop a novel class of elliptic uncertainty sets, enabling efficient and tractable robust policy evaluation.
arXiv Detail & Related papers (2025-10-22T18:22:25Z) - CoRA: Covariate-Aware Adaptation of Time Series Foundation Models [47.20786327020571]
Time Series Foundation Models (TSFMs) have shown significant impact through their model capacity, scalability, and zero-shot generalization. We propose a general covariate-aware adaptation (CoRA) framework for TSFMs.
arXiv Detail & Related papers (2025-10-14T16:20:00Z) - Conformal Risk Training: End-to-End Optimization of Conformal Risk Control [41.45834526675908]
We introduce "conformal risk training," an end-to-end approach that differentiates through conformal OCE risk control during model training or fine-tuning. Our method achieves provable risk guarantees while demonstrating significantly improved average-case performance over post-hoc approaches.
arXiv Detail & Related papers (2025-10-09T19:05:45Z) - Reinforcement Learning from Probabilistic Forecasts for Safe Decision-Making via Conditional Value-at-Risk Planning [41.52380204321823]
This paper presents the Uncertainty-Aware Markov Decision Process (UAMDP), a unified framework that couples Bayesian forecasting, posterior-sampling reinforcement learning, and planning. We evaluate UAMDP in two domains, high-frequency equity trading and retail inventory control, both marked by structural uncertainty and economic volatility.
arXiv Detail & Related papers (2025-10-09T13:46:32Z) - MARS: A Meta-Adaptive Reinforcement Learning Framework for Risk-Aware Multi-Agent Portfolio Management [7.740995234462868]
Reinforcement Learning has shown significant promise in automated portfolio management. We propose Meta-controlled Agents for a Risk-aware System (MARS). MARS employs a Heterogeneous Agent Ensemble in which each agent possesses a unique, intrinsic risk profile.
arXiv Detail & Related papers (2025-08-02T03:23:41Z) - One Token to Fool LLM-as-a-Judge [52.45386385722788]
Large language models (LLMs) are increasingly trusted as automated judges, assisting evaluation and providing reward signals for training other models. We uncover a critical vulnerability even in this reference-based paradigm: generative reward models are systematically susceptible to reward hacking.
arXiv Detail & Related papers (2025-07-11T17:55:22Z) - Predicting Stock Market Crash with Bayesian Generalised Pareto Regression [0.0]
Extreme negative returns, though rare, can cause significant financial disruption. This paper develops a Bayesian Generalised Pareto Regression model to forecast extreme losses in Indian equity markets.
arXiv Detail & Related papers (2025-06-21T02:36:05Z) - Enhancing Black-Litterman Portfolio via Hybrid Forecasting Model Combining Multivariate Decomposition and Noise Reduction [13.04801847533423]
We propose a novel hybrid forecasting model, SSA-MAEMD-TCN, to automate and improve the view generation process. Empirical tests on the Nasdaq 100 Index stocks show a significant improvement in forecasting performance compared to baseline models. The optimized portfolio performs well, with annualized returns and Sharpe ratios far exceeding those of the traditional portfolio.
arXiv Detail & Related papers (2025-05-03T10:52:57Z) - Bridging Econometrics and AI: VaR Estimation via Reinforcement Learning and GARCH Models [0.0]
We propose a hybrid framework for Value-at-Risk (VaR) estimation, combining GARCH volatility models with deep reinforcement learning. Our approach incorporates directional market forecasting using the Double Deep Q-Network (DDQN) model, treating the task as an imbalanced classification problem. Empirical validation on daily Eurostoxx 50 data covering periods of crisis and high volatility shows a significant improvement in the accuracy of VaR estimates.
arXiv Detail & Related papers (2025-04-23T11:54:22Z) - Retrieval is Not Enough: Enhancing RAG Reasoning through Test-Time Critique and Optimization [58.390885294401066]
Retrieval-augmented generation (RAG) has become a widely adopted paradigm for enabling knowledge-grounded large language models (LLMs). RAG pipelines often fail to ensure that model reasoning remains consistent with the evidence retrieved, leading to factual inconsistencies or unsupported conclusions. We propose AlignRAG, a novel iterative framework grounded in Critique-Driven Alignment (CDA). We introduce AlignRAG-auto, an autonomous variant that dynamically terminates refinement, removing the need to pre-specify the number of critique iterations.
arXiv Detail & Related papers (2025-04-21T04:56:47Z) - Combining Deep Learning and GARCH Models for Financial Volatility and
Risk Forecasting [0.0]
We develop a hybrid approach to forecasting the volatility and risk of financial instruments by combining common econometric GARCH time series models with deep learning neural networks.
For the latter, we employ Gated Recurrent Unit (GRU) networks, whereas four different specifications are used as the GARCH component: standard GARCH, EGARCH, GJR-GARCH and APARCH.
Models are tested using daily logarithmic returns on the S&P 500 index as well as gold and Bitcoin prices.
arXiv Detail & Related papers (2023-10-02T10:18:13Z) - HireVAE: An Online and Adaptive Factor Model Based on Hierarchical and
Regime-Switch VAE [113.47287249524008]
It is still an open question to build a factor model that can conduct stock prediction in an online and adaptive setting.
We propose the first deep learning based online and adaptive factor model, HireVAE, at the core of which is a hierarchical latent space that embeds the relationship between the market situation and stock-wise latent factors.
Across four commonly used real stock market benchmarks, the proposed HireVAE demonstrates superior performance in terms of active returns over previous methods.
arXiv Detail & Related papers (2023-06-05T12:58:13Z) - Providing reliability in Recommender Systems through Bernoulli Matrix
Factorization [63.732639864601914]
This paper proposes Bernoulli Matrix Factorization (BeMF) to provide both prediction values and reliability values.
BeMF acts on model-based collaborative filtering rather than on memory-based filtering.
The more reliable a prediction is, the less liable it is to be wrong.
arXiv Detail & Related papers (2020-06-05T14:24:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.