Forecasting Probability Distributions of Financial Returns with Deep Neural Networks
- URL: http://arxiv.org/abs/2508.18921v2
- Date: Sat, 30 Aug 2025 12:37:18 GMT
- Title: Forecasting Probability Distributions of Financial Returns with Deep Neural Networks
- Authors: Jakub Michańków,
- Abstract summary: CNN and Long Short-Term Memory networks are used to forecast parameters of three probability distributions: Normal, Student's t, and skewed Student's t. The models are tested on six major equity indices (S&P 500, BOVESPA, DAX, WIG, Nikkei 225, and KOSPI). Results show that deep learning models provide accurate distributional forecasts and perform competitively with classical GARCH models for Value-at-Risk estimation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This study evaluates deep neural networks for forecasting probability distributions of financial returns. 1D convolutional neural networks (CNN) and Long Short-Term Memory (LSTM) architectures are used to forecast parameters of three probability distributions: Normal, Student's t, and skewed Student's t. Using custom negative log-likelihood loss functions, distribution parameters are optimized directly. The models are tested on six major equity indices (S&P 500, BOVESPA, DAX, WIG, Nikkei 225, and KOSPI) using probabilistic evaluation metrics including Log Predictive Score (LPS), Continuous Ranked Probability Score (CRPS), and Probability Integral Transform (PIT). Results show that deep learning models provide accurate distributional forecasts and perform competitively with classical GARCH models for Value-at-Risk estimation. The LSTM with skewed Student's t distribution performs best across multiple evaluation criteria, capturing both heavy tails and asymmetry in financial returns. This work shows that deep neural networks are viable alternatives to traditional econometric models for financial risk assessment and portfolio management.
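As a hedged illustration of the direct negative log-likelihood approach described in the abstract, a minimal sketch for the Normal case is shown below, together with the closed-form CRPS of a Gaussian forecast (the Student's t and skewed Student's t variants follow the same pattern with heavier-tailed densities; function names are illustrative, not the paper's code):

```python
import math

def normal_nll(y, mu, sigma):
    """Negative log-likelihood of observation y under N(mu, sigma^2).
    Minimizing this over (mu, sigma) is the loss used to train the
    distribution-parameter outputs of the network."""
    return 0.5 * math.log(2 * math.pi * sigma ** 2) + (y - mu) ** 2 / (2 * sigma ** 2)

def normal_crps(y, mu, sigma):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) at y.
    Lower is better; CRPS reduces to absolute error for a point forecast."""
    z = (y - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)      # standard normal pdf
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))             # standard normal cdf
    return sigma * (z * (2 * cdf - 1) + 2 * pdf - 1 / math.sqrt(math.pi))
```

In a deep learning setup, the network would emit `mu` and `sigma` per time step (with a softplus or exponential transform keeping `sigma` positive), and the NLL would be averaged over the training batch.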
Related papers
- d-TreeRPO: Towards More Reliable Policy Optimization for Diffusion Language Models [45.27333046908981]
d-TreeRPO is a reliable reinforcement learning framework for dLLMs. We show that d-TreeRPO achieves significant gains on multiple reasoning benchmarks.
arXiv Detail & Related papers (2025-12-10T14:20:07Z) - DL101 Neural Network Outputs and Loss Functions [51.77969450792284]
The loss function used to train a neural network is strongly connected to its output layer from a statistical point of view. The report analyzes common activation functions for a neural network output layer, such as linear, sigmoid, ReLU, and softmax.
arXiv Detail & Related papers (2025-11-07T10:20:45Z) - NBMLSS: probabilistic forecasting of electricity prices via Neural Basis Models for Location Scale and Shape [44.99833362998488]
We deploy a Neural Basis Model for Location, Scale and Shape that blends the principled interpretability of GAMLSS with a computationally scalable shared basis decomposition. Experiments have been conducted on multiple market regions, achieving probabilistic forecasting performance comparable to that of distributional neural networks.
arXiv Detail & Related papers (2024-11-21T08:17:53Z) - Generalized Distribution Prediction for Asset Returns [0.9944647907864256]
We present a novel approach for predicting the distribution of asset returns using a quantile-based method with Long Short-Term Memory (LSTM) networks. Our model is designed in two stages: the first focuses on predicting the quantiles of normalized asset returns using asset-specific features, while the second stage incorporates market data to adjust these predictions for broader economic conditions.
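Quantile-based methods like this one are typically trained with the pinball (quantile) loss; a minimal sketch (illustrative, not the authors' code):

```python
def pinball_loss(y, q_pred, tau):
    """Pinball loss for predicted tau-quantile q_pred against observation y.
    Under-prediction is weighted by tau, over-prediction by (1 - tau),
    so minimizing it pushes q_pred toward the true tau-quantile."""
    diff = y - q_pred
    return tau * diff if diff >= 0 else (tau - 1) * diff
```

Averaging this loss over a grid of `tau` values (e.g. 0.01 to 0.99) trains a single network to output an entire predictive distribution as a set of quantiles.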
arXiv Detail & Related papers (2024-10-15T15:31:44Z) - A Probabilistic Perspective on Unlearning and Alignment for Large Language Models [48.96686419141881]
We introduce the first formal probabilistic evaluation framework for Large Language Models (LLMs). Namely, we propose novel metrics with high-probability guarantees concerning the output distribution of a model. Our metrics are application-independent and allow practitioners to make more reliable estimates about model capabilities before deployment.
arXiv Detail & Related papers (2024-10-04T15:44:23Z) - GARCH-Informed Neural Networks for Volatility Prediction in Financial Markets [0.0]
We present a new hybrid deep learning model that captures and forecasts market volatility more accurately than either class of models can on its own.
When compared to other time series models, GINN showed superior out-of-sample prediction performance in terms of the Coefficient of Determination ($R^2$), Mean Squared Error (MSE), and Mean Absolute Error (MAE).
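The three evaluation metrics named here can be computed in a few lines of plain Python (a minimal sketch; the paper's own evaluation code is not shown):

```python
def mse(y_true, y_pred):
    """Mean Squared Error: average squared deviation of forecasts."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean Absolute Error: average absolute deviation of forecasts."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def r_squared(y_true, y_pred):
    """Coefficient of determination: 1 minus the ratio of residual
    to total sum of squares; 1 is a perfect fit, 0 matches the mean."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot
```

Note that for volatility forecasts these are usually computed out-of-sample against a realized-volatility proxy rather than raw returns.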
arXiv Detail & Related papers (2024-09-30T23:53:54Z) - Uncertainty Quantification in Multivariable Regression for Material Property Prediction with Bayesian Neural Networks [37.69303106863453]
We introduce an approach for uncertainty quantification (UQ) within physics-informed BNNs.
We present case studies for predicting the creep rupture life of steel alloys.
The most promising framework for creep life prediction is BNNs based on Markov Chain Monte Carlo approximation of the posterior distribution of network parameters.
arXiv Detail & Related papers (2023-11-04T19:40:16Z) - Practical Probabilistic Model-based Deep Reinforcement Learning by Integrating Dropout Uncertainty and Trajectory Sampling [7.179313063022576]
This paper addresses the prediction stability, prediction accuracy and control capability of the current probabilistic model-based reinforcement learning (MBRL) built on neural networks.
A novel approach dropout-based probabilistic ensembles with trajectory sampling (DPETS) is proposed.
arXiv Detail & Related papers (2023-09-20T06:39:19Z) - Deep Learning Based Residuals in Non-linear Factor Models: Precision Matrix Estimation of Returns with Low Signal-to-Noise Ratio [0.0]
This paper introduces a consistent estimator and rate of convergence for the precision matrix of asset returns in large portfolios.
Our estimator remains valid even in low signal-to-noise ratio environments typical for financial markets.
arXiv Detail & Related papers (2022-09-09T20:29:54Z) - Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
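One simple way to raise prediction entropy toward the label prior (a hedged sketch of the general idea, not necessarily the authors' exact mechanism) is to mix the overconfident predictive distribution with the prior:

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def raise_entropy(pred, prior, alpha):
    """Blend a predictive distribution toward the label prior.
    alpha = 0 keeps pred unchanged; alpha = 1 returns the prior.
    For a peaked pred and a flatter prior, entropy increases with alpha."""
    return [(1 - alpha) * p + alpha * q for p, q in zip(pred, prior)]
```

In the paper's setting, `alpha` would be applied conditionally, only in regions of feature space where the model is judged unjustifiably overconfident.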
arXiv Detail & Related papers (2021-02-22T07:02:37Z) - Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z) - Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long term bonds as test assets, we investigate neural networks.
We find that our networks, when fed substantially less data, perform significantly worse.
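Monte-Carlo estimation of a Value at Risk threshold, as described in this entry, can be sketched as follows (a minimal illustration using Gaussian return draws and an empirical quantile; parameter names and the distributional assumption are this sketch's, not the paper's):

```python
import random

def monte_carlo_var(mu, sigma, alpha, n_sims, seed=42):
    """Estimate the alpha-level Value at Risk of an asset whose returns
    are simulated as N(mu, sigma^2): the empirical alpha-quantile of the
    simulated return distribution, reported as a positive loss."""
    rng = random.Random(seed)
    returns = sorted(rng.gauss(mu, sigma) for _ in range(n_sims))
    idx = int(alpha * n_sims)  # index of the empirical alpha-quantile
    return -returns[idx]
```

A neural-network variant would replace the fixed `mu` and `sigma` with model forecasts, or replace the Gaussian draws entirely with samples from a learned return distribution.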
arXiv Detail & Related papers (2020-05-04T17:41:59Z) - Recurrent neural networks and Koopman-based frameworks for temporal predictions in a low-order model of turbulence [1.95992742032823]
We show that it is possible to obtain excellent reproductions of the long-term statistics of a chaotic system with properly trained long-short-term memory networks.
A Koopman-based framework, called Koopman with nonlinear forcing (KNF), leads to the same level of accuracy in the statistics at a significantly lower computational expense.
arXiv Detail & Related papers (2020-05-01T11:05:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences.