Adaptive learning for financial markets mixing model-based and
model-free RL for volatility targeting
- URL: http://arxiv.org/abs/2104.10483v2
- Date: Thu, 22 Apr 2021 09:15:21 GMT
- Title: Adaptive learning for financial markets mixing model-based and
model-free RL for volatility targeting
- Authors: Eric Benhamou and David Saltiel and Serge Tabachnik and Sui Kai Wong
and François Chareyron
- Abstract summary: Model-Free Reinforcement Learning has achieved meaningful results in stable environments but, to this day, it remains problematic in regime changing environments like financial markets.
We propose to combine the best of the two techniques by selecting various model-based approaches thanks to Model-Free Deep Reinforcement Learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model-Free Reinforcement Learning has achieved meaningful results in stable
environments but, to this day, it remains problematic in regime changing
environments like financial markets. In contrast, model-based RL is able to
capture some fundamental and dynamical concepts of the environment but suffers
from cognitive bias. In this work, we propose to combine the best of the two
techniques by selecting among various model-based approaches using Model-Free
Deep Reinforcement Learning. Beyond past performance and volatility, we
include additional contextual information such as macro and risk-appetite
signals to account for implicit regime changes. We also adapt traditional RL
methods to real-life situations by training only on past data; unlike K-fold
cross-validation, our training set never contains future information. Building
on traditional statistical methods, we use "walk-forward analysis", defined by
successive training and testing on expanding periods, to assess the
robustness of the resulting agent.
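The walk-forward protocol described above can be sketched as an expanding-window split generator. This is an illustrative reconstruction, not the authors' code; the function name and fold parameters are hypothetical.

```python
def walk_forward_splits(n_samples, n_folds, min_train):
    """Yield (train, test) index lists for expanding-window walk-forward
    analysis: each training window ends strictly before its test window,
    so no future data ever leaks into the training set."""
    test_size = (n_samples - min_train) // n_folds
    for k in range(n_folds):
        train_end = min_train + k * test_size
        test_end = min(train_end + test_size, n_samples)
        yield list(range(train_end)), list(range(train_end, test_end))

# Example: 10 observations, 3 folds, at least 4 training points.
for train, test in walk_forward_splits(10, 3, 4):
    print(len(train), len(test))  # training window expands, test window slides
```

Unlike K-fold cross-validation, every index in a test window is later than every index in its training window.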
Finally, we present a test of statistical significance based on a two-tailed
t-test, to highlight the ways in which our models differ from more traditional
ones. Our experimental results show that our approach outperforms traditional
financial baseline portfolio models, such as the Markowitz model, on almost
all evaluation metrics commonly used in financial mathematics, namely net
performance, Sharpe and Sortino ratios, maximum drawdown, and maximum drawdown
over volatility.
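The evaluation metrics and the significance test above can be sketched with standard formulas. This is a hedged illustration with made-up daily returns, not the paper's data; the 1.96 threshold is the usual large-sample normal approximation to the two-tailed 5% critical value.

```python
import math
from statistics import mean, variance

def sharpe(returns):
    """Annualized Sharpe ratio from daily returns (zero risk-free rate assumed)."""
    return mean(returns) / math.sqrt(variance(returns)) * math.sqrt(252)

def sortino(returns):
    """Annualized Sortino ratio: like Sharpe, but penalizes downside moves only."""
    downside_dev = math.sqrt(sum(r * r for r in returns if r < 0) / len(returns))
    return mean(returns) / downside_dev * math.sqrt(252)

def max_drawdown(returns):
    """Largest peak-to-trough loss of the cumulative-return curve."""
    level, peak, mdd = 1.0, 1.0, 0.0
    for r in returns:
        level *= 1.0 + r
        peak = max(peak, level)
        mdd = max(mdd, 1.0 - level / peak)
    return mdd

def welch_t(a, b):
    """Two-sample Welch t statistic for the difference in mean returns."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Hypothetical daily returns for the agent and a baseline (illustrative only).
agent = [0.004, 0.002, 0.005, -0.001, 0.003, 0.006, 0.002, 0.004]
base = [0.001, -0.002, 0.002, 0.000, 0.001, -0.001, 0.002, 0.000]

t = welch_t(agent, base)
# With many observations, |t| > 1.96 rejects equal means at the 5% level.
print(round(sharpe(agent), 2), round(max_drawdown(agent), 4), round(t, 2))
```

The remaining metric, maximum drawdown over volatility, is simply `max_drawdown(r)` divided by the annualized volatility of `r`.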
Related papers
- MITA: Bridging the Gap between Model and Data for Test-time Adaptation [68.62509948690698]
Test-Time Adaptation (TTA) has emerged as a promising paradigm for enhancing the generalizability of models.
We propose Meet-In-The-Middle based MITA, which introduces energy-based optimization to encourage mutual adaptation of the model and data from opposing directions.
arXiv Detail & Related papers (2024-10-12T07:02:33Z)
- Stock Price Prediction and Traditional Models: An Approach to Achieve Short-, Medium- and Long-Term Goals [0.0]
A comparative analysis of deep learning models and traditional statistical methods for stock price prediction uses data from the Nigerian stock exchange.
Deep learning models, particularly LSTM, outperform traditional methods by capturing complex, nonlinear patterns in the data.
The findings highlight the potential of deep learning for improving financial forecasting and investment strategies.
arXiv Detail & Related papers (2024-09-29T11:20:20Z)
- Enhancing Fairness and Performance in Machine Learning Models: A Multi-Task Learning Approach with Monte-Carlo Dropout and Pareto Optimality [1.5498930424110338]
This study introduces an approach to mitigate bias in machine learning by leveraging model uncertainty.
Our approach utilizes a multi-task learning (MTL) framework combined with Monte Carlo (MC) Dropout to assess and mitigate uncertainty in predictions related to protected labels.
arXiv Detail & Related papers (2024-04-12T04:17:50Z)
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
- TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z)
- Surrogate uncertainty estimation for your time series forecasting black-box: learn when to trust [2.0393477576774752]
Our research introduces a method for uncertainty estimation.
It enhances any base regression model with reasonable uncertainty estimates.
Using various time-series forecasting data, we found that our surrogate model-based technique delivers significantly more accurate confidence intervals.
arXiv Detail & Related papers (2023-02-06T14:52:56Z)
- Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics in Limit-Order Book Markets [84.90242084523565]
Traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics.
By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention.
By addressing the use of predictive distributions to analyze errors and uncertainties associated with the estimated parameters and model forecasts, we thoroughly compare our Bayesian model with traditional ML alternatives.
arXiv Detail & Related papers (2022-03-07T18:59:54Z)
- Forex Trading Volatility Prediction using Neural Network Models [6.09960572440709]
We show how to construct the deep-learning network by the guidance of the empirical patterns of the intra-day volatility.
The numerical results show that the multiscale Long Short-Term Memory (LSTM) model with the input of multi-currency pairs consistently achieves the state-of-the-art accuracy.
arXiv Detail & Related papers (2021-12-02T12:33:12Z)
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z)
- Model Embedding Model-Based Reinforcement Learning [4.566180616886624]
Model-based reinforcement learning (MBRL) has shown its advantages in sample-efficiency over model-free reinforcement learning (MFRL).
Despite the impressive results it achieves, it still faces a trade-off between the ease of data generation and model bias.
We propose a simple and elegant model-embedding model-based reinforcement learning (MEMB) algorithm in the framework of the probabilistic reinforcement learning.
arXiv Detail & Related papers (2020-06-16T15:10:28Z)
- Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.