Adaptive learning for financial markets mixing model-based and
model-free RL for volatility targeting
- URL: http://arxiv.org/abs/2104.10483v2
- Date: Thu, 22 Apr 2021 09:15:21 GMT
- Title: Adaptive learning for financial markets mixing model-based and
model-free RL for volatility targeting
- Authors: Eric Benhamou and David Saltiel and Serge Tabachnik and Sui Kai Wong
and François Chareyron
- Abstract summary: Model-Free Reinforcement Learning has achieved meaningful results in stable environments but, to this day, it remains problematic in regime changing environments like financial markets.
We propose to combine the best of the two techniques by selecting various model-based approaches thanks to Model-Free Deep Reinforcement Learning.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model-Free Reinforcement Learning has achieved meaningful results in stable
environments but, to this day, it remains problematic in regime changing
environments like financial markets. In contrast, model-based RL is able to
capture some fundamental and dynamical concepts of the environment but suffers
from cognitive bias. In this work, we propose to combine the best of the two
techniques by selecting various model-based approaches thanks to Model-Free
Deep Reinforcement Learning. Using not only past performance and volatility, we
include additional contextual information such as macro and risk appetite
signals to account for implicit regime changes. We also adapt traditional RL
methods to real-life situations by training only on past data: unlike K-fold
cross-validation, the training set must never contain future information.
Building on traditional statistical methods, we use "walk-forward analysis",
defined by successive training and testing over expanding periods, to assess
the robustness of the resulting agent.
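The walk-forward protocol can be sketched as an expanding-window splitter. This is a minimal illustration under the definition above, not the paper's code; the function and parameter names are our own:

```python
# Walk-forward analysis: successive train/test splits over expanding windows,
# so the training set never contains observations from the future test block.

def walk_forward_splits(n_obs, initial_train, test_size):
    """Yield (train_indices, test_indices) pairs with an expanding train window."""
    start = initial_train
    while start + test_size <= n_obs:
        train = list(range(0, start))                  # all data up to the split point
        test = list(range(start, start + test_size))   # the next out-of-sample block
        yield train, test
        start += test_size                             # expand the training window

# Example: 10 observations, initial training window of 4, test blocks of 2.
# Three splits: train 0..3 / test [4,5]; train 0..5 / test [6,7]; train 0..7 / test [8,9].
splits = list(walk_forward_splits(10, 4, 2))
```

Each agent is retrained on the growing window and evaluated only on the block immediately following it, so no test observation ever leaks into its own training set.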
Finally, we assess the statistical significance of performance differences
using a two-tailed T-test, to highlight the ways in which our models differ
from more traditional ones. Our experimental results show that our approach
outperforms traditional financial baseline portfolio models such as the
Markowitz model in almost all evaluation metrics commonly used in financial
mathematics, namely net performance, Sharpe and Sortino ratios, maximum
drawdown, and maximum drawdown over volatility.
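The evaluation metrics named above, and the two-sample t statistic behind the two-tailed test, have compact standard definitions. The sketch below is a minimal pure-Python illustration under common conventions (zero risk-free rate, 252 trading periods per year), not the paper's implementation:

```python
import math
import statistics

def sharpe_ratio(returns, periods_per_year=252):
    """Annualized mean return over total volatility (risk-free rate assumed 0)."""
    return statistics.mean(returns) / statistics.stdev(returns) * math.sqrt(periods_per_year)

def sortino_ratio(returns, periods_per_year=252):
    """Like Sharpe, but penalizes only downside volatility."""
    downside = math.sqrt(sum(r * r for r in returns if r < 0) / len(returns))
    return statistics.mean(returns) / downside * math.sqrt(periods_per_year)

def max_drawdown(returns):
    """Largest peak-to-trough decline of the cumulative wealth curve."""
    wealth, peak, mdd = 1.0, 1.0, 0.0
    for r in returns:
        wealth *= 1.0 + r
        peak = max(peak, wealth)
        mdd = max(mdd, (peak - wealth) / peak)
    return mdd

def welch_t(a, b):
    """Two-sample t statistic (Welch); compare |t| against a two-tailed critical value."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / len(a) + vb / len(b))
```

"Maximum drawdown over volatility" is then simply the ratio of `max_drawdown` to the annualized return standard deviation, a Calmar-style risk measure.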
Related papers
- Predictive Churn with the Set of Good Models [64.05949860750235]
We study the effect of conflicting predictions over the set of near-optimal machine learning models.
We present theoretical results on the expected churn between models within the Rashomon set.
We show how our approach can be used to better anticipate, reduce, and avoid churn in consumer-facing applications.
arXiv Detail & Related papers (2024-02-12T16:15:25Z)
- Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consequently, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
arXiv Detail & Related papers (2023-03-30T17:30:42Z)
- TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, the Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2022-10-11T16:35:13Z)
- Improving Sample Efficiency of Deep Learning Models in Electricity Market [0.41998444721319217]
We propose a general framework, namely Knowledge-Augmented Training (KAT), to improve the sample efficiency.
We propose a novel data augmentation technique to generate some synthetic data, which are later processed by an improved training strategy.
Modern learning theories demonstrate the effectiveness of our method in terms of effective prediction error feedbacks, a reliable loss function, and rich gradient noises.
arXiv Detail & Related papers (2022-03-07T18:59:54Z)
- Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics in Limit-Order Book Markets [84.90242084523565]
Traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics.
By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention.
By addressing the use of predictive distributions to analyze errors and uncertainties associated with the estimated parameters and model forecasts, we thoroughly compare our Bayesian model with traditional ML alternatives.
arXiv Detail & Related papers (2021-12-02T12:33:12Z)
- Forex Trading Volatility Prediction using Neural Network Models [6.09960572440709]
We show how to construct the deep-learning network guided by the empirical patterns of intra-day volatility.
The numerical results show that the multiscale Long Short-Term Memory (LSTM) model with the input of multi-currency pairs consistently achieves the state-of-the-art accuracy.
arXiv Detail & Related papers (2021-10-21T13:59:54Z)
- Adaptive Learning on Time Series: Method and Financial Applications [0.0]
We use Adaptive Learning to forecast S&P 500 returns across multiple forecast horizons.
We find that Adaptive Learning models are on par with, if not better than, the best of the parametric models a posteriori.
We present a financial application of the learning results and an interpretation of the learning regime during the 2020 market crash.
arXiv Detail & Related papers (2020-10-27T17:54:12Z)
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-03T02:57:10Z)
- DoubleEnsemble: A New Ensemble Method Based on Sample Reweighting and Feature Selection for Financial Data Analysis [22.035287788330663]
We propose DoubleEnsemble, an ensemble framework leveraging learning trajectory based sample reweighting and shuffling based feature selection.
Our model is applicable to a wide range of base models, capable of extracting complex patterns, while mitigating the overfitting and instability issues for financial market prediction.
arXiv Detail & Related papers (2020-06-16T15:10:28Z)
- Model Embedding Model-Based Reinforcement Learning [4.566180616886624]
Model-based reinforcement learning (MBRL) has shown its advantages in sample efficiency over model-free reinforcement learning (MFRL).
Despite the impressive results it achieves, it still faces a trade-off between the ease of data generation and model bias.
We propose a simple and elegant model-embedding model-based reinforcement learning (MEMB) algorithm in the framework of probabilistic reinforcement learning.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)
- Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
arXiv Detail & Related papers (2020-05-16T19:18:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.