Generating drawdown-realistic financial price paths using path signatures
- URL: http://arxiv.org/abs/2309.04507v1
- Date: Fri, 8 Sep 2023 10:06:40 GMT
- Title: Generating drawdown-realistic financial price paths using path signatures
- Authors: Emiel Lemahieu, Kris Boudt, Maarten Wyns
- Abstract summary: We introduce a novel generative machine learning approach for the simulation of sequences of financial price data with drawdowns quantifiably close to empirical data.
We advocate a non-parametric Monte Carlo approach combining a variational autoencoder generative model with a drawdown reconstruction loss function.
We conclude with numerical experiments on mixed equity, bond, real estate and commodity portfolios and obtain a host of drawdown-realistic paths.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A novel generative machine learning approach for the simulation of sequences
of financial price data with drawdowns quantifiably close to empirical data is
introduced. Applications such as pricing drawdown insurance options or
developing portfolio drawdown control strategies call for a host of
drawdown-realistic paths. Historical scenarios may be insufficient to
effectively train and backtest the strategy, while standard parametric Monte
Carlo does not adequately preserve drawdowns. We advocate a non-parametric
Monte Carlo approach combining a variational autoencoder generative model with
a drawdown reconstruction loss function. To overcome issues of numerical
complexity and non-differentiability, we approximate drawdown as a linear
function of the moments of the path, known in the literature as path
signatures. We prove the required regularity of the drawdown function and the
consistency of the approximation. Furthermore, we obtain close numerical
approximations using linear regression for fractional Brownian motion and empirical
data. We argue that linear combinations of the moments of a path yield a
mathematically non-trivial smoothing of the drawdown function, which gives one
leeway to simulate drawdown-realistic price paths by including drawdown
evaluation metrics in the learning objective. We conclude with numerical
experiments on mixed equity, bond, real estate and commodity portfolios and
obtain a host of drawdown-realistic paths.
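To make the approximation concrete, below is a minimal sketch of the first ingredient: truncated signature features computed by iterated cumulative sums of a time-augmented path, followed by a linear (ridge) regression from those features to realized maximum drawdown. This is an illustration under stated assumptions, not the paper's code: geometric Brownian motion stands in for the fractional Brownian motion and empirical data used in the paper, the signature is hand-rolled rather than taken from a dedicated library, and the truncation depth and ridge penalty are arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import Ridge

def discrete_signature(path, depth):
    """Truncated path signature approximated by iterated cumulative sums.

    Each level-k feature approximates the iterated integral of the word
    (i_1, ..., i_k) against the path's increments.
    """
    dx = np.diff(path, axis=0)                   # increments, shape (n, d)
    n, d = dx.shape
    features = []
    running = {(): np.ones(n)}                   # empty word: constant 1
    for _ in range(depth):
        nxt = {}
        for word, s in running.items():
            # value of S_w strictly *before* each increment
            first = 1.0 if word == () else 0.0
            s_prev = np.concatenate(([first], s[:-1]))
            for j in range(d):
                nxt[word + (j,)] = np.cumsum(s_prev * dx[:, j])
        features.extend(nxt[w][-1] for w in sorted(nxt))
        running = nxt
    return np.array(features)

def max_drawdown(prices):
    """Largest relative peak-to-trough decline of a price path."""
    peak = np.maximum.accumulate(prices)
    return np.max((peak - prices) / peak)

rng = np.random.default_rng(0)
n_paths, n_steps, depth = 2000, 60, 4

X, y = [], []
for _ in range(n_paths):
    log_returns = rng.normal(0.0, 0.01, n_steps)   # toy GBM stand-in
    prices = 100.0 * np.exp(np.cumsum(log_returns))
    t = np.linspace(0.0, 1.0, n_steps)
    path = np.column_stack([t, np.log(prices)])    # time-augmented path
    X.append(discrete_signature(path, depth))
    y.append(max_drawdown(prices))

model = Ridge(alpha=1e-3).fit(np.array(X), np.array(y))
print("in-sample R^2 of the linear signature model:",
      model.score(np.array(X), np.array(y)))
```

The printed R^2 gives a quick check of how well a linear functional of signature features tracks drawdown on this toy data; the paper supplies the regularity and consistency results that justify the approximation on fractional Brownian motion and empirical paths.

The second ingredient is the learning objective. The sketch below shows, in hedged form, how such a linear-in-signature drawdown surrogate can be added to a standard VAE loss. The network sizes, the weight `lam`, and the pair `sig_map` / `w` (a differentiable signature feature map and offline-fitted regression weights, as above) are hypothetical stand-ins for the paper's architecture and fitted linear functional; `PathVAE` is a generic minimal VAE, not the paper's model.

```python
import torch
import torch.nn as nn

class PathVAE(nn.Module):
    """Minimal VAE over fixed-length return sequences (illustrative only)."""

    def __init__(self, seq_len, latent_dim=8, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(seq_len, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(), nn.Linear(hidden, seq_len)
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparametrize
        return self.dec(z), mu, logvar

def loss_fn(x, x_hat, mu, logvar, sig_map, w, lam=1.0):
    """ELBO-style loss plus a drawdown reconstruction term.

    `sig_map` (hypothetical) maps a batch of paths to differentiable
    signature features; `w` holds linear weights fitted offline so that
    sig_map(x) @ w approximates the drawdown of x, as in the sketch above.
    """
    recon = ((x - x_hat) ** 2).mean()
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    dd_x, dd_hat = sig_map(x) @ w, sig_map(x_hat) @ w
    drawdown_term = ((dd_x - dd_hat) ** 2).mean()  # match smoothed drawdowns
    return recon + kl + lam * drawdown_term
```

Because the signature map is smooth in the path, the otherwise non-differentiable drawdown statistic becomes a trainable penalty, which is the leeway the abstract describes for simulating drawdown-realistic paths.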
Related papers
- Truncating Trajectories in Monte Carlo Policy Evaluation: an Adaptive Approach [51.76826149868971]
Policy evaluation via Monte Carlo simulation is at the core of many MC Reinforcement Learning (RL) algorithms.
We propose as a quality index a surrogate of the mean squared error of a return estimator that uses trajectories of different lengths.
We present an adaptive algorithm called Robust and Iterative Data collection strategy Optimization (RIDO).
arXiv Detail & Related papers (2024-10-17T11:47:56Z)
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Contextual Linear Optimization with Bandit Feedback [35.692428244561626]
Contextual linear optimization (CLO) uses predictive contextual features to reduce uncertainty in random cost coefficients.
We study a class of offline learning algorithms for CLO with bandit feedback.
We show a fast-rate regret bound for IERM that allows for misspecified model classes and flexible choices of the optimization estimate.
arXiv Detail & Related papers (2024-05-26T13:27:27Z)
- An Offline Learning Approach to Propagator Models [3.1755820123640612]
We consider an offline learning problem for an agent who first estimates an unknown price impact kernel from a static dataset.
We propose a novel approach for a nonparametric estimation of the propagator from a dataset containing correlated price trajectories, trading signals and metaorders.
We show that a trader who tries to minimise her execution costs with a greedy strategy based purely on the estimated propagator will incur suboptimal performance.
arXiv Detail & Related papers (2023-09-06T13:36:43Z)
- Reinforcement Learning for Financial Index Tracking [0.4297070083645049]
We propose the first discrete-time infinite-horizon dynamic formulation of the financial index tracking problem under both return-based tracking error and value-based tracking error.
The proposed method outperforms a benchmark method in terms of tracking accuracy and has the potential to earn extra profit through a cash withdrawal strategy.
arXiv Detail & Related papers (2023-08-05T08:34:52Z)
- A Tale of Sampling and Estimation in Discounted Reinforcement Learning [50.43256303670011]
We present a minimax lower bound on the discounted mean estimation problem.
We show that estimating the mean by directly sampling from the discounted kernel of the Markov process brings compelling statistical properties.
arXiv Detail & Related papers (2023-04-11T09:13:17Z)
- Statistical Learning with Sublinear Regret of Propagator Models [2.9628715114493502]
We consider a class of learning problems in which an agent liquidates a risky asset while creating both transient price impact driven by an unknown convolution propagator and linear temporary price impact with an unknown parameter.
We present a trading algorithm that alternates between exploration and exploitation and achieves sublinear regret with high probability.
arXiv Detail & Related papers (2023-01-12T17:16:27Z)
- Minimax-Optimal Off-Policy Evaluation with Linear Function Approximation [49.502277468627035]
This paper studies the statistical theory of batch data reinforcement learning with function approximation.
Consider the off-policy evaluation problem, which is to estimate the cumulative value of a new target policy from logged history.
arXiv Detail & Related papers (2020-02-21T19:20:57Z)
- Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning [70.01650994156797]
Off-policy evaluation of sequential decision policies from observational data is necessary in batch reinforcement learning settings such as education and healthcare.
We develop an approach that estimates the bounds of a given policy.
We prove convergence to the sharp bounds as we collect more confounded data.
arXiv Detail & Related papers (2020-02-11T16:18:14Z)
- Adaptive Correlated Monte Carlo for Contextual Categorical Sequence Generation [77.7420231319632]
We adapt contextual generation of categorical sequences to a policy gradient estimator, which evaluates a set of correlated Monte Carlo (MC) rollouts for variance control.
We also demonstrate the use of correlated MC rollouts for binary-tree softmax models, which reduce the high generation cost in large vocabulary scenarios.
arXiv Detail & Related papers (2019-12-31T03:01:55Z)