Generating drawdown-realistic financial price paths using path
signatures
- URL: http://arxiv.org/abs/2309.04507v1
- Date: Fri, 8 Sep 2023 10:06:40 GMT
- Title: Generating drawdown-realistic financial price paths using path
signatures
- Authors: Emiel Lemahieu, Kris Boudt, Maarten Wyns
- Abstract summary: We introduce a novel generative machine learning approach for the simulation of sequences of financial price data with drawdowns quantifiably close to empirical data.
We advocate a non-parametric Monte Carlo approach combining a variational autoencoder generative model with a drawdown reconstruction loss function.
We conclude with numerical experiments on mixed equity, bond, real estate and commodity portfolios and obtain a host of drawdown-realistic paths.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A novel generative machine learning approach for the simulation of sequences
of financial price data with drawdowns quantifiably close to empirical data is
introduced. Applications such as pricing drawdown insurance options or
developing portfolio drawdown control strategies call for a host of
drawdown-realistic paths. Historical scenarios may be insufficient to
effectively train and backtest the strategy, while standard parametric Monte
Carlo does not adequately preserve drawdowns. We advocate a non-parametric
Monte Carlo approach combining a variational autoencoder generative model with
a drawdown reconstruction loss function. To overcome issues of numerical
complexity and non-differentiability, we approximate drawdown as a linear
function of the moments of the path, known in the literature as path
signatures. We prove the required regularity of the drawdown function and
consistency of the approximation. Furthermore, we obtain close numerical
approximations using linear regression for fractional Brownian and empirical
data. We argue that linear combinations of the moments of a path yield a
mathematically non-trivial smoothing of the drawdown function, which gives one
leeway to simulate drawdown-realistic price paths by including drawdown
evaluation metrics in the learning objective. We conclude with numerical
experiments on mixed equity, bond, real estate and commodity portfolios and
obtain a host of drawdown-realistic paths.
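The abstract's core device, approximating maximum drawdown as a linear function of path-signature terms fitted by linear regression, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it time-augments each path, computes signature terms only up to level 2 by hand (the paper uses a higher truncation order, so this fit is coarse), and fits coefficients by least squares on simulated Brownian paths. All function names and parameters here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_drawdown(path):
    # Maximum drop from a running peak to the current value.
    running_max = np.maximum.accumulate(path)
    return np.max(running_max - path)

def signature_level2(path):
    # Time-augment the 1-d path to (t, x); without augmentation the
    # signature of a 1-d path reduces to powers of the total increment.
    n = len(path)
    t = np.linspace(0.0, 1.0, n)
    X = np.column_stack([t, path])
    dX = np.diff(X, axis=0)           # piecewise increments
    lvl1 = dX.sum(axis=0)             # level-1 terms: total increments
    # Level-2 iterated integrals  S^{ij} = int (X^i - X^i_0) dX^j,
    # approximated with a left-point Riemann sum.
    centered = X[:-1] - X[0]
    lvl2 = centered.T @ dX            # 2x2 array of level-2 terms
    return np.concatenate([lvl1, lvl2.ravel()])

# Regress max drawdown on signature features over simulated Brownian paths.
n_paths, n_steps = 2000, 250
paths = np.cumsum(rng.standard_normal((n_paths, n_steps)) * 0.01, axis=1)
F = np.array([signature_level2(p) for p in paths])
y = np.array([max_drawdown(p) for p in paths])
F1 = np.column_stack([np.ones(n_paths), F])   # add intercept
beta, *_ = np.linalg.lstsq(F1, y, rcond=None)
pred = F1 @ beta                              # smooth, linear-in-signature proxy
```

Because the fitted proxy is linear in the signature terms, it is differentiable in the path, which is what lets a drawdown evaluation metric be placed inside a gradient-based learning objective such as a VAE reconstruction loss.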
Related papers
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Contextual Linear Optimization with Bandit Feedback [35.692428244561626]
We study a class of algorithms for contextual linear optimization (CLO) with bandit feedback.
We show a fast-rate regret bound for IERM that allows for misspecified model classes and flexible choices of the optimization estimate.
A byproduct of our theory of independent interest is a fast-rate regret bound for IERM with full feedback and a misspecified policy class.
arXiv Detail & Related papers (2024-05-26T13:27:27Z)
- Randomized Signature Methods in Optimal Portfolio Selection [2.6490401904186758]
We present convincing empirical results on the application of Randomized Signature Methods for non-linear, non-parametric drift estimation.
We do not contribute to the theory of Randomized Signatures here, but rather present our empirical findings on portfolio selection in real world settings including real market data and transaction costs.
arXiv Detail & Related papers (2023-12-27T07:27:00Z)
- An Offline Learning Approach to Propagator Models [3.1755820123640612]
We consider an offline learning problem for an agent who first estimates an unknown price impact kernel from a static dataset.
We propose a novel approach for a nonparametric estimation of the propagator from a dataset containing correlated price trajectories, trading signals and metaorders.
We show that a trader who tries to minimise her execution costs by using a greedy strategy purely based on the estimated propagator will encounter suboptimality.
arXiv Detail & Related papers (2023-09-06T13:36:43Z)
- Reinforcement Learning for Financial Index Tracking [0.4297070083645049]
We propose the first discrete-time infinite-horizon dynamic formulation of the financial index tracking problem under both return-based tracking error and value-based tracking error.
The proposed method outperforms a benchmark method in terms of tracking accuracy and has the potential to earn extra profit through a cash withdrawal strategy.
arXiv Detail & Related papers (2023-08-05T08:34:52Z)
- A Tale of Sampling and Estimation in Discounted Reinforcement Learning [50.43256303670011]
We present a minimax lower bound on the discounted mean estimation problem.
We show that estimating the mean by directly sampling from the discounted kernel of the Markov process brings compelling statistical properties.
arXiv Detail & Related papers (2023-04-11T09:13:17Z)
- Statistical Learning with Sublinear Regret of Propagator Models [2.9628715114493502]
We consider a class of learning problems in which an agent liquidates a risky asset while creating both transient price impact, driven by an unknown convolution propagator, and linear temporary price impact with an unknown parameter.
We present a trading algorithm that alternates between exploration and exploitation and achieves sublinear regret with high probability.
arXiv Detail & Related papers (2023-01-12T17:16:27Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Minimax-Optimal Off-Policy Evaluation with Linear Function Approximation [49.502277468627035]
This paper studies the statistical theory of batch data reinforcement learning with function approximation.
Consider the off-policy evaluation problem, which is to estimate the cumulative value of a new target policy from logged history.
arXiv Detail & Related papers (2020-02-21T19:20:57Z)
- Confounding-Robust Policy Evaluation in Infinite-Horizon Reinforcement Learning [70.01650994156797]
Off-policy evaluation of sequential decision policies from observational data is necessary in batch reinforcement learning settings such as education and healthcare.
We develop an approach that estimates bounds on the value of a given policy.
We prove convergence to the sharp bounds as we collect more confounded data.
arXiv Detail & Related papers (2020-02-11T16:18:14Z)
- Adaptive Correlated Monte Carlo for Contextual Categorical Sequence Generation [77.7420231319632]
We adapt contextual generation of categorical sequences to a policy gradient estimator, which evaluates a set of correlated Monte Carlo (MC) rollouts for variance control.
We also demonstrate the use of correlated MC rollouts for binary-tree softmax models, which reduce the high generation cost in large vocabulary scenarios.
arXiv Detail & Related papers (2019-12-31T03:01:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.