Randomized Signature Methods in Optimal Portfolio Selection
- URL: http://arxiv.org/abs/2312.16448v1
- Date: Wed, 27 Dec 2023 07:27:00 GMT
- Title: Randomized Signature Methods in Optimal Portfolio Selection
- Authors: Erdinc Akyildirim, Matteo Gambara, Josef Teichmann, Syang Zhou
- Abstract summary: We present convincing empirical results on the application of Randomized Signature Methods for non-linear, non-parametric drift estimation.
We do not contribute to the theory of Randomized Signatures here, but rather present our empirical findings on portfolio selection in real-world settings including real market data and transaction costs.
- Score: 2.6490401904186758
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present convincing empirical results on the application of Randomized
Signature Methods for non-linear, non-parametric drift estimation for a
multi-variate financial market. Even though drift estimation is notoriously
ill-defined due to a small signal-to-noise ratio, one can still try to learn optimal
non-linear maps from data to future returns for the purposes of portfolio
optimization. Randomized Signatures, in contrast to classical signatures, remain
tractable in high market dimensions and provide features on a common scale.
We do not contribute to the theory of Randomized Signatures here, but rather
present our empirical findings on portfolio selection in real-world settings
including real market data and transaction costs.
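To make the pipeline concrete, here is a minimal sketch of randomized-signature features of a return path followed by a linear (ridge) readout onto future returns. The recursion follows the standard randomized-signature construction; the tanh activation, matrix scaling, ridge readout and naive weight normalisation are illustrative assumptions and are not taken from the paper.

```python
# Hedged sketch, not the authors' code: randomized-signature features plus a ridge
# readout as a non-linear, non-parametric proxy for the drift.
import numpy as np

rng = np.random.default_rng(0)

def randomized_signature(increments, k=64, activation=np.tanh):
    """Reservoir-style randomized signature of a path.

    increments: (T, d) array of increments dX_t.
    Returns (T, k) features obeying R_{t+1} = R_t + sum_i activation(A_i R_t + b_i) dX_t[i],
    with fixed random matrices A_i and biases b_i.
    """
    T, d = increments.shape
    A = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k, k))  # fixed random matrices
    b = rng.normal(size=(d, k))                             # fixed random biases
    R = rng.normal(size=k)                                  # random initial state
    feats = np.empty((T, k))
    for t in range(T):
        R = R + sum(activation(A[i] @ R + b[i]) * increments[t, i] for i in range(d))
        feats[t] = R
    return feats

def fit_drift_readout(feats, future_returns, ridge=1e-3):
    """Linear (ridge) map from randomized-signature features to next-period returns."""
    X = np.hstack([feats, np.ones((len(feats), 1))])        # add intercept
    return np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ future_returns)

# toy usage: d assets, predict next-step returns from today's features
d, T = 5, 500
returns = 0.0005 + 0.01 * rng.standard_normal((T, d))
feats = randomized_signature(returns)
W = fit_drift_readout(feats[:-1], returns[1:])              # align features with future returns
drift_hat = np.hstack([feats[-1], 1.0]) @ W                 # estimated drift for the next period
weights = drift_hat / np.sum(np.abs(drift_hat))             # naive normalisation into weights
```

Because the random matrices are fixed once, only the linear readout is fitted, which is what keeps the features on a common scale and the method cheap in high market dimensions.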
Related papers
- Mean-Variance Portfolio Selection in Long-Term Investments with Unknown Distribution: Online Estimation, Risk Aversion under Ambiguity, and Universality of Algorithms [0.0]
This paper adopts a perspective in which data are revealed gradually and continuously over time.
The performance of the proposed strategies is guaranteed under specific market conditions.
In stationary and ergodic markets, the so-called Bayesian strategy, which uses the true conditional distributions given past market observations made during investment, almost surely performs no better than the proposed strategies in terms of empirical utility, Sharpe ratio, or growth rate, even though the proposed strategies do not rely on conditional distributions. A minimal online mean-variance sketch follows.
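The sketch below is not the paper's algorithm; it only illustrates the "data revealed over time" setting with plain recursive estimates of the mean and covariance plugged into unconstrained mean-variance weights w proportional to the inverse covariance times the mean. The risk aversion and ridge values are assumptions.

```python
# Hedged sketch: online mean/covariance estimation feeding mean-variance weights.
import numpy as np

class OnlineMeanVariance:
    def __init__(self, d, risk_aversion=5.0, ridge=1e-4):
        self.n = 0
        self.mu = np.zeros(d)
        self.cov = np.eye(d) * ridge        # ridge keeps the covariance invertible early on
        self.gamma = risk_aversion
        self.ridge = ridge

    def update(self, r):
        """Incorporate one newly revealed return vector r of shape (d,)."""
        self.n += 1
        delta = r - self.mu
        self.mu += delta / self.n
        if self.n > 1:
            # recursive (biased, divide-by-n) covariance update, Welford-style
            self.cov += (np.outer(delta, r - self.mu) - self.cov) / self.n
        return self

    def weights(self):
        """Unconstrained mean-variance weights (1/gamma) * Sigma^{-1} mu, then normalised."""
        w = np.linalg.solve(self.cov + self.ridge * np.eye(len(self.mu)), self.mu) / self.gamma
        s = np.sum(np.abs(w))
        return w / s if s > 0 else w

# usage: feed returns one period at a time, rebalance as new data arrive
rng = np.random.default_rng(1)
mv = OnlineMeanVariance(d=4)
for t in range(250):
    mv.update(0.0003 + 0.01 * rng.standard_normal(4))
print(mv.weights())
```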
arXiv Detail & Related papers (2024-06-19T12:11:42Z)
- Distribution-Free Predictive Inference under Unknown Temporal Drift [1.024113475677323]
We propose a strategy for choosing an adaptive window and use the data therein to construct prediction sets.
We provide sharp coverage guarantees for our method, showing its adaptivity to the underlying temporal drift.
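As a rough illustration of the windowing idea (the paper's adaptive window-selection rule is not reproduced here), a split-conformal interval can be built from the absolute residuals of only the most recent observations, so that older data affected by drift are ignored. The drifting noise scale in the toy example is an assumption.

```python
# Hedged sketch: split-conformal half-width from a recency window of residuals.
import numpy as np

def windowed_conformal_halfwidth(residuals, window, alpha=0.1):
    """residuals: time-ordered absolute residuals |y_t - yhat_t|.
    window: number of most recent residuals to trust (chosen adaptively in the paper,
    simply passed in here). Returns the interval half-width for yhat +/- q."""
    recent = np.sort(np.asarray(residuals)[-window:])
    n = len(recent)
    k = min(n, int(np.ceil((n + 1) * (1 - alpha))))   # finite-sample conformal rank
    return recent[k - 1]

# usage: noise scale drifts upward over time, so a shorter window gives wider,
# more honest intervals for the present than the full history does
rng = np.random.default_rng(2)
abs_res = np.abs(rng.standard_normal(500)) * np.linspace(1.0, 2.0, 500)
for w in (50, 200, 500):
    print(w, windowed_conformal_halfwidth(abs_res, window=w))
```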
arXiv Detail & Related papers (2024-06-10T17:55:43Z)
- Rejection via Learning Density Ratios [50.91522897152437]
Classification with rejection emerges as a learning paradigm which allows models to abstain from making predictions.
We propose a different distributional perspective, where we seek to find an idealized data distribution which maximizes a pretrained model's performance.
Our framework is tested empirically over clean and noisy datasets.
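Purely for illustration, the rejector induced by a density ratio can be sketched as follows: abstain wherever the ratio between an idealized distribution q* and the data distribution p falls below a threshold. The Gaussian densities and the threshold are toy assumptions; the paper learns the ratio rather than assuming it in closed form.

```python
# Hedged sketch: abstain where the density ratio q*(x)/p(x) is small.
import numpy as np

def reject_via_density_ratio(x, log_q_star, log_p, tau=1.0):
    """Return (accept_mask, ratio). Abstain where q*(x)/p(x) < tau."""
    ratio = np.exp(log_q_star(x) - log_p(x))
    return ratio >= tau, ratio

# toy example: the idealized distribution concentrates where the model is reliable
log_p = lambda x: -0.5 * x**2 - 0.5 * np.log(2 * np.pi)                            # N(0, 1)
log_q = lambda x: -0.5 * (x / 0.5) ** 2 - np.log(0.5) - 0.5 * np.log(2 * np.pi)    # N(0, 0.25)

x = np.linspace(-3, 3, 13)
accept, ratio = reject_via_density_ratio(x, log_q, log_p, tau=1.0)
for xi, a, r in zip(x, accept, ratio):
    print(f"x={xi:+.2f}  ratio={r:6.2f}  {'predict' if a else 'reject'}")
```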
arXiv Detail & Related papers (2024-05-29T01:32:17Z)
- Uncertainty for Active Learning on Graphs [70.44714133412592]
Uncertainty Sampling is an Active Learning strategy that aims to improve the data efficiency of machine learning models.
We benchmark Uncertainty Sampling beyond predictive uncertainty and highlight a significant performance gap to other Active Learning strategies.
We develop ground-truth Bayesian uncertainty estimates in terms of the data generating process and prove their effectiveness in guiding Uncertainty Sampling toward optimal queries.
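A minimal sketch of the Uncertainty Sampling baseline itself is given below (the paper's graph-specific, ground-truth Bayesian uncertainty estimates are not reproduced): query the unlabelled node whose predictive entropy is largest.

```python
# Hedged sketch: entropy-based Uncertainty Sampling query selection.
import numpy as np

def entropy(p, eps=1e-12):
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p), axis=-1)

def next_query(probs, labelled_mask):
    """probs: (n_nodes, n_classes) predictive distribution of the current model.
    Returns the index of the most uncertain unlabelled node."""
    scores = entropy(probs)
    scores[labelled_mask] = -np.inf            # never re-query labelled nodes
    return int(np.argmax(scores))

# usage with a toy 5-node, 3-class predictive distribution
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.34, 0.33, 0.33],
                  [0.70, 0.20, 0.10],
                  [0.50, 0.30, 0.20]])
labelled = np.array([True, False, False, True, False])
print(next_query(probs, labelled))             # -> 2, the near-uniform node
```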
arXiv Detail & Related papers (2024-05-02T16:50:47Z)
- Nonparametric Bellman Mappings for Reinforcement Learning: Application to Robust Adaptive Filtering [3.730504020733928]
This paper designs novel nonparametric Bellman mappings in reproducing kernel Hilbert spaces (RKHSs) for reinforcement learning (RL).
The proposed mappings benefit from the rich approximating properties of RKHSs, adopt no assumptions on the statistics of the data owing to their nonparametric nature, and may operate without any training data.
As an application, the proposed mappings are employed to offer a novel solution to the problem of countering outliers in adaptive filtering.
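The following is a simplified stand-in, not the paper's RKHS construction: a kernel-smoothed Bellman backup for policy evaluation that makes no parametric assumptions about the data and reuses sampled transitions directly. The Gaussian kernel, bandwidth and toy chain are assumptions.

```python
# Hedged sketch: nonparametric (kernel-weighted) Bellman backup from sample transitions.
import numpy as np

def gaussian_kernel(s, centers, bandwidth=0.2):
    d2 = np.sum((centers - s) ** 2, axis=-1)
    w = np.exp(-d2 / (2 * bandwidth**2))
    return w / (w.sum() + 1e-12)

def kernel_bellman_backup(V, states, rewards, next_states, gamma=0.9, bandwidth=0.2):
    """(T V)(s_i) = sum_j k(s_i, s_j) [r_j + gamma * V(s'_j)] / sum_j k(s_i, s_j),
    where V(s'_j) is itself read off the sample states by kernel interpolation."""
    V_next = np.array([gaussian_kernel(s, states, bandwidth) @ V for s in next_states])
    targets = rewards + gamma * V_next
    return np.array([gaussian_kernel(s, states, bandwidth) @ targets for s in states])

# toy 1-D chain: iterate the kernel-smoothed operator to an approximate fixed point
rng = np.random.default_rng(3)
states = rng.uniform(0.0, 1.0, size=(50, 1))
next_states = np.clip(states + 0.1, 0.0, 1.0)    # deterministic drift to the right
rewards = next_states[:, 0]                      # higher position, higher reward
V = np.zeros(len(states))
for _ in range(100):
    V = kernel_bellman_backup(V, states, rewards, next_states)
print(V[:5])                                     # approximate values at the first sample states
```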
arXiv Detail & Related papers (2024-03-29T07:15:30Z)
- Generating drawdown-realistic financial price paths using path signatures [0.0]
We introduce a novel generative machine learning approach for the simulation of sequences of financial price data with drawdowns quantifiably close to empirical data.
We advocate a non-trivial Monte Carlo approach combining a variational autoencoder generative model with a drawdown reconstruction loss function.
We conclude with numerical experiments on mixed equity, bond, real estate and commodity portfolios and obtain a host of drawdown-realistic paths.
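Only the drawdown ingredient is sketched below (the variational autoencoder generator is omitted): compute the running drawdown of a price path and penalise the mismatch between the drawdowns of generated and real paths on top of an ordinary reconstruction loss. The weighting `lam` is an illustrative assumption.

```python
# Hedged sketch: running drawdown and a drawdown-aware reconstruction loss.
import numpy as np

def drawdown(prices):
    """Running drawdown D_t = 1 - P_t / max_{s<=t} P_s (0 means at the running peak)."""
    running_max = np.maximum.accumulate(prices)
    return 1.0 - prices / running_max

def drawdown_aware_loss(real_path, generated_path, lam=1.0):
    """Path reconstruction MSE plus a drawdown reconstruction term."""
    recon = np.mean((real_path - generated_path) ** 2)
    dd = np.mean((drawdown(real_path) - drawdown(generated_path)) ** 2)
    return recon + lam * dd

# usage on toy geometric-random-walk price paths
rng = np.random.default_rng(4)
real = 100 * np.exp(np.cumsum(0.0002 + 0.01 * rng.standard_normal(252)))
fake = 100 * np.exp(np.cumsum(0.0002 + 0.01 * rng.standard_normal(252)))
print("max drawdown (real):", drawdown(real).max())
print("loss:", drawdown_aware_loss(real, fake))
```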
arXiv Detail & Related papers (2023-09-08T10:06:40Z)
- Uncertainty-Aware Instance Reweighting for Off-Policy Learning [63.31923483172859]
We propose an Uncertainty-aware Inverse Propensity Score estimator (UIPS) for improved off-policy learning.
Experiment results on synthetic and three real-world recommendation datasets demonstrate the advantageous sample efficiency of the proposed UIPS estimator.
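For orientation only, the sketch below shows the vanilla inverse-propensity-score (IPS) estimator together with a simple uncertainty-aware shrinkage of the weights; the exact shrinkage used by UIPS differs, and `lam` and the variance proxy are assumptions.

```python
# Hedged sketch: IPS off-policy value estimate and an uncertainty-shrunk variant.
import numpy as np

def ips_value(rewards, target_probs, logging_probs):
    """Standard IPS estimate of the target policy's value."""
    w = target_probs / logging_probs
    return np.mean(w * rewards)

def uncertainty_shrunk_ips(rewards, target_probs, logging_probs, prop_var, lam=1.0):
    """Shrink each weight toward zero in proportion to the uncertainty of its
    logging-propensity estimate, trading a little bias for lower variance."""
    w = target_probs / logging_probs
    w_shrunk = w / (1.0 + lam * prop_var * w)
    return np.mean(w_shrunk * rewards)

# toy logged bandit data
rng = np.random.default_rng(5)
n = 10_000
logging = rng.uniform(0.05, 0.5, n)             # behaviour-policy propensities
target = rng.uniform(0.05, 0.5, n)              # target-policy propensities
rewards = rng.binomial(1, 0.3, n).astype(float)
prop_var = rng.uniform(0.0, 0.2, n)             # proxy for propensity-estimate uncertainty
print(ips_value(rewards, target, logging))
print(uncertainty_shrunk_ips(rewards, target, logging, prop_var))
```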
arXiv Detail & Related papers (2023-03-11T11:42:26Z)
- Estimating Regression Predictive Distributions with Sample Networks [17.935136717050543]
A common approach to model uncertainty is to choose a parametric distribution and fit the data to it using maximum likelihood estimation.
The chosen parametric form can be a poor fit to the data-generating distribution, resulting in unreliable uncertainty estimates.
We propose SampleNet, a flexible and scalable architecture for modeling uncertainty that avoids specifying a parametric form on the output distribution.
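The sample-based idea can be illustrated, without reproducing the SampleNet architecture or its training objective, by scoring a set of K output samples against the observed target with a distribution-free proper scoring rule such as the energy score; using this particular score is an assumption for illustration.

```python
# Hedged sketch: score sample sets with the energy score instead of fitting a
# parametric output distribution.
import numpy as np

def energy_score(samples, y):
    """Energy score of a sample set {s_k} against an observed target y (lower is better)."""
    samples = np.asarray(samples, dtype=float)
    fit = np.mean(np.abs(samples - y))
    spread = np.mean(np.abs(samples[:, None] - samples[None, :]))
    return fit - 0.5 * spread

# averaged over targets drawn from the true distribution N(2, 1), a calibrated sample
# set scores best, an over-confident or biased one worse
rng = np.random.default_rng(6)
targets = rng.normal(2.0, 1.0, 2000)
for label, (mu, sd) in [("calibrated", (2.0, 1.0)),
                        ("over-confident", (2.0, 0.1)),
                        ("biased", (4.0, 1.0))]:
    samples = rng.normal(mu, sd, 200)
    print(label, np.mean([energy_score(samples, y) for y in targets]))
```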
arXiv Detail & Related papers (2022-11-24T17:23:29Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
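A minimal sketch of the core estimate follows (the paper's kernel choice, embedding space and decomposition of uncertainty are omitted): a Nadaraya-Watson estimate of the conditional label distribution p(y | x) from labelled points, with its entropy as an uncertainty score for a new input.

```python
# Hedged sketch: Nadaraya-Watson conditional label distribution and its entropy.
import numpy as np

def nw_label_distribution(x, X, Y, n_classes, bandwidth=0.5):
    """p_hat(y | x) = sum_i k(x, x_i) 1{y_i = y} / sum_i k(x, x_i)."""
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / (2 * bandwidth**2))
    p = np.array([w[Y == c].sum() for c in range(n_classes)])
    return p / (p.sum() + 1e-12)

def nw_uncertainty(x, X, Y, n_classes, bandwidth=0.5):
    p = np.clip(nw_label_distribution(x, X, Y, n_classes, bandwidth), 1e-12, 1.0)
    return -np.sum(p * np.log(p))              # predictive entropy

# two well-separated Gaussian blobs: low uncertainty near a blob, high in between
rng = np.random.default_rng(7)
X = np.vstack([rng.normal(-2.0, 0.5, (50, 2)), rng.normal(+2.0, 0.5, (50, 2))])
Y = np.array([0] * 50 + [1] * 50)
print(nw_uncertainty(np.array([-2.0, -2.0]), X, Y, 2))   # near class 0: low entropy
print(nw_uncertainty(np.array([0.0, 0.0]), X, Y, 2))     # between classes: high entropy
```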
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Stochastic-Sign SGD for Federated Learning with Theoretical Guarantees [49.91477656517431]
Quantization-based solvers have been widely adopted in Federated Learning (FL).
No existing methods enjoy all the aforementioned properties.
We propose an intuitively simple yet theoretically sound method based on SIGNSGD to bridge the gap.
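Only the sign-quantisation idea is sketched below (the full algorithm and its guarantees involve more, e.g. privacy-preserving variants): each worker sends a stochastic one-bit sign of its gradient and the server aggregates by majority vote. The clipping bound B, learning rate and toy quadratic objective are assumptions.

```python
# Hedged sketch: stochastic sign quantisation with majority-vote aggregation.
import numpy as np

rng = np.random.default_rng(8)

def stochastic_sign(g, B):
    """Quantise each coordinate to +/-1 with P(+1) = (1 + g_i / B) / 2, g clipped to [-B, B],
    so the quantised message is an unbiased estimate of g / B."""
    p_plus = (1.0 + np.clip(g, -B, B) / B) / 2.0
    return np.where(rng.random(g.shape) < p_plus, 1.0, -1.0)

def server_step(worker_grads, B=1.0, lr=0.01):
    """Majority vote over the workers' one-bit messages; returns the model update."""
    signs = np.stack([stochastic_sign(g, B) for g in worker_grads])
    return -lr * np.sign(signs.sum(axis=0))

# usage: 5 workers, toy quadratic objective f(x) = ||x||^2 / 2, so each local grad is x + noise
x = rng.normal(size=10)
for _ in range(500):
    worker_grads = [x + 0.1 * rng.standard_normal(10) for _ in range(5)]
    x = x + server_step(worker_grads)
print(np.linalg.norm(x))   # should have shrunk toward 0
```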
arXiv Detail & Related papers (2020-02-25T15:12:15Z)