Optimal Linear Signal: An Unsupervised Machine Learning Framework to Optimize PnL with Linear Signals
- URL: http://arxiv.org/abs/2401.05337v1
- Date: Wed, 22 Nov 2023 21:10:59 GMT
- Title: Optimal Linear Signal: An Unsupervised Machine Learning Framework to Optimize PnL with Linear Signals
- Authors: Pierre Renucci
- Abstract summary: This study presents an unsupervised machine learning approach for optimizing Profit and Loss (PnL) in quantitative finance.
Our algorithm, akin to an unsupervised variant of linear regression, maximizes the Sharpe Ratio of PnL generated from signals constructed linearly from exogenous variables.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This study presents an unsupervised machine learning approach for optimizing
Profit and Loss (PnL) in quantitative finance. Our algorithm, akin to an
unsupervised variant of linear regression, maximizes the Sharpe Ratio of PnL
generated from signals constructed linearly from exogenous variables. The
methodology employs a linear relationship between exogenous variables and the
trading signal, with the objective of maximizing the Sharpe Ratio through
parameter optimization. Empirical application on an ETF representing U.S.
Treasury bonds demonstrates the model's effectiveness, supported by
regularization techniques to mitigate overfitting. The study concludes with
potential avenues for further development, including generalized time steps and
enhanced corrective terms.
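The abstract's core idea can be given a compact illustrative sketch: with a signal s_t = x_t^T w that is linear in the exogenous variables, the per-period PnL is p_t = s_t * r_t, and maximizing the Sharpe ratio of p over w reduces to the classic mean-variance problem, whose solution (up to positive scaling) is w ∝ C^{-1} μ. The code below is a hedged sketch of that reduction, not the paper's actual algorithm: the function name, the closed-form solve, and the ridge regularizer are assumptions of this illustration (the paper optimizes its parameters numerically and applies its own regularization scheme).

```python
import numpy as np

def optimal_linear_signal(X, r, ridge=1e-3):
    """Illustrative sketch: find weights w so the linear signal
    s_t = X[t] @ w yields PnL p_t = s_t * r[t] with maximal Sharpe ratio.

    X     : (T, d) matrix of exogenous variables
    r     : (T,) vector of next-period asset returns
    ridge : illustrative L2 regularizer to stabilize the covariance
            (the paper's exact regularization may differ)
    """
    # Per-period PnL is (X @ w) * r = (X * r[:, None]) @ w =: Z @ w
    Z = X * r[:, None]
    mu = Z.mean(axis=0)  # mean PnL contribution of each variable
    C = np.cov(Z, rowvar=False) + ridge * np.eye(Z.shape[1])
    # Maximizing mean(Z @ w) / std(Z @ w) is the classic max-Sharpe
    # problem; up to positive scaling the optimum is w = C^{-1} mu.
    w = np.linalg.solve(C, mu)
    return w
```

In practice one would rescale w to a target volatility before trading the signal; the ridge term here merely echoes the paper's point that regularization is needed to mitigate overfitting.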
Related papers
- Adaptive Federated Learning Over the Air [108.62635460744109]
We propose a federated version of adaptive gradient methods, particularly AdaGrad and Adam, within the framework of over-the-air model training.
Our analysis shows that the AdaGrad-based training algorithm converges to a stationary point at the rate of $\mathcal{O}\!\left(\ln(T) / T^{1 - \frac{1}{\alpha}}\right)$.
arXiv Detail & Related papers (2024-03-11T09:10:37Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Forecast (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- On Training Implicit Meta-Learning With Applications to Inductive Weighing in Consistency Regularization [0.0]
Implicit meta-learning (IML) requires computing second-order gradients, particularly the Hessian.
Various approximations for the Hessian have been proposed, but a systematic comparison of their compute cost, stability, generalization of the solution found, and estimation accuracy has been largely overlooked.
We show how training a "Confidence Network" to extract domain-specific features can learn to up-weight useful images and down-weight out-of-distribution samples.
arXiv Detail & Related papers (2023-10-28T15:50:03Z)
- Value-Biased Maximum Likelihood Estimation for Model-based Reinforcement Learning in Discounted Linear MDPs [16.006893624836554]
We propose to solve linear MDPs through the lens of Value-Biased Maximum Likelihood Estimation (VBMLE)
VBMLE is computationally more efficient as it only requires solving one optimization problem in each time step.
In our regret analysis, we offer a generic convergence result of MLE in linear MDPs through a novel supermartingale construct.
arXiv Detail & Related papers (2023-10-17T18:27:27Z)
- Pessimistic Nonlinear Least-Squares Value Iteration for Offline Reinforcement Learning [58.962016644796]
We propose an oracle-efficient algorithm, dubbed Pessimistic Nonlinear Least-Squares Value Iteration (PNLSVI), for offline RL with non-linear function approximation.
Our algorithm enjoys a regret bound that has a tight dependency on the function class complexity and achieves minimax optimal instance-dependent regret when specialized to linear function approximation.
arXiv Detail & Related papers (2023-10-02T17:42:01Z)
- Nonparametric Linear Feature Learning in Regression Through Regularisation [0.0]
This study focuses on supervised learning scenarios where the information resides within a lower-dimensional linear subspace of the data.
We propose a novel method for linear feature learning with non-parametric prediction, which simultaneously estimates the linear subspace.
Our approach employs empirical risk minimisation augmented with a penalty on derivatives, ensuring versatility.
arXiv Detail & Related papers (2023-07-24T12:52:55Z)
- The Power of Learned Locally Linear Models for Nonlinear Policy Optimization [26.45568696453259]
This paper conducts a rigorous analysis of a simplified variant of this strategy for general nonlinear systems.
We analyze an algorithm which iterates between estimating local linear models of nonlinear system dynamics and performing $\mathtt{iLQR}$-like policy updates.
arXiv Detail & Related papers (2023-05-16T17:13:00Z)
- Near-optimal Offline Reinforcement Learning with Linear Representation: Leveraging Variance Information with Pessimism [65.46524775457928]
Offline reinforcement learning seeks to utilize offline/historical data to optimize sequential decision-making strategies.
We study the statistical limits of offline reinforcement learning with linear model representations.
arXiv Detail & Related papers (2022-03-11T09:00:12Z)
- Learning to Refit for Convex Learning Problems [11.464758257681197]
We propose a framework to learn to estimate optimized model parameters for different training sets using neural networks.
We rigorously characterize the power of neural networks to approximate convex problems.
arXiv Detail & Related papers (2021-11-24T15:28:50Z)
- Solving Multistage Stochastic Linear Programming via Regularized Linear Decision Rules: An Application to Hydrothermal Dispatch Planning [77.34726150561087]
We propose a novel regularization scheme for linear decision rules (LDR) based on AdaSO (adaptive least absolute shrinkage and selection operator).
Experiments show that the risk of overfitting is non-negligible when using the classical non-regularized LDR to solve MSLP.
For the LHDP problem, our analysis highlights the following benefits of the proposed framework in comparison to the non-regularized benchmark.
arXiv Detail & Related papers (2021-10-07T02:36:14Z)
- LQF: Linear Quadratic Fine-Tuning [114.3840147070712]
We present the first method for linearizing a pre-trained model that achieves comparable performance to non-linear fine-tuning.
LQF consists of simple modifications to the architecture, loss function and optimization typically used for classification.
arXiv Detail & Related papers (2020-12-21T06:40:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.