Deep Learning for Options Trading: An End-To-End Approach
- URL: http://arxiv.org/abs/2407.21791v1
- Date: Wed, 31 Jul 2024 17:59:09 GMT
- Title: Deep Learning for Options Trading: An End-To-End Approach
- Authors: Wee Ling Tan, Stephen Roberts, Stefan Zohren
- Abstract summary: We introduce a novel approach to options trading strategies using a highly scalable and data-driven machine learning algorithm.
We demonstrate that deep learning models trained according to our end-to-end approach exhibit significant improvements in risk-adjusted performance over existing rules-based trading strategies.
- Score: 7.148312060227716
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel approach to options trading strategies using a highly scalable and data-driven machine learning algorithm. In contrast to traditional approaches that often require specifications of underlying market dynamics or assumptions on an option pricing model, our models depart fundamentally from the need for these prerequisites, directly learning non-trivial mappings from market data to optimal trading signals. Backtesting on more than a decade of option contracts for equities listed on the S&P 100, we demonstrate that deep learning models trained according to our end-to-end approach exhibit significant improvements in risk-adjusted performance over existing rules-based trading strategies. We find that incorporating turnover regularization into the models leads to further performance enhancements, even at prohibitively high levels of transaction costs.
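To make the abstract's core idea concrete, here is a minimal, hypothetical sketch (not the authors' code) of an end-to-end trading objective: a network maps market features directly to position signals, and the training loss is the negative Sharpe ratio of the strategy's returns plus a turnover penalty. The network shape, data shapes, and cost coefficient below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of end-to-end training with turnover regularization.
# Architecture, shapes, and the cost coefficient are illustrative, not the
# paper's actual configuration.

class SignalNet(nn.Module):
    def __init__(self, n_features: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 1), nn.Tanh(),      # position in [-1, 1]
        )

    def forward(self, x):                      # x: (time, assets, features)
        return self.net(x).squeeze(-1)         # positions: (time, assets)

def loss_fn(positions, asset_returns, cost_coef=1e-3, eps=1e-8):
    # Strategy return: yesterday's position applied to today's asset return.
    strat_ret = (positions[:-1] * asset_returns[1:]).mean(dim=1)
    # Turnover: absolute day-over-day change in positions.
    turnover = (positions[1:] - positions[:-1]).abs().mean()
    sharpe = strat_ret.mean() / (strat_ret.std() + eps)
    return -sharpe + cost_coef * turnover      # maximize Sharpe, penalize churn

# Toy usage with random data.
T, A, F = 256, 10, 8
x = torch.randn(T, A, F)
r = 0.01 * torch.randn(T, A)
model = SignalNet(F)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), r)
    loss.backward()
    opt.step()
```

Raising `cost_coef` trades gross performance for lower turnover, which is the lever the abstract's turnover regularization refers to.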
Related papers
- Your Offline Policy is Not Trustworthy: Bilevel Reinforcement Learning for Sequential Portfolio Optimization [82.03139922490796]
Reinforcement learning (RL) has shown significant promise for sequential portfolio optimization tasks, such as stock trading, where the objective is to maximize cumulative returns while minimizing risk using historical data. Traditional RL approaches often produce policies that merely memorize the optimal yet impractical buying and selling behaviors within the fixed dataset. Our approach frames portfolio optimization as a new type of partial-offline RL problem and makes two technical contributions.
arXiv Detail & Related papers (2025-05-19T06:37:25Z)
- Model Steering: Learning with a Reference Model Improves Generalization Bounds and Scaling Laws [52.10468229008941]
This paper formalizes an emerging learning paradigm that uses a trained model as a reference to guide and enhance the training of a target model through strategic data selection or weighting. We provide theoretical insights into why this approach improves generalization and data efficiency compared to training without a reference model. Building on these insights, we introduce a novel method for Contrastive Language-Image Pretraining with a reference model, termed DRRho-CLIP.
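A minimal sketch of the reference-model data-selection idea described above, only as an illustration and not the paper's DRRho-CLIP objective: keep the examples on which the target model's loss still exceeds the trained reference model's loss, i.e., the examples that are learnable but not yet learned. The function name and the keep fraction are assumptions.

```python
import torch

# Hypothetical sketch of reference-model-guided data selection.
def select_batch(target_loss, reference_loss, keep_frac=0.5):
    """target_loss, reference_loss: per-example losses, shape (batch,)."""
    excess = target_loss - reference_loss      # "learnable but not yet learned"
    k = max(1, int(keep_frac * excess.numel()))
    idx = torch.topk(excess, k).indices        # highest-excess examples
    return idx

# Toy usage: keep the top 25% of a batch of 16 examples.
tgt = torch.rand(16)
ref = torch.rand(16)
print(select_batch(tgt, ref, keep_frac=0.25))
```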
arXiv Detail & Related papers (2025-05-10T16:55:03Z)
- Trading Under Uncertainty: A Distribution-Based Strategy for Futures Markets Using FutureQuant Transformer [0.13107174618549586]
We introduce the FutureQuant Transformer model, leveraging attention mechanisms to navigate these challenges. Unlike conventional models focused on point predictions, the FutureQuant model excels in forecasting the range and volatility of future prices. Its ability to parse and learn from intricate market patterns allows for enhanced decision-making.
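One standard way to forecast a range rather than a point, shown here purely as an illustration and not as the FutureQuant method, is quantile regression with the pinball loss: the model outputs several quantiles (e.g., 10th/50th/90th), from which a price range can be read off.

```python
import torch

# Illustrative sketch: the pinball (quantile) loss trains a model to
# output percentiles of the target, yielding a forecast *range*.
def pinball_loss(pred, target, quantiles):
    """pred: (batch, n_quantiles), target: (batch,), quantiles: list of floats."""
    q = torch.tensor(quantiles).unsqueeze(0)   # (1, n_quantiles)
    err = target.unsqueeze(1) - pred           # (batch, n_quantiles)
    return torch.maximum(q * err, (q - 1) * err).mean()

# Toy usage: fit the 10/50/90 percentiles of a noisy series.
pred = torch.zeros(32, 3, requires_grad=True)
target = torch.randn(32)
loss = pinball_loss(pred, target, [0.1, 0.5, 0.9])
loss.backward()
```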
arXiv Detail & Related papers (2025-05-08T18:52:04Z)
- From Demonstrations to Rewards: Alignment Without Explicit Human Preferences [55.988923803469305]
In this paper, we propose a fresh perspective on learning alignment based on inverse reinforcement learning principles.
Instead of relying on large preference data, we directly learn the reward model from demonstration data.
arXiv Detail & Related papers (2025-03-15T20:53:46Z)
- DeepClair: Utilizing Market Forecasts for Effective Portfolio Selection [29.43115584494825]
We introduce DeepClair, a novel framework for portfolio selection.
DeepClair leverages a transformer-based time-series forecasting model to predict market trends.
arXiv Detail & Related papers (2024-07-18T11:51:03Z)
- Cost-Effective Proxy Reward Model Construction with On-Policy and Active Learning [70.22819290458581]
Reinforcement learning with human feedback (RLHF) is a widely adopted approach in current large language model pipelines.
Our approach introduces two key innovations: (1) on-policy query to avoid OOD and imbalance issues in seed data, and (2) active learning to select the most informative data for preference queries.
arXiv Detail & Related papers (2024-07-02T10:09:19Z)
- F-FOMAML: GNN-Enhanced Meta-Learning for Peak Period Demand Forecasting with Proxy Data [65.6499834212641]
We formulate the demand prediction as a meta-learning problem and develop the Feature-based First-Order Model-Agnostic Meta-Learning (F-FOMAML) algorithm.
By considering domain similarities through task-specific metadata, our model improves generalization, with the excess risk decreasing as the number of training tasks increases.
Compared to existing state-of-the-art models, our method demonstrates a notable improvement in demand prediction accuracy, reducing the Mean Absolute Error by 26.24% on an internal vending machine dataset and by 1.04% on the publicly accessible JD.com dataset.
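For readers unfamiliar with the "FO" in F-FOMAML, here is a generic first-order MAML step as a hedged sketch; the paper's GNN encoder and task metadata are omitted, and the toy regression tasks, learning rates, and function names are assumptions.

```python
import copy
import torch
import torch.nn as nn

# Generic first-order MAML sketch (the "FO" part of F-FOMAML).
def fomaml_step(model, tasks, inner_lr=0.01, outer_lr=0.001, inner_steps=1):
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for support, query in tasks:                  # each task: (support, query)
        fast = copy.deepcopy(model)
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):              # inner-loop adaptation
            x, y = support
            loss = nn.functional.mse_loss(fast(x), y)
            opt.zero_grad(); loss.backward(); opt.step()
        x, y = query                              # evaluate adapted parameters
        opt.zero_grad()
        nn.functional.mse_loss(fast(x), y).backward()
        for g, p in zip(meta_grads, fast.parameters()):
            g += p.grad                           # first-order: reuse fast grads
    with torch.no_grad():                         # outer update on the meta-model
        for p, g in zip(model.parameters(), meta_grads):
            p -= outer_lr * g / len(tasks)

# Toy usage with random regression tasks.
model = nn.Linear(4, 1)
make = lambda: (torch.randn(8, 4), torch.randn(8, 1))
tasks = [(make(), make()) for _ in range(4)]
fomaml_step(model, tasks)
```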
arXiv Detail & Related papers (2024-06-23T21:28:50Z)
- An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading [16.78239969166596]
We propose an ensemble method to improve the generalization performance of trading strategies trained by deep reinforcement learning algorithms.
Our proposed ensemble method improves the out-of-sample performance compared with the benchmarks of a deep reinforcement learning strategy and a passive investment strategy.
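A minimal sketch of one plausible combination rule, averaging the position signals of several independently trained agents; the paper's actual ensembling scheme may differ, and the stand-in "agents" below are purely illustrative.

```python
import numpy as np

# Illustrative sketch of ensembling trading policies: average the
# position signals of several trained agents and clip to a valid range.
def ensemble_signal(policies, state):
    """policies: callables mapping state -> position in [-1, 1]."""
    signals = np.array([p(state) for p in policies])
    return float(np.clip(signals.mean(), -1.0, 1.0))

# Toy usage with three stand-in "agents".
agents = [lambda s: 0.8, lambda s: -0.2, lambda s: 0.5]
print(ensemble_signal(agents, state=None))   # ~0.367
```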
arXiv Detail & Related papers (2023-07-27T04:00:09Z)
- Data Cross-Segmentation for Improved Generalization in Reinforcement Learning Based Algorithmic Trading [5.75899596101548]
We propose a Reinforcement Learning (RL) algorithm that trades based on signals from a learned predictive model.
We test our algorithm on 20+ years of equity data from Bursa Malaysia.
arXiv Detail & Related papers (2023-07-18T16:00:02Z)
- Augmented Bilinear Network for Incremental Multi-Stock Time-Series Classification [83.23129279407271]
We propose a method to efficiently retain the knowledge available in a neural network pre-trained on a set of securities.
In our method, the prior knowledge encoded in a pre-trained neural network is maintained by keeping existing connections fixed.
This knowledge is adjusted for the new securities by a set of augmented connections, which are optimized using the new data.
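The mechanism described, frozen pre-trained connections plus new trainable "augmented" connections, resembles a parallel-adapter pattern. Below is a hedged sketch of that pattern with the paper's bilinear layers simplified to linear ones; class and variable names are assumptions.

```python
import torch
import torch.nn as nn

# Sketch of the described pattern: freeze the pre-trained network and
# train only a parallel set of "augmented" connections on new data.
class AugmentedLayer(nn.Module):
    def __init__(self, pretrained: nn.Linear):
        super().__init__()
        self.frozen = pretrained
        for p in self.frozen.parameters():
            p.requires_grad = False               # keep prior knowledge fixed
        self.augment = nn.Linear(pretrained.in_features,
                                 pretrained.out_features, bias=False)
        nn.init.zeros_(self.augment.weight)       # new path starts at zero

    def forward(self, x):
        return self.frozen(x) + self.augment(x)   # old knowledge + new adjustment

# Toy usage: only the augmented connections receive gradients.
layer = AugmentedLayer(nn.Linear(16, 8))
layer(torch.randn(4, 16)).sum().backward()
print(layer.frozen.weight.grad, layer.augment.weight.grad is not None)
```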
arXiv Detail & Related papers (2022-07-23T18:54:10Z)
- Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics in Limit-Order Book Markets [84.90242084523565]
Traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics.
By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention.
Using predictive distributions to analyze errors and uncertainties associated with the estimated parameters and model forecasts, we thoroughly compare our Bayesian model with traditional ML alternatives.
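The paper trains a Bayesian network with a second-order optimizer; as a lightweight stand-in for how predictive distributions are read out, here is a Monte Carlo dropout sketch. This is a swapped-in illustration of predictive uncertainty, not the paper's training scheme.

```python
import torch
import torch.nn as nn

# Illustration only: Monte Carlo dropout yields an approximate
# predictive distribution, from which mean and uncertainty follow.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(),
                      nn.Dropout(p=0.2), nn.Linear(32, 1))

def mc_predict(model, x, n_samples=100):
    model.train()                        # keep dropout active at test time
    with torch.no_grad():
        draws = torch.stack([model(x) for _ in range(n_samples)])
    return draws.mean(0), draws.std(0)   # predictive mean and uncertainty

mean, std = mc_predict(model, torch.randn(5, 10))
print(mean.shape, std.shape)             # torch.Size([5, 1]) twice
```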
arXiv Detail & Related papers (2022-03-07T18:59:54Z)
- Adaptive learning for financial markets mixing model-based and model-free RL for volatility targeting [0.0]
Model-Free Reinforcement Learning has achieved meaningful results in stable environments but, to this day, remains problematic in regime-changing environments like financial markets.
We propose to combine the best of both techniques by using model-free deep reinforcement learning to select among various model-based approaches.
arXiv Detail & Related papers (2021-04-19T19:20:22Z)
- Universal Trading for Order Execution with Oracle Policy Distillation [99.57416828489568]
We propose a novel universal trading policy optimization framework to bridge the gap between the noisy and imperfect market states and the optimal action sequences for order execution.
We show that our framework can better guide the learning of the common policy towards practically optimal execution by an oracle teacher with perfect information.
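A hedged sketch of the distillation idea: a teacher policy that sees privileged (perfect) information supervises a student that only sees the noisy market state, by matching action distributions. Networks, dimensions, and names below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of oracle policy distillation: the student mimics the action
# distribution of a teacher with access to privileged information.
student = nn.Linear(8, 3)    # noisy market state       -> action logits
teacher = nn.Linear(12, 3)   # state + privileged info  -> action logits

def distill_loss(noisy_state, full_state):
    with torch.no_grad():
        target = F.softmax(teacher(full_state), dim=-1)   # oracle's actions
    log_probs = F.log_softmax(student(noisy_state), dim=-1)
    return F.kl_div(log_probs, target, reduction="batchmean")

# Toy usage: only the student receives gradients.
loss = distill_loss(torch.randn(32, 8), torch.randn(32, 12))
loss.backward()
```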
arXiv Detail & Related papers (2021-01-28T05:52:18Z)
- Deep Learning for Portfolio Optimization [5.833272638548154]
Instead of selecting individual assets, we trade Exchange-Traded Funds (ETFs) of market indices to form a portfolio.
We compare our method with a wide range of algorithms with results showing that our model obtains the best performance over the testing period.
arXiv Detail & Related papers (2020-05-27T21:28:43Z)
- Model-Augmented Actor-Critic: Backpropagating through Paths [81.86992776864729]
Current model-based reinforcement learning approaches use the model simply as a learned black-box simulator.
We show how to make more effective use of the model by exploiting its differentiability.
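A minimal sketch of what "exploiting its differentiability" means in practice: because the learned dynamics model is differentiable, a short imagined rollout can be backpropagated end-to-end into the policy instead of treating the model as a black-box simulator. Networks, horizon, and dimensions are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Sketch: backpropagating an imagined multi-step return through a
# differentiable learned dynamics model into the policy.
dynamics = nn.Linear(4 + 2, 4)   # (state, action) -> next state
reward = nn.Linear(4 + 2, 1)     # (state, action) -> reward
policy = nn.Sequential(nn.Linear(4, 2), nn.Tanh())

def pathwise_return(state, horizon=5, gamma=0.99):
    total = 0.0
    for t in range(horizon):
        action = policy(state)
        sa = torch.cat([state, action], dim=-1)
        total = total + (gamma ** t) * reward(sa).mean()
        state = dynamics(sa)          # gradients flow through the model
    return total

# Toy usage: maximize the imagined return with respect to the policy.
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss = -pathwise_return(torch.randn(16, 4))
opt.zero_grad(); loss.backward(); opt.step()
```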
arXiv Detail & Related papers (2020-05-16T19:18:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.