Data-driven Hedging of Stock Index Options via Deep Learning
- URL: http://arxiv.org/abs/2111.03477v1
- Date: Fri, 5 Nov 2021 12:53:47 GMT
- Title: Data-driven Hedging of Stock Index Options via Deep Learning
- Authors: Jie Chen, Lingfei Li
- Abstract summary: We develop deep learning models to learn the hedge ratio for S&P500 index options directly from options data.
We compare different combinations of features and show that a feedforward neural network model with time to maturity, Black-Scholes delta and a sentiment variable performs the best in the out-of-sample test.
- Score: 6.952039070065292
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop deep learning models to learn the hedge ratio for S&P500 index
options directly from options data. We compare different combinations of
features and show that a feedforward neural network model with time to
maturity, Black-Scholes delta and a sentiment variable (VIX for calls and index
return for puts) as input features performs the best in the out-of-sample test.
This model significantly outperforms the standard hedging practice that uses
the Black-Scholes delta and a recent data-driven model. Our results demonstrate
the importance of market sentiment for hedging efficiency, a factor previously
ignored in developing hedging strategies.
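The abstract describes a feedforward network that maps three input features (time to maturity, Black-Scholes delta, and a sentiment variable) to a hedge ratio. A minimal sketch of such a forward pass, with an illustrative architecture and random weights rather than the authors' trained model:

```python
import numpy as np

def hedge_ratio(features, W1, b1, W2, b2):
    """One hidden layer with ReLU; a sigmoid output keeps the call
    hedge ratio in [0, 1], as a Black-Scholes call delta would be."""
    h = np.maximum(0.0, features @ W1 + b1)  # hidden activations
    z = h @ W2 + b2
    return 1.0 / (1.0 + np.exp(-z))          # sigmoid

# Illustrative random initialization (not trained weights)
rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.1, size=(3, 8))
b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 1))
b2 = np.zeros(1)

# Features: [time to maturity (years), BS delta, VIX level / 100]
x = np.array([0.25, 0.55, 0.20])
delta_nn = hedge_ratio(x, W1, b1, W2, b2)  # value in (0, 1)
```

For puts, the paper uses the index return as the sentiment feature, and the output range would be adapted accordingly.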
Related papers
- Enhancing Black-Scholes Delta Hedging via Deep Learning [0.0]
This paper proposes a deep delta hedging framework for options, utilizing neural networks to learn the residuals between the hedging function and the implied Black-Scholes delta.
Our empirical analysis demonstrates that learning the residuals, using the mean squared one-step hedging error as the loss function, significantly improves hedging performance over directly learning the hedging function, often by more than 100%.
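The loss described above, the mean squared one-step hedging error with a residual-corrected delta, can be sketched as follows; all numbers below are synthetic illustrations, not the paper's data:

```python
import numpy as np

def one_step_hedging_error(dV, dS, bs_delta, residual):
    """Hedging error over one rebalancing step: change in option value
    minus the hedged change in the underlying, where the hedge ratio is
    the implied Black-Scholes delta plus a learned residual."""
    delta = bs_delta + residual
    return dV - delta * dS

dV = np.array([0.8, -0.5, 0.3])          # option price changes
dS = np.array([1.5, -1.0, 0.6])          # underlying price changes
bs_delta = np.array([0.55, 0.52, 0.50])  # implied BS deltas
residual = np.array([0.02, -0.01, 0.0])  # network outputs (hypothetical)

errors = one_step_hedging_error(dV, dS, bs_delta, residual)
mse = np.mean(errors ** 2)  # the training loss to minimize
```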
arXiv Detail & Related papers (2024-07-28T02:29:51Z)
- GraphCNNpred: A stock market indices prediction using a Graph based deep learning system [0.0]
We present a graph neural network based convolutional neural network (CNN) model that can be applied to diverse sources of data to extract features for predicting the trends of the S&P 500, NASDAQ, DJI, NYSE, and RUSSELL indices.
Experiments show that the associated models improve prediction performance over the baseline algorithms in all indices by about 4% to 15%, in terms of F-measure.
arXiv Detail & Related papers (2024-07-04T09:14:24Z)
- Unpacking DPO and PPO: Disentangling Best Practices for Learning from Preference Feedback [110.16220825629749]
Learning from preference feedback has emerged as an essential step for improving the generation quality and performance of modern language models.
In this work, we identify four core aspects of preference-based learning: preference data, learning algorithm, reward model, and policy training prompts.
Our findings indicate that all aspects are important for performance, with better preference data leading to the largest improvements.
arXiv Detail & Related papers (2024-06-13T16:17:21Z)
- LESS: Selecting Influential Data for Targeted Instruction Tuning [64.78894228923619]
We propose LESS, an efficient algorithm to estimate data influences and perform Low-rank gradiEnt Similarity Search for instruction data selection.
We show that training on a LESS-selected 5% of the data can often outperform training on the full dataset across diverse downstream tasks.
Our method goes beyond surface form cues to identify data that exhibits the necessary reasoning skills for the intended downstream application.
arXiv Detail & Related papers (2024-02-06T19:18:04Z)
- Robust Learning with Progressive Data Expansion Against Spurious Correlation [65.83104529677234]
We study the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features.
Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process.
We propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance.
arXiv Detail & Related papers (2023-06-08T05:44:06Z)
- Volatility forecasting using Deep Learning and sentiment analysis [0.0]
This paper presents a composite model that merges a deep learning approach with sentiment analysis for predicting market volatility.
We then describe a composite forecasting model based on a Long Short-Term Memory (LSTM) neural network that uses historical sentiment and the previous day's volatility to make forecasts.
arXiv Detail & Related papers (2022-10-22T14:54:33Z)
- DeepVol: Volatility Forecasting from High-Frequency Data with Dilated Causal Convolutions [53.37679435230207]
We propose DeepVol, a model based on Dilated Causal Convolutions that uses high-frequency data to forecast day-ahead volatility.
Our empirical results suggest that the proposed deep learning-based approach effectively learns global features from high-frequency data.
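The dilated causal convolution at the core of DeepVol constrains each output to depend only on current and past inputs, with the dilation widening the receptive field. A minimal one-channel sketch on synthetic data (not the paper's architecture or high-frequency dataset):

```python
import numpy as np

def dilated_causal_conv1d(x, w, dilation):
    """Causal 1-D convolution: the output at time t depends only on
    x[t], x[t-d], x[t-2d], ... (the series is left-padded with zeros)."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([
        sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
        for t in range(len(x))
    ])

# Toy input standing in for an intraday return series (synthetic)
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([0.5, 0.5])  # kernel averaging x[t] and x[t-2]
y = dilated_causal_conv1d(x, w, dilation=2)
# y = [0.5, 1.0, 2.0, 3.0, 4.0]: early outputs see only zero padding
```

Stacking such layers with growing dilations (1, 2, 4, ...) is the standard way such models cover long intraday histories with few parameters.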
arXiv Detail & Related papers (2022-09-23T16:13:47Z)
- Hedging option books using neural-SDE market models [6.319314191226118]
We show that neural-SDE market models achieve lower hedging errors than Black--Scholes delta and delta-vega hedging consistently over time.
In addition, hedging using market models leads to similar performance to hedging using Heston models, while the former tends to be more robust during stressed market periods.
arXiv Detail & Related papers (2022-05-31T17:48:18Z)
- Stock Index Prediction using Cointegration test and Quantile Loss [0.0]
We propose a method that achieves better performance in terms of returns by selecting informative factors.
We compare the two RNN variants with quantile loss with only five factors obtained through the cointegration test.
Our experimental results show that our proposed method outperforms the other conventional approaches.
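The quantile (pinball) loss used to train the RNN variants above can be sketched as follows; the inputs are synthetic illustrations, not the paper's factors or data:

```python
import numpy as np

def quantile_loss(y_true, y_pred, q):
    """Pinball loss for quantile q: under-prediction is penalized with
    weight q, over-prediction with weight (1 - q)."""
    e = y_true - y_pred
    return np.mean(np.maximum(q * e, (q - 1.0) * e))

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 1.0, 3.0])
loss_median = quantile_loss(y_true, y_pred, q=0.5)  # symmetric: half the MAE
loss_upper = quantile_loss(y_true, y_pred, q=0.9)   # punishes underestimates
```

Minimizing this loss makes the model estimate the q-th conditional quantile rather than the conditional mean.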
arXiv Detail & Related papers (2021-09-29T16:20:29Z)
- Back2Future: Leveraging Backfill Dynamics for Improving Real-time Predictions in Future [73.03458424369657]
In real-time forecasting in public health, data collection is a non-trivial and demanding task.
The 'backfill' phenomenon and its effect on model performance have barely been studied in the prior literature.
We formulate a novel problem and neural framework Back2Future that aims to refine a given model's predictions in real-time.
arXiv Detail & Related papers (2021-06-08T14:48:20Z)
- Learnable Bernoulli Dropout for Bayesian Deep Learning [53.79615543862426]
Learnable Bernoulli dropout (LBD) is a new model-agnostic dropout scheme that considers the dropout rates as parameters jointly optimized with other model parameters.
LBD leads to improved accuracy and uncertainty estimates in image classification and semantic segmentation.
arXiv Detail & Related papers (2020-02-12T18:57:14Z)
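The key idea of learnable Bernoulli dropout is that the keep probability is a model parameter rather than a fixed hyperparameter. A minimal forward-pass sketch (the paper optimizes the rate jointly with the weights via a continuous relaxation, which is omitted here):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lbd_forward(x, keep_logit, rng):
    """Dropout whose keep rate is a parameter: storing it as a logit
    keeps the probability in (0, 1); inverted scaling preserves the
    expected activation."""
    p_keep = sigmoid(keep_logit)         # learnable keep probability
    mask = rng.random(x.shape) < p_keep  # Bernoulli sample
    return x * mask / p_keep

rng = np.random.default_rng(42)
x = np.ones(10_000)
keep_logit = 1.0  # sigmoid(1.0) ~= 0.73 keep probability (illustrative)
y = lbd_forward(x, keep_logit, rng)
# y.mean() stays near 1.0 thanks to the inverted-dropout scaling
```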
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.