Enhancing Black-Scholes Delta Hedging via Deep Learning
- URL: http://arxiv.org/abs/2407.19367v2
- Date: Sat, 24 Aug 2024 01:44:41 GMT
- Title: Enhancing Black-Scholes Delta Hedging via Deep Learning
- Authors: Chunhui Qiao, Xiangwei Wan
- Abstract summary: This paper proposes a deep delta hedging framework for options, utilizing neural networks to learn the residuals between the hedging function and the implied Black-Scholes delta.
Our empirical analysis demonstrates that learning the residuals, using the mean squared one-step hedging error as the loss function, significantly improves hedging performance over directly learning the hedging function, often by more than 100%.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper proposes a deep delta hedging framework for options, utilizing neural networks to learn the residuals between the hedging function and the implied Black-Scholes delta. This approach leverages the smoother properties of these residuals, enhancing deep learning performance. Utilizing ten years of daily S&P 500 index option data, our empirical analysis demonstrates that learning the residuals, using the mean squared one-step hedging error as the loss function, significantly improves hedging performance over directly learning the hedging function, often by more than 100%. Adding input features when learning the residuals enhances hedging performance more for puts than calls, with market sentiment being less crucial. Furthermore, learning the residuals with three years of data matches the hedging performance of directly learning with ten years of data, proving that our method demands less data.
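As a rough, illustrative sketch (not the authors' code): the hedge ratio is the implied Black-Scholes delta plus a learned residual, and the network is trained on the mean squared one-step hedging error. The network size, feature set, and variable names below are assumptions.

```python
# Minimal sketch of residual delta hedging (assumed architecture and features).
import torch
import torch.nn as nn

class ResidualDeltaNet(nn.Module):
    """Predicts a correction (residual) to the implied Black-Scholes delta."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, features: torch.Tensor, bs_delta: torch.Tensor) -> torch.Tensor:
        # Hedge ratio = implied Black-Scholes delta + learned residual.
        return bs_delta + self.net(features).squeeze(-1)

def one_step_hedging_error(hedge_ratio, dV, dS):
    # Change in option value not offset by the stock position over one step.
    return dV - hedge_ratio * dS

def train_step(model, optimizer, features, bs_delta, dV, dS):
    # features, bs_delta, dV, dS are tensors built from the option panel data.
    hedge_ratio = model(features, bs_delta)
    loss = one_step_hedging_error(hedge_ratio, dV, dS).pow(2).mean()  # MSE hedging-error loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```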
Related papers
- Improving Data-aware and Parameter-aware Robustness for Continual Learning [3.480626767752489]
This paper finds that this insufficiency arises from the ineffective handling of outliers.
We propose a Robust Continual Learning (RCL) method to address this issue.
The proposed method effectively maintains robustness and achieves new state-of-the-art (SOTA) results.
arXiv Detail & Related papers (2024-05-27T11:21:26Z)
- PUMA: margin-based data pruning [51.12154122266251]
We focus on data pruning, where some training samples are removed based on their distance to the model's classification boundary (i.e., the margin).
We propose PUMA, a new data pruning strategy that computes the margin using DeepFool.
We show that PUMA can be used on top of the current state-of-the-art methodology in robustness, and it is able to significantly improve the model performance unlike the existing data pruning strategies.
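As a rough illustration of margin-based pruning (not the paper's implementation): the sketch below uses the gap between the two largest logits as a cheap proxy for distance to the decision boundary, whereas PUMA computes the margin with DeepFool; the proxy, the keep fraction, and which end of the margin distribution to drop are all assumptions.

```python
import torch

@torch.no_grad()
def margin_scores(model, loader, device="cpu"):
    """Proxy margin: gap between the two largest logits (PUMA instead derives the
    margin from DeepFool's adversarial perturbation). Assumes a non-shuffled loader
    so scores stay aligned with dataset indices."""
    model.eval()
    scores = []
    for x, _ in loader:
        logits = model(x.to(device))
        top2 = logits.topk(2, dim=1).values
        scores.append(top2[:, 0] - top2[:, 1])  # small gap ~ close to the boundary
    return torch.cat(scores)

def prune_by_margin(dataset, scores, keep_fraction=0.9):
    """Drop the lowest-margin samples (an illustrative choice) and keep the rest."""
    n_keep = int(keep_fraction * len(scores))
    keep_idx = scores.argsort(descending=True)[:n_keep]
    return torch.utils.data.Subset(dataset, keep_idx.tolist())
```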
arXiv Detail & Related papers (2024-05-10T08:02:20Z)
- Improving Diffusion Models's Data-Corruption Resistance using Scheduled Pseudo-Huber Loss [5.539965805440292]
We propose an alternative diffusion loss function, which can preserve the high quality of generated data while being robust to outliers.
We show that pseudo-Huber loss with the time-dependent parameter exhibits better performance on corrupted datasets in both image and audio domains.
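For reference, the pseudo-Huber loss with parameter $\delta$ is $L_\delta(a) = \delta^2\left(\sqrt{1 + (a/\delta)^2} - 1\right)$, quadratic near zero and linear in the tails. The sketch below uses an assumed linear schedule that shrinks $\delta$ over the diffusion timestep; the paper's exact schedule may differ.

```python
import torch

def pseudo_huber(pred, target, delta):
    """Pseudo-Huber loss: quadratic near zero, linear for large residuals (robust to outliers)."""
    diff = pred - target
    return (delta ** 2) * (torch.sqrt(1.0 + (diff / delta) ** 2) - 1.0)

def scheduled_delta(t, t_max, delta_start=1.0, delta_end=0.01):
    """Illustrative time-dependent parameter: interpolate delta over the diffusion timestep."""
    frac = t.float() / t_max
    return delta_start + frac * (delta_end - delta_start)

# Usage inside a diffusion training step (eps_pred, eps are noise tensors, t the sampled timesteps):
# delta = scheduled_delta(t, t_max=1000).view(-1, 1, 1, 1)
# loss = pseudo_huber(eps_pred, eps, delta).mean()
```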
arXiv Detail & Related papers (2024-03-25T13:02:43Z)
- Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [100.7407460674153]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To mitigate the problem, a line of methods proposes replaying the data of previously experienced tasks when learning new tasks.
However, storing such data is often impractical due to memory constraints or data privacy issues.
As a replacement, data-free data replay methods have been proposed that invert samples from the classification model.
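The "inverting samples from the classification model" step can be sketched as optimizing a synthetic input so that a frozen classifier assigns it to a chosen class. This is a generic model-inversion sketch, not the paper's specific procedure; the step size, iteration count, and regularization weight are assumptions.

```python
import torch

def invert_class_sample(model, target_class, shape=(1, 3, 32, 32),
                        steps=200, lr=0.1, l2_weight=1e-4):
    """Synthesize an input that the frozen classifier assigns to `target_class`."""
    model.eval()
    x = torch.randn(shape, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        logits = model(x)
        # Maximize the target-class logit; a small L2 penalty keeps the input in range.
        loss = -logits[:, target_class].mean() + l2_weight * x.pow(2).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return x.detach()
```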
arXiv Detail & Related papers (2024-01-12T12:51:12Z)
- EsaCL: Efficient Continual Learning of Sparse Models [10.227171407348326]
A key challenge in the continual learning setting is to efficiently learn a sequence of tasks without forgetting how to perform previously learned tasks.
We propose a new method for efficient continual learning of sparse models (EsaCL) that can automatically prune redundant parameters without adversely impacting the model's predictive power.
arXiv Detail & Related papers (2024-01-11T04:59:44Z)
- Temperature Schedules for Self-Supervised Contrastive Methods on Long-Tail Data [87.77128754860983]
In this paper, we analyse the behaviour of one of the most popular variants of self-supervised learning (SSL) on long-tail data.
We find that a large $\tau$ emphasises group-wise discrimination, whereas a small $\tau$ leads to a higher degree of instance discrimination.
We propose to employ a dynamic $\tau$ and show that a simple cosine schedule can yield significant improvements in the learnt representations.
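A cosine schedule for the contrastive temperature $\tau$ can be sketched as below; the period, the bounds, and the way $\tau$ enters the InfoNCE loss follow common practice and are assumptions, not the paper's exact settings.

```python
import math

def cosine_tau(step: int, period: int = 1000, tau_min: float = 0.1, tau_max: float = 1.0) -> float:
    """Oscillate the temperature so training alternates between group-wise
    (large tau) and instance-wise (small tau) discrimination."""
    return tau_min + 0.5 * (tau_max - tau_min) * (1.0 + math.cos(2.0 * math.pi * step / period))

# In an InfoNCE loss, the similarity logits would then be divided by cosine_tau(step).
```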
arXiv Detail & Related papers (2023-03-23T20:37:25Z)
- Alternate Loss Functions for Classification and Robust Regression Can Improve the Accuracy of Artificial Neural Networks [6.452225158891343]
This paper shows that training speed and final accuracy of neural networks can significantly depend on the loss function used to train neural networks.
Two new classification loss functions that significantly improve performance on a wide variety of benchmark tasks are proposed.
arXiv Detail & Related papers (2023-03-17T12:52:06Z)
- Invariance Learning in Deep Neural Networks with Differentiable Laplace Approximations [76.82124752950148]
We develop a convenient gradient-based method for selecting the data augmentation.
We use a differentiable Kronecker-factored Laplace approximation to the marginal likelihood as our objective.
arXiv Detail & Related papers (2022-02-22T02:51:11Z)
- Data-driven Hedging of Stock Index Options via Deep Learning [6.952039070065292]
We develop deep learning models to learn the hedge ratio for S&P500 index options directly from options data.
We compare different combinations of features and show that a feedforward neural network model with time to maturity, Black-Scholes delta and a sentiment variable performs the best in the out-of-sample test.
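For context, the Black-Scholes delta used as an input feature has the closed form $\Delta_{\text{call}} = N(d_1)$ and $\Delta_{\text{put}} = N(d_1) - 1$ with $d_1 = \left[\ln(S/K) + (r + \sigma^2/2)\,\tau\right] / (\sigma\sqrt{\tau})$. The sketch below only builds this feature; it is not the paper's network, and the example inputs are made up.

```python
import numpy as np
from scipy.stats import norm

def black_scholes_delta(S, K, tau, sigma, r=0.0, call=True):
    """Black-Scholes delta from spot S, strike K, time to maturity tau (in years),
    implied volatility sigma, and risk-free rate r."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma ** 2) * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1) if call else norm.cdf(d1) - 1.0

# Example feature vector for a hedging network: [time to maturity, BS delta, sentiment]
# delta = black_scholes_delta(S=4500.0, K=4400.0, tau=30 / 365, sigma=0.18)
```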
arXiv Detail & Related papers (2021-11-05T12:53:47Z)
- Machine Unlearning of Features and Labels [72.81914952849334]
We propose the first scenarios for unlearning features and labels in machine learning models.
Our approach builds on the concept of influence functions and realizes unlearning through closed-form updates of model parameters.
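A minimal sketch of an influence-function-style closed-form update is shown below: a single Newton-type correction that approximately removes one training point from an L2-regularised logistic regression. The model class and the exact form of the update are illustrative assumptions; the paper's closed-form updates may differ.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def unlearn_point(theta, X, y, idx, lam=1e-2):
    """Approximately remove training point `idx` (labels y in {0, 1}) from an
    L2-regularised logistic regression via an influence-function parameter update."""
    n, d = X.shape
    p = sigmoid(X @ theta)
    # Hessian of the average regularised loss at theta.
    W = p * (1.0 - p)
    H = (X.T * W) @ X / n + lam * np.eye(d)
    # Gradient of the removed point's loss.
    g = (sigmoid(X[idx] @ theta) - y[idx]) * X[idx]
    # Downweighting the point by 1/n approximately removes it from the fit.
    return theta + np.linalg.solve(H, g) / n
```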
arXiv Detail & Related papers (2021-08-26T04:42:24Z)
- Deep Stock Predictions [58.720142291102135]
We consider the design of a trading strategy that performs portfolio optimization using Long Short Term Memory (LSTM) neural networks.
We then customize the loss function used to train the LSTM to increase the profit earned.
We find the LSTM model with the customized loss function to have improved performance in the trading bot over a regression baseline such as ARIMA.
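One common way to customize a loss to favor profit is to penalize errors more heavily when the predicted direction of the return is wrong. The sketch below shows one such illustrative loss and a small LSTM return predictor; it is an assumption, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class ReturnLSTM(nn.Module):
    """Predicts the next-step return from a window of past features."""
    def __init__(self, n_features: int, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):  # x: (batch, window, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1]).squeeze(-1)

def profit_aware_loss(pred_return, true_return, penalty=2.0):
    """Squared error, weighted more heavily when the predicted direction is wrong."""
    wrong_direction = (torch.sign(pred_return) != torch.sign(true_return)).float()
    weight = 1.0 + penalty * wrong_direction
    return (weight * (pred_return - true_return) ** 2).mean()
```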
arXiv Detail & Related papers (2020-06-08T23:37:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.