Applying Reinforcement Learning to Option Pricing and Hedging
- URL: http://arxiv.org/abs/2310.04336v1
- Date: Fri, 6 Oct 2023 15:59:12 GMT
- Title: Applying Reinforcement Learning to Option Pricing and Hedging
- Authors: Zoran Stoiljkovic
- Abstract summary: This thesis provides an overview of the recent advances in reinforcement learning in pricing and hedging financial instruments.
It bridges the traditional Black and Scholes (1973) model with novel artificial intelligence algorithms, enabling option pricing and hedging in a completely model-free and data-driven way.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This thesis provides an overview of the recent advances in reinforcement
learning in pricing and hedging financial instruments, with a primary focus on
a detailed explanation of the Q-Learning Black Scholes approach, introduced by
Halperin (2017). This reinforcement learning approach bridges the traditional
Black and Scholes (1973) model with novel artificial intelligence algorithms,
enabling option pricing and hedging in a completely model-free and data-driven
way. The thesis also explores the algorithm's performance under different state
variables and scenarios for a European put option. The results reveal that the
model is an accurate estimator under different levels of volatility and hedging
frequency. Moreover, the method exhibits robust performance across various
levels of the option's moneyness. Lastly, the algorithm incorporates proportional
transaction costs, whose impact on profit and loss varies with the statistical
properties of the state variables.
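As a rough, hedged illustration of this data-driven pricing idea, the sketch below simulates risk-neutral paths and runs a QLBS-style backward recursion for a European put. The parameter values, the zero-risk-aversion simplification, and the covariance-ratio hedge (standing in for the fitted Q-function maximization) are assumptions for illustration, not Halperin's exact specification.

```python
import numpy as np

# QLBS-style backward recursion for a European put. Assumptions: risk-neutral
# GBM paths, zero risk aversion, and a covariance-ratio hedge standing in for
# the action that maximizes the fitted Q-function.
rng = np.random.default_rng(42)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
n_steps, n_paths = 24, 50_000
dt = T / n_steps

z = rng.standard_normal((n_paths, n_steps))
log_ret = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), np.cumsum(log_ret, axis=1)]))

Pi = np.maximum(K - S[:, -1], 0.0)               # terminal condition: put payoff
for t in range(n_steps - 1, -1, -1):
    dS = S[:, t + 1] - np.exp(r * dt) * S[:, t]  # hedge-instrument increment
    Pi = np.exp(-r * dt) * Pi                    # discount one step back
    cov = np.cov(dS, Pi)                         # cross-sectional regression
    a_t = cov[0, 1] / cov[0, 0]                  # variance-minimizing hedge
    Pi = Pi - a_t * dS

print(f"QLBS-style Monte Carlo put price: {Pi.mean():.4f}")
```

For these inputs the Black-Scholes put value is about 6.46; the recursion should land close to it, with the hedging step shrinking the Monte Carlo variance rather than shifting the mean.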
Related papers
- Jump Diffusion-Informed Neural Networks with Transfer Learning for Accurate American Option Pricing under Data Scarcity [1.998862666797032]
This study presents a comprehensive framework for American option pricing consisting of six interrelated modules.
The framework combines nonlinear optimization algorithms, analytical and numerical models, and neural networks to improve pricing performance.
The proposed model shows superior performance in pricing deep out-of-the-money options.
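The six modules are not spelled out in this summary, so the snippet below only sketches the jump-diffusion ingredient: simulating Merton-style paths (diffusion plus compound Poisson lognormal jumps) of the kind such a pricing pipeline would consume. All parameter values are assumptions.

```python
import numpy as np

# Merton-style jump-diffusion path simulation (diffusion plus compound
# Poisson lognormal jumps). All parameter values are illustrative assumptions.
rng = np.random.default_rng(0)
S0, r, sigma, T = 100.0, 0.03, 0.2, 1.0
lam, mu_j, sigma_j = 0.5, -0.1, 0.15    # jump intensity and jump-size law
n_steps, n_paths = 252, 10_000
dt = T / n_steps

# Compensator so the discounted price still drifts at the risk-free rate.
kappa = np.exp(mu_j + 0.5 * sigma_j**2) - 1.0
drift = (r - lam * kappa - 0.5 * sigma**2) * dt

z = rng.standard_normal((n_paths, n_steps))
n_jumps = rng.poisson(lam * dt, size=(n_paths, n_steps))
jump_log = mu_j * n_jumps + sigma_j * np.sqrt(n_jumps) * rng.standard_normal((n_paths, n_steps))
log_paths = np.cumsum(drift + sigma * np.sqrt(dt) * z + jump_log, axis=1)
S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), log_paths]))
```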
arXiv Detail & Related papers (2024-09-26T17:50:12Z)
- Pricing American Options using Machine Learning Algorithms [0.0]
This study investigates the application of machine learning algorithms to pricing American options using Monte Carlo simulations.
Traditional models, such as the Black-Scholes-Merton framework, often fail to adequately address the complexities of American options.
Machine learning is applied by leveraging Monte Carlo methods in conjunction with the Least Squares Method.
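The Monte Carlo plus least-squares combination is in the spirit of the Longstaff-Schwartz method; the sketch below shows that classical baseline for an American put. The quadratic regression basis and all parameter values are illustrative choices, not the paper's setup.

```python
import numpy as np

# Longstaff-Schwartz least-squares Monte Carlo for an American put, the
# classical baseline behind a Monte Carlo + least-squares combination.
# The quadratic basis and all parameter values are illustrative choices.
rng = np.random.default_rng(7)
S0, K, r, sigma, T = 100.0, 100.0, 0.03, 0.2, 1.0
n_steps, n_paths = 50, 100_000
dt = T / n_steps
disc = np.exp(-r * dt)

z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

cashflow = np.maximum(K - S[:, -1], 0.0)        # payoff if held to maturity
for t in range(n_steps - 1, 0, -1):
    cashflow *= disc                             # discount one step back
    itm = (K - S[:, t]) > 0                      # regress on in-the-money paths
    if itm.any():
        x = S[itm, t]
        basis = np.column_stack([np.ones_like(x), x, x**2])
        beta, *_ = np.linalg.lstsq(basis, cashflow[itm], rcond=None)
        continuation = basis @ beta
        exercise = K - x
        stop = exercise > continuation           # early exercise is optimal
        idx = np.where(itm)[0][stop]
        cashflow[idx] = exercise[stop]

print(f"LSM American put price: {disc * cashflow.mean():.4f}")
```

The American price should come out slightly above the roughly 6.46 European value for these inputs, reflecting the early-exercise premium.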
arXiv Detail & Related papers (2024-09-05T02:52:11Z)
- Feature Alignment: Rethinking Efficient Active Learning via Proxy in the Context of Pre-trained Models [5.2976735459795385]
Fine-tuning the pre-trained model with active learning holds promise for reducing annotation costs.
Recent research has proposed proxy-based active learning, which pre-computes features to reduce computational costs.
This approach often incurs a significant loss in active learning performance, which may even outweigh the computational cost savings.
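A minimal sketch of the proxy idea, assuming features are pre-computed once by a frozen encoder and a cheap linear head scores the unlabeled pool; the entropy criterion and all names here are illustrative, not the paper's method.

```python
import numpy as np

# Proxy-based selection sketch: features come from a frozen pre-trained
# encoder (computed once), and a cheap linear head scores the pool.
# The entropy criterion and all names here are illustrative assumptions.
def select_by_proxy(pool_features, proxy_weights, budget):
    logits = pool_features @ proxy_weights
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    return np.argsort(-entropy)[:budget]   # highest-entropy points first
```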
arXiv Detail & Related papers (2024-03-02T06:01:34Z)
- BAL: Balancing Diversity and Novelty for Active Learning [53.289700543331925]
We introduce a novel framework, Balancing Active Learning (BAL), which constructs adaptive sub-pools to balance diverse and uncertain data.
Our approach outperforms all established active learning methods on widely recognized benchmarks by 1.20%.
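BAL's adaptive sub-pools are not detailed here; the sketch below only illustrates the generic trade-off of uncertainty against diversity, with an assumed additive score and a greedy batch construction.

```python
import numpy as np

# Generic uncertainty/diversity trade-off for building a labeling batch.
# The additive score and the weight beta are illustrative assumptions,
# not BAL's actual adaptive sub-pool construction.
def select_batch(pool_feats, labeled_feats, uncertainty, budget, beta=1.0):
    labeled = [np.asarray(f) for f in labeled_feats]
    chosen = []
    for _ in range(budget):
        ref = np.stack(labeled)
        # Diversity: distance from each candidate to its nearest labeled point.
        dists = np.linalg.norm(pool_feats[:, None, :] - ref[None, :, :], axis=-1).min(axis=1)
        score = np.asarray(uncertainty, dtype=float) + beta * dists
        if chosen:
            score[chosen] = -np.inf          # never re-select a point
        i = int(np.argmax(score))
        chosen.append(i)
        labeled.append(pool_feats[i])        # greedy, k-center-style update
    return chosen
```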
arXiv Detail & Related papers (2023-12-26T08:14:46Z)
- Overcoming Overconfidence for Active Learning [1.2776312584227847]
We present two novel methods to address the problem of overconfidence that arises in the active learning scenario.
The first is an augmentation strategy named Cross-Mix-and-Mix (CMaM), which aims to calibrate the model by expanding the limited training distribution.
The second is a selection strategy named Ranked Margin Sampling (RankedMS), which prevents choosing data that leads to overly confident predictions.
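RankedMS is described only at a high level; below is a hedged sketch of classical margin-based ranking, which queries the least confident points and thereby avoids the most overconfident ones. The exact ranking rule in RankedMS is not given in this summary.

```python
import numpy as np

# Margin-based ranking sketch: score each unlabeled point by the gap between
# its top two predicted class probabilities and query the smallest margins,
# i.e., the points the model is least (not most) confident about.
def ranked_margin_sampling(probs, budget):
    sorted_probs = np.sort(probs, axis=1)
    margin = sorted_probs[:, -1] - sorted_probs[:, -2]
    return np.argsort(margin)[:budget]
```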
arXiv Detail & Related papers (2023-08-21T09:04:54Z)
- ASPEST: Bridging the Gap Between Active Learning and Selective Prediction [56.001808843574395]
Selective prediction aims to learn a reliable model that abstains from making predictions when uncertain.
Active learning aims to lower the overall labeling effort, and hence human dependence, by querying the most informative examples.
In this work, we introduce a new learning paradigm, active selective prediction, which aims to query more informative samples from the shifted target domain.
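To make the selective-prediction half concrete, here is a minimal confidence-threshold abstention sketch; the softmax-confidence score and threshold are assumptions, and ASPEST couples abstention with active queries rather than using a fixed rule like this.

```python
import numpy as np

# Minimal selective prediction: abstain whenever softmax confidence falls
# below a threshold tau (both the score and tau are illustrative choices).
def selective_predict(probs, tau=0.8):
    conf = probs.max(axis=1)
    preds = probs.argmax(axis=1)
    return np.where(conf >= tau, preds, -1)   # -1 marks an abstention
```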
arXiv Detail & Related papers (2023-04-07T23:51:07Z)
- Improved Regret for Efficient Online Reinforcement Learning with Linear Function Approximation [69.0695698566235]
We study reinforcement learning with linear function approximation and adversarially changing cost functions.
We present a computationally efficient policy optimization algorithm for the challenging general setting of unknown dynamics and bandit feedback.
arXiv Detail & Related papers (2023-01-30T17:26:39Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Instance-optimality in optimal value estimation: Adaptivity via variance-reduced Q-learning [99.34907092347733]
We analyze the problem of estimating optimal $Q$-value functions for a discounted Markov decision process with discrete states and actions.
Using a local minimax framework, we show that an instance-dependent functional arises in lower bounds on the accuracy of any estimation procedure.
In the other direction, we establish the sharpness of our lower bounds, up to factors logarithmic in the state and action spaces, by analyzing a variance-reduced version of $Q$-learning.
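As a hedged illustration of the variance-reduction recipe (recenter noisy Bellman updates around an accurately estimated reference point), the sketch below runs epoch-based variance-reduced Q-learning on a toy tabular MDP with a generative model. The MDP, step sizes, and epoch lengths are assumptions, not the paper's construction.

```python
import numpy as np

# Epoch-based variance-reduced Q-learning on a toy tabular MDP with a
# generative model. The MDP, step sizes, and epoch lengths are assumptions.
rng = np.random.default_rng(1)
nS, nA, gamma = 5, 2, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = next-state distribution
R = rng.uniform(0.0, 1.0, size=(nS, nA))        # deterministic rewards

def noisy_bellman(Q, next_state):
    # One-sample empirical Bellman operator, applied entrywise.
    return R + gamma * Q[next_state].max(axis=-1)

Q = np.zeros((nS, nA))
for epoch in range(5):
    Q_bar = Q.copy()
    # Recentering term: a high-accuracy Monte Carlo estimate of the Bellman
    # operator at the fixed reference point Q_bar.
    mc = np.array([[rng.choice(nS, p=P[s, a], size=2000) for a in range(nA)]
                   for s in range(nS)])
    T_bar = R + gamma * Q_bar[mc].max(axis=-1).mean(axis=-1)
    for k in range(1, 501):
        lam = 1.0 / k
        ns = np.array([[rng.choice(nS, p=P[s, a]) for a in range(nA)]
                       for s in range(nS)])
        # The noise in the two one-sample operators largely cancels, leaving
        # an update centered at the accurate estimate T_bar.
        Q = (1 - lam) * Q + lam * (noisy_bellman(Q, ns)
                                   - noisy_bellman(Q_bar, ns) + T_bar)
```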
arXiv Detail & Related papers (2021-06-28T00:38:54Z)
- Toward Optimal Probabilistic Active Learning Using a Bayesian Approach [4.380488084997317]
Active learning aims to reduce labeling costs through efficient and effective allocation of costly labeling resources.
By reformulating existing selection strategies within our proposed model, we can explain which aspects are not covered by the current state of the art.
arXiv Detail & Related papers (2020-06-02T15:59:42Z)
- Cost-Sensitive Portfolio Selection via Deep Reinforcement Learning [100.73223416589596]
We propose a cost-sensitive portfolio selection method with deep reinforcement learning.
Specifically, a novel two-stream portfolio policy network is devised to extract both price series patterns and asset correlations.
A new cost-sensitive reward function is developed to maximize the accumulated return and constrain both costs via reinforcement learning.
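The two-stream network is beyond a short sketch, but the cost-sensitive reward idea can be illustrated as a log-return penalized by proportional transaction costs on turnover; the functional form and cost rate here are assumptions, not the paper's exact reward.

```python
import numpy as np

# Cost-sensitive reward sketch: portfolio log-return net of proportional
# transaction costs on turnover. The functional form and the cost rate c
# are illustrative assumptions, not the paper's exact reward.
def cost_sensitive_reward(w_new, w_old, price_relatives, c=0.0025):
    gross = float(np.dot(w_new, price_relatives))   # period growth factor
    turnover = float(np.abs(w_new - w_old).sum())   # fraction of wealth traded
    return np.log(gross * (1.0 - c * turnover))

# Example: shifting 10% of wealth between two assets incurs a small cost
# that is netted out of the log-return reward.
reward = cost_sensitive_reward(np.array([0.6, 0.4]),
                               np.array([0.5, 0.5]),
                               np.array([1.01, 0.99]))
```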
arXiv Detail & Related papers (2020-03-06T06:28:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.