Mathematical Modeling of Option Pricing with an Extended Black-Scholes Framework
- URL: http://arxiv.org/abs/2504.03175v2
- Date: Sun, 13 Apr 2025 12:42:54 GMT
- Title: Mathematical Modeling of Option Pricing with an Extended Black-Scholes Framework
- Authors: Nikhil Shivakumar Nayak
- Abstract summary: This study investigates enhancing option pricing by extending the Black-Scholes model to include stochastic volatility and interest rate variability. The extended Black-Scholes model and a machine learning-based LSTM model are developed and evaluated for pricing Google stock options.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study investigates enhancing option pricing by extending the Black-Scholes model to include stochastic volatility and interest rate variability within the Partial Differential Equation (PDE). The PDE is solved using the finite difference method. The extended Black-Scholes model and a machine learning-based LSTM model are developed and evaluated for pricing Google stock options. Both models were backtested using historical market data. While the LSTM model exhibited higher predictive accuracy, the finite difference method demonstrated superior computational efficiency. This work provides insights into model performance under varying market conditions and emphasizes the potential of hybrid approaches for robust financial modeling.
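The paper does not reproduce its discretization here, so the following is only a minimal sketch of the baseline approach it extends: an explicit finite-difference solve of the classical constant-parameter Black-Scholes PDE for a European call. Grid sizes and parameter values are illustrative assumptions, not values from the paper, whose extended PDE additionally lets volatility and the interest rate vary.

```python
import numpy as np

def bs_call_explicit_fd(K=100.0, r=0.05, sigma=0.2, T=1.0,
                        S_max=300.0, n_S=300, n_t=30000):
    """Price a European call by marching the Black-Scholes PDE backwards in
    time on a uniform grid with an explicit scheme. Constant r and sigma are
    a simplification of the paper's extended PDE."""
    dS, dt = S_max / n_S, T / n_t
    S = np.linspace(0.0, S_max, n_S + 1)
    V = np.maximum(S - K, 0.0)                     # payoff at maturity

    for n in range(1, n_t + 1):
        tau = n * dt                               # time remaining to maturity
        delta = (V[2:] - V[:-2]) / (2.0 * dS)                 # dV/dS
        gamma = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dS**2      # d2V/dS2
        V[1:-1] += dt * (0.5 * sigma**2 * S[1:-1]**2 * gamma
                         + r * S[1:-1] * delta - r * V[1:-1])
        V[0] = 0.0                                 # call is worthless at S = 0
        V[-1] = S_max - K * np.exp(-r * tau)       # deep in-the-money boundary
    return S, V

S, V = bs_call_explicit_fd()
print(np.interp(100.0, S, V))   # ~10.45, matching the closed-form price here
```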
Related papers
- The AI Black-Scholes: Finance-Informed Neural Network [11.339331636751329]
In option pricing, existing models are typically classified into principle-driven methods and data-driven approaches. Data-driven models excel at capturing market data trends but, in contrast to principle-driven methods, often lack alignment with core financial principles. This work proposes a hybrid approach that addresses these limitations by integrating the strengths of both principled and data-driven methodologies.
arXiv Detail & Related papers (2024-12-15T22:40:40Z)
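The entry above does not detail how the principled and data-driven parts are combined. One standard way to realize such a hybrid, sketched below under assumed PyTorch conventions, is to fit quoted option prices while penalising the Black-Scholes PDE residual of the network; the network, inputs, and weighting `lam` are placeholders for illustration, not details taken from the cited paper.

```python
import torch

def hybrid_loss(net, S, t, market_price, r=0.05, sigma=0.2, lam=1.0):
    """Data-fit loss plus a Black-Scholes PDE-residual penalty.

    `net` maps (S, t) to an option price; all names and defaults here are
    illustrative placeholders, not the cited paper's construction."""
    S = S.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    V = net(torch.stack([S, t], dim=-1)).squeeze(-1)

    # Data-driven term: match observed option prices.
    data_loss = torch.mean((V - market_price) ** 2)

    # Principle-driven term: Black-Scholes PDE residual via autograd.
    V_t = torch.autograd.grad(V.sum(), t, create_graph=True)[0]
    V_S = torch.autograd.grad(V.sum(), S, create_graph=True)[0]
    V_SS = torch.autograd.grad(V_S.sum(), S, create_graph=True)[0]
    residual = V_t + 0.5 * sigma**2 * S**2 * V_SS + r * S * V_S - r * V
    pde_loss = torch.mean(residual ** 2)

    return data_loss + lam * pde_loss

# Toy usage with a small network and stand-in data.
net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
S = 80.0 + 40.0 * torch.rand(128)        # stand-in spot prices
t = torch.rand(128)                      # stand-in times
quotes = 20.0 * torch.rand(128)          # stand-in market quotes
hybrid_loss(net, S, t, quotes).backward()
```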
- On conditional diffusion models for PDE simulations [53.01911265639582]
We study score-based diffusion models for forecasting and assimilation of sparse observations.
We propose an autoregressive sampling approach that significantly improves performance in forecasting.
We also propose a new training strategy for conditional score-based models that achieves stable performance over a range of history lengths.
arXiv Detail & Related papers (2024-10-21T18:31:04Z)
- Black-Box Tuning of Vision-Language Models with Effective Gradient Approximation [71.21346469382821]
We introduce collaborative black-box tuning (CBBT) for both textual prompt optimization and output feature adaptation for black-box models.
CBBT is extensively evaluated on eleven downstream benchmarks and achieves remarkable improvements compared to existing black-box VL adaptation methods.
arXiv Detail & Related papers (2023-12-26T06:31:28Z)
- COPlanner: Plan to Roll Out Conservatively but to Explore Optimistically for Model-Based RL [50.385005413810084]
Dyna-style model-based reinforcement learning contains two phases: model rollouts that generate samples for policy learning, and real-environment exploration.
COPlanner is a planning-driven framework for model-based methods that addresses the problem of inaccurately learned dynamics models.
arXiv Detail & Related papers (2023-10-11T06:10:07Z)
- Estimating risks of option books using neural-SDE market models [6.319314191226118]
We use an arbitrage-free neural-SDE market model to produce realistic scenarios for the joint dynamics of multiple European options on a single underlying.
We show that our models are more computationally efficient and accurate for evaluating the Value-at-Risk (VaR) of option portfolios, with better coverage performance and less procyclicality than standard filtered historical simulation approaches.
arXiv Detail & Related papers (2022-02-15T02:39:42Z)
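As a rough illustration of the risk-measure step described in the entry above (the neural-SDE scenario generator itself is not reproduced), the sketch below computes one-period Value-at-Risk as a quantile of simulated portfolio losses; the stand-in P&L scenarios are purely illustrative.

```python
import numpy as np

def value_at_risk(pnl_scenarios, alpha=0.99):
    """One-period Value-at-Risk: the alpha-quantile of simulated losses.

    `pnl_scenarios` stands in for portfolio profit-and-loss produced by a
    scenario generator (e.g. a neural-SDE market model, not shown here)."""
    losses = -np.asarray(pnl_scenarios)
    return np.quantile(losses, alpha)

# Toy usage: 10,000 stand-in P&L scenarios for an option book.
rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=1.5, size=10_000)
print(f"99% VaR: {value_at_risk(pnl, 0.99):.2f}")
```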
- Arbitrage-free neural-SDE market models [6.145654286950278]
We develop a nonparametric model for the European options book respecting underlying financial constraints.
We study the inference problem where a model is learnt from discrete time series data of stock and option prices.
We use neural networks as function approximators for the drift and diffusion of the modelled SDE system.
arXiv Detail & Related papers (2021-05-24T00:53:10Z)
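A minimal sketch of the modelling idea in the entry above, assuming PyTorch: small neural networks play the role of the SDE's drift and diffusion, and paths are simulated with Euler-Maruyama. The architectures, dimensions, and Softplus positivity constraint are assumptions of this sketch, not the construction used in the cited paper.

```python
import torch

# Neural drift and diffusion for a one-dimensional SDE (illustrative sizes).
drift = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                            torch.nn.Linear(32, 1))
diffusion = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                                torch.nn.Linear(32, 1), torch.nn.Softplus())

def simulate_paths(x0, n_steps=100, dt=1.0 / 252, n_paths=1000):
    """Euler-Maruyama: X_{k+1} = X_k + mu(X_k) dt + sigma(X_k) sqrt(dt) Z_k."""
    x = torch.full((n_paths, 1), x0)
    paths = [x]
    for _ in range(n_steps):
        z = torch.randn(n_paths, 1)
        x = x + drift(x) * dt + diffusion(x) * dt**0.5 * z
        paths.append(x)
    return torch.stack(paths, dim=1)      # shape: (n_paths, n_steps + 1, 1)

paths = simulate_paths(x0=100.0)
print(paths.shape)
```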
- Design of Dynamic Experiments for Black-Box Model Discrimination [72.2414939419588]
Consider a dynamic model discrimination setting where we wish to choose: (i) the best mechanistic, time-varying model and (ii) the best model parameter estimates.
For rival mechanistic models where we have access to gradient information, we extend existing methods to incorporate a wider range of problem uncertainty.
We replace these black-box models with Gaussian process surrogate models, thereby extending the model discrimination setting to additionally incorporate rival black-box models.
arXiv Detail & Related papers (2021-02-07T11:34:39Z)
- Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z)
- Robust pricing and hedging via neural SDEs [0.0]
We develop and analyse novel algorithms needed for efficient use of neural SDEs.
We find robust bounds for prices of derivatives and the corresponding hedging strategies while incorporating relevant market data.
Neural SDEs allow consistent calibration under both the risk-neutral and the real-world measures.
arXiv Detail & Related papers (2020-07-08T14:33:17Z)
- Hedging and machine learning driven crude oil data analysis using a refined Barndorff-Nielsen and Shephard model [0.38073142980732994]
In this paper, a refined Barndorff-Nielsen and Shephard (BN-S) model is implemented to find an optimal hedging strategy for commodity markets.
The refinement leads to the extraction of a deterministic parameter from the empirical data set.
With the implementation of this parameter in the refined model, the resulting model performs much better than the classical BN-S model.
arXiv Detail & Related papers (2020-04-29T15:45:58Z)
- Learnable Bernoulli Dropout for Bayesian Deep Learning [53.79615543862426]
Learnable Bernoulli dropout (LBD) is a new model-agnostic dropout scheme that considers the dropout rates as parameters jointly optimized with other model parameters.
LBD leads to improved accuracy and uncertainty estimates in image classification and semantic segmentation.
arXiv Detail & Related papers (2020-02-12T18:57:14Z)
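To make the idea in the entry above concrete, the sketch below implements a dropout layer whose rate is a trainable parameter, using a relaxed (concrete) Bernoulli mask so the rate receives gradients. The relaxation is an assumption of this sketch and may differ from the gradient estimator used in the cited paper.

```python
import torch

class LearnableDropout(torch.nn.Module):
    """Dropout with a trainable drop rate, optimized jointly with the model.
    The relaxed-Bernoulli mask is an illustrative choice."""
    def __init__(self, init_rate=0.5, temperature=0.1):
        super().__init__()
        # Parameterise the rate through a logit so it stays in (0, 1).
        self.logit = torch.nn.Parameter(torch.logit(torch.tensor(init_rate)))
        self.temperature = temperature

    def forward(self, x):
        rate = torch.sigmoid(self.logit)          # probability of dropping
        if not self.training:
            return x * (1.0 - rate)               # expected value at test time
        u = torch.rand_like(x).clamp(1e-6, 1.0 - 1e-6)
        # Relaxed Bernoulli sample of the keep mask (keep prob = 1 - rate).
        keep_logits = (torch.log(1.0 - rate) - torch.log(rate)
                       + torch.log(u) - torch.log(1.0 - u))
        mask = torch.sigmoid(keep_logits / self.temperature)
        return x * mask

layer = LearnableDropout()
out = layer(torch.randn(4, 8))   # the rate is learned alongside other weights
```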