Tensor train representations of Greeks for Fourier-based pricing of multi-asset options
- URL: http://arxiv.org/abs/2507.08482v1
- Date: Fri, 11 Jul 2025 10:51:17 GMT
- Title: Tensor train representations of Greeks for Fourier-based pricing of multi-asset options
- Authors: Rihito Sakurai, Koichi Miyamoto, Tsuyoshi Okubo
- Abstract summary: Efficient computation of Greeks for multi-asset options remains a key challenge in quantitative finance. We propose a framework to compute Greeks in a single evaluation of a tensor train (TT). Numerical experiments on a five-asset min-call option in the Black-Scholes model show significant speed-ups of up to about $10^{5} \times$ over Monte Carlo simulation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Efficient computation of Greeks for multi-asset options remains a key challenge in quantitative finance. While Monte Carlo (MC) simulation is widely used, it suffers from a large sample complexity when high accuracy is required. We propose a framework to compute Greeks in a single evaluation of a tensor train (TT), which is obtained by compressing the Fourier transform (FT)-based pricing function via TT learning using tensor cross interpolation. Based on this TT representation, we introduce two approaches to compute Greeks: a numerical differentiation (ND) approach that applies a numerical differential operator to one tensor core, and an analytical (AN) approach that constructs the TT of closed-form differentiation expressions of FT-based pricing. Numerical experiments on a five-asset min-call option in the Black-Scholes model show significant speed-ups of up to about $10^{5} \times$ over MC while maintaining comparable accuracy. The ND approach matches or exceeds the accuracy of the AN approach and requires lower computational complexity for constructing the TT representation, making it the preferred choice.
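The ND approach can be pictured as follows: once the pricing function has been compressed into TT cores over a grid of spot prices, applying a finite-difference operator to the physical index of a single core yields a Greek with respect to that asset. The snippet below is a minimal sketch of this idea, not the paper's implementation; the uniform grid, the spacing `h`, and the helper names (`tt_evaluate`, `central_difference_core`, `tt_delta`) are assumptions introduced here for illustration.

```python
import numpy as np

def tt_evaluate(cores, idx):
    """Contract a tensor train at a multi-index idx = (i_1, ..., i_d).

    Each core has shape (r_{k-1}, n_k, r_k), with boundary ranks equal to 1.
    """
    v = cores[0][:, idx[0], :]                 # shape (1, r_1)
    for core, i in zip(cores[1:], idx[1:]):
        v = v @ core[:, i, :]                  # contract along the TT rank
    return v.item()

def central_difference_core(core, h):
    """Apply a finite-difference operator along the physical (grid) index of
    one TT core: central differences inside, one-sided at the boundaries."""
    _, n, _ = core.shape
    D = np.zeros((n, n))
    for i in range(1, n - 1):
        D[i, i - 1], D[i, i + 1] = -0.5 / h, 0.5 / h
    D[0, 0], D[0, 1] = -1.0 / h, 1.0 / h
    D[-1, -2], D[-1, -1] = -1.0 / h, 1.0 / h
    # new_core[a, i, b] = sum_j D[i, j] * core[a, j, b]
    return np.einsum("ij,ajb->aib", D, core)

def tt_delta(cores, k, idx, h):
    """Greek with respect to asset k: swap in the differentiated core,
    then contract the train at the grid point idx as usual."""
    diff_cores = list(cores)
    diff_cores[k] = central_difference_core(cores[k], h)
    return tt_evaluate(diff_cores, idx)
```

In this picture, the price and every first-order Greek come from the same learned TT: each Greek only replaces one core with its differentiated counterpart before the contraction, which is why a single TT evaluation suffices.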
Related papers
- Tensor Decomposition Networks for Fast Machine Learning Interatomic Potential Computations [63.945006006152035]
Tensor decomposition networks (TDNs) achieve competitive performance with a dramatic speedup in computations. We evaluate TDNs on PubChemQCR, a newly curated molecular relaxation dataset containing 105 million DFT-calculated snapshots.
arXiv Detail & Related papers (2025-07-01T18:46:27Z) - Fractured Chain-of-Thought Reasoning [61.647243580650446]
We introduce Fractured Sampling, a unified inference-time strategy that interpolates between full CoT and solution-only sampling. We show that Fractured Sampling consistently achieves superior accuracy-cost trade-offs, yielding steep log-linear scaling gains in Pass@k versus token budget.
arXiv Detail & Related papers (2025-05-19T11:30:41Z) - Benefits of Learning Rate Annealing for Tuning-Robustness in Stochastic Optimization [29.174036532175855]
The learning rate in gradient methods is a critical hyperparameter that is notoriously costly to tune via standard grid search. We identify a theoretical advantage of learning rate annealing schemes that decay the learning rate to zero, such as the widely-used cosine schedule.
arXiv Detail & Related papers (2025-03-12T14:06:34Z) - Unveiling the Statistical Foundations of Chain-of-Thought Prompting Methods [59.779795063072655]
Chain-of-Thought (CoT) prompting and its variants have gained popularity as effective methods for solving multi-step reasoning problems.
We analyze CoT prompting from a statistical estimation perspective, providing a comprehensive characterization of its sample complexity.
arXiv Detail & Related papers (2024-08-25T04:07:18Z) - Learning parameter dependence for Fourier-based option pricing with tensor trains [0.0]
We propose a pricing method where, by a tensor train learning algorithm, we build tensor trains that approximate functions appearing in FT-based option pricing. As a benchmark test, we run the proposed method to price a multi-asset option for various values of volatilities and present asset prices. We show that, in the tested cases involving up to 11 assets, the proposed method outperforms Monte Carlo-based option pricing with $10^6$ paths in terms of computational complexity.
arXiv Detail & Related papers (2024-04-17T01:57:19Z) - D4FT: A Deep Learning Approach to Kohn-Sham Density Functional Theory [79.50644650795012]
We propose a deep learning approach to solve Kohn-Sham Density Functional Theory (KS-DFT)
We prove that such an approach has the same expressivity as the SCF method, yet reduces the computational complexity.
In addition, we show that our approach enables us to explore more complex neural-based wave functions.
arXiv Detail & Related papers (2023-03-01T10:38:10Z) - Stochastic optimal transport in Banach Spaces for regularized estimation of multivariate quantiles [0.0]
We introduce a new algorithm for solving entropic optimal transport (EOT) between two absolutely continuous probability measures $\mu$ and $\nu$. We study the almost sure convergence of our algorithm, which takes its values in an infinite-dimensional Banach space.
arXiv Detail & Related papers (2023-02-02T10:02:01Z) - Matching Pursuit Based Scheduling for Over-the-Air Federated Learning [67.59503935237676]
This paper develops a class of low-complexity device scheduling algorithms for over-the-air federated learning via the method of matching pursuit.
Compared to the state-of-the-art scheme, the proposed scheme has drastically lower computational complexity.
The efficiency of the proposed scheme is confirmed via experiments on the CIFAR dataset.
arXiv Detail & Related papers (2022-06-14T08:14:14Z) - Permutation Compressors for Provably Faster Distributed Nonconvex Optimization [68.8204255655161]
We show that the MARINA method of Gorbunov et al (2021) can be considered a state-of-the-art method in terms of theoretical communication complexity.
We extend the theory of MARINA to support potentially correlated compressors, which takes the method beyond the classical independent compressors setting.
arXiv Detail & Related papers (2021-10-07T09:38:15Z) - Provable Tensor-Train Format Tensor Completion by Riemannian Optimization [22.166436026482984]
We provide the first theoretical guarantees of the convergence of the RGrad algorithm for TT-format tensor completion.
We also propose a novel approach, referred to as the sequential second-order moment method.
arXiv Detail & Related papers (2021-08-27T08:13:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.