Learning parameter dependence for Fourier-based option pricing with tensor trains
- URL: http://arxiv.org/abs/2405.00701v7
- Date: Thu, 31 Oct 2024 09:07:34 GMT
- Title: Learning parameter dependence for Fourier-based option pricing with tensor trains
- Authors: Rihito Sakurai, Haruto Takahashi, Koichi Miyamoto
- Abstract summary: We propose a pricing method, where, by a tensor train learning algorithm, we build tensor trains that approximate functions appearing in FT-based option pricing.
As a benchmark test, we run the proposed method to price a multi-asset option for various values of the volatilities and present asset prices.
We show that, in the tested cases involving up to 11 assets, the proposed method outperforms Monte Carlo-based option pricing with $10^6$ paths in terms of computational complexity.
- Abstract: A long-standing issue in mathematical finance is the speed-up of option pricing, especially for multi-asset options. A recent study has proposed using tensor train learning algorithms to speed up Fourier transform (FT)-based option pricing, exploiting the ability of tensor trains to compress high-dimensional tensors. Another use of tensor trains is to compress functions, including their parameter dependence. Here, we propose a pricing method in which, by a tensor train learning algorithm, we build tensor trains that approximate the functions appearing in FT-based option pricing together with their parameter dependence, and efficiently calculate the option price for varying input parameters. As a benchmark test, we run the proposed method to price a multi-asset option for various values of the volatilities and present asset prices. We show that, in the tested cases involving up to 11 assets, the proposed method outperforms Monte Carlo-based option pricing with $10^6$ paths in terms of computational complexity while achieving better accuracy.
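To make the idea concrete, the sketch below shows how an option price could be read off from a tensor train once its cores have been learned: parameter modes (e.g., grid indices for volatilities and present asset prices) are fixed with one-hot vectors, while the remaining Fourier-grid modes are summed against quadrature weights. This is only a minimal NumPy illustration of a TT contraction, not the authors' implementation; the core shapes, grid sizes, ranks, and weights are hypothetical, and the TT learning step (e.g., a cross-interpolation-style algorithm) is not shown.

```python
import numpy as np

def tt_contract(cores, mode_vectors):
    """Contract a tensor train core by core.

    cores        : list of TT cores, cores[i] has shape (r_{i-1}, n_i, r_i)
    mode_vectors : one vector per mode; a one-hot vector fixes that mode to a
                   single grid index (a parameter value), while quadrature
                   weights sum over that mode (a Fourier-grid dimension).
    """
    result = np.ones((1, 1))
    for core, v in zip(cores, mode_vectors):
        mat = np.einsum('ijk,j->ik', core, v)  # contract the physical index
        result = result @ mat                  # accumulate along the train
    return result[0, 0]

# Toy illustration with random cores (ranks and grid sizes are hypothetical;
# in the actual method the cores would be produced by a TT learning algorithm).
d_param, d_fourier, n, r = 2, 3, 8, 4
d = d_param + d_fourier
ranks = [1] + [r] * (d - 1) + [1]
cores = [np.random.randn(ranks[i], n, ranks[i + 1]) for i in range(d)]

param_vectors = [np.eye(n)[2], np.eye(n)[5]]               # fix parameter modes
weights = [np.full(n, 1.0 / n) for _ in range(d_fourier)]  # toy quadrature weights
price_like_value = tt_contract(cores, param_vectors + weights)
```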
Related papers
- FTuner: A Fast Dynamic Shape Tensors Program Auto-Tuner for Deep Learning Compilers [6.194917248699324]
This paper proposes a new technique for deep learning compilers called FTuner.
Experiments show that FTuner achieves operator and end-to-end performance comparable to vendor libraries.
arXiv Detail & Related papers (2024-07-31T08:05:33Z)
- SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors [80.6043267994434]
We propose SVFT, a simple approach that fundamentally differs from existing methods.
SVFT updates $W$ as a sparse combination of outer products of its singular vectors, training only the coefficients (scales) of these sparse combinations.
Experiments on language and vision benchmarks show that SVFT recovers up to 96% of full fine-tuning performance while training only 0.006% to 0.25% of the parameters.
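As a rough illustration of the mechanism described above (not the paper's implementation), the NumPy sketch below forms an SVFT-style update: the singular vectors of a weight matrix stay frozen and only a sparse coefficient matrix (diagonal here for simplicity) is treated as trainable. The matrix size, sparsity pattern, and initialization are hypothetical.

```python
import numpy as np

W = np.random.randn(64, 32)                       # stand-in for a pretrained weight
U, s, Vt = np.linalg.svd(W, full_matrices=False)  # frozen singular vectors

# Trainable coefficients restricted to a sparse pattern (diagonal here);
# only these entries would receive gradient updates during fine-tuning.
M = np.diag(0.01 * np.random.randn(s.size))

W_adapted = W + U @ M @ Vt  # sparse combination of outer products u_i v_j^T
```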
arXiv Detail & Related papers (2024-05-30T01:27:43Z)
- AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models [5.981614673186146]
We present a novel Parameter-Efficient Fine-Tuning (PEFT) method, dubbed Adaptive Freezing of Low Rank Adaptation (AFLoRA).
Specifically, we add a parallel path of trainable low-rank matrices, namely a down-projection and an up-projection matrix, each of which is followed by a feature transformation vector.
Our experimental results demonstrate that we can achieve state-of-the-art performance with an average improvement of up to $0.85\%$ as evaluated on the GLUE benchmark.
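A minimal sketch of the parallel low-rank path described above, forward pass only: the adaptive freezing schedule and the training loop are omitted, and all shapes and initializations are hypothetical rather than taken from the paper.

```python
import numpy as np

d_in, d_out, r = 32, 64, 4
W = np.random.randn(d_out, d_in)       # frozen pretrained weight
A = 0.01 * np.random.randn(r, d_in)    # trainable down-projection
B = 0.01 * np.random.randn(d_out, r)   # trainable up-projection
s_a = np.ones(r)                       # feature transformation vector after A
s_b = np.ones(d_out)                   # feature transformation vector after B

def forward(x):
    # Frozen path plus the parallel low-rank path; AFLoRA would adaptively
    # freeze subsets of the trainable pieces during fine-tuning.
    return W @ x + s_b * (B @ (s_a * (A @ x)))

y = forward(np.random.randn(d_in))
```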
arXiv Detail & Related papers (2024-03-20T03:07:50Z)
- Dynamic Layer Tying for Parameter-Efficient Transformers [65.268245109828]
We employ Reinforcement Learning to select layers during training and tie them together.
This facilitates weight sharing, reduces the number of trainable parameters, and also serves as an effective regularization technique.
In particular, memory consumption during training is up to one order of magnitude lower than with conventional training.
arXiv Detail & Related papers (2024-01-23T14:53:20Z)
- Faster Robust Tensor Power Method for Arbitrary Order [15.090593955414137]
The tensor power method (TPM) is one of the widely used techniques for tensor decomposition.
We apply a sketching method and achieve a running time of $\widetilde{O}(n^{p-1})$ for an order-$p$, dimension-$n$ tensor.
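For context, the plain tensor power method for a symmetric order-3 tensor looks roughly as follows; this is the textbook iteration only, without the sketching acceleration that the cited paper contributes, and the function name and defaults are illustrative.

```python
import numpy as np

def tensor_power_iteration(T, iters=100, seed=0):
    """Recover one eigenpair of a symmetric order-3 tensor T by power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(T.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        u = np.einsum('ijk,j,k->i', T, v, v)    # contract T with v along two modes
        v = u / np.linalg.norm(u)
    lam = np.einsum('ijk,i,j,k->', T, v, v, v)  # corresponding eigenvalue
    return lam, v
```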
arXiv Detail & Related papers (2023-06-01T07:12:00Z) - Multi-Rate VAE: Train Once, Get the Full Rate-Distortion Curve [29.86440019821837]
Variational autoencoders (VAEs) are powerful tools for learning latent representations of data used in a wide range of applications.
In this paper, we introduce Multi-Rate VAE, a computationally efficient framework for learning optimal parameters corresponding to various $\beta$ in a single training run.
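For reference, $\beta$ here is the weight in the standard $\beta$-VAE objective, i.e. the rate-distortion trade-off that Multi-Rate VAE sweeps in a single run (the specific conditioning mechanism of the paper is not reproduced here):

$\mathcal{L}_{\beta}(\theta, \phi; x) = \mathbb{E}_{q_{\phi}(z \mid x)}\left[-\log p_{\theta}(x \mid z)\right] + \beta \, \mathrm{KL}\left(q_{\phi}(z \mid x) \,\|\, p(z)\right)$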
arXiv Detail & Related papers (2022-12-07T19:02:34Z)
- Near-Linear Time and Fixed-Parameter Tractable Algorithms for Tensor Decompositions [51.19236668224547]
We study low rank approximation of tensors, focusing on the tensor train and Tucker decompositions.
For tensor train decomposition, we give a bicriteria $(1 + \epsilon)$-approximation algorithm with a small bicriteria rank and $O(q \cdot \mathrm{nnz}(A))$ running time.
In addition, we extend our algorithm to tensor networks with arbitrary graphs.
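For orientation, a plain (unaccelerated) TT-SVD decomposition looks like the sketch below; the cited paper's contribution is a bicriteria, near-linear-time variant of this kind of computation, which is not reproduced here, and the toy tensor and rank cap are arbitrary.

```python
import numpy as np

def tt_svd(A, max_rank):
    """Plain TT-SVD: sequential truncated SVDs of the unfolded tensor."""
    dims, d = A.shape, A.ndim
    cores, r_prev = [], 1
    M = A.reshape(dims[0], -1)
    for k in range(d - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = min(max_rank, s.size)                           # truncate the rank
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))  # next TT core
        M = (np.diag(s[:r]) @ Vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

cores = tt_svd(np.random.randn(6, 6, 6, 6), max_rank=3)  # toy example
```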
arXiv Detail & Related papers (2022-07-15T11:55:09Z)
- Spectral Tensor Train Parameterization of Deep Learning Layers [136.4761580842396]
We study low-rank parameterizations of weight matrices with embedded spectral properties in the Deep Learning context.
We show the effects of neural network compression in the classification setting, and of both compression and improved training stability in the generative adversarial training setting.
arXiv Detail & Related papers (2021-03-07T00:15:44Z)
- Tensor Completion via Tensor Networks with a Tucker Wrapper [28.83358353043287]
We propose to solve low-rank tensor completion (LRTC) via tensor networks with a Tucker wrapper.
A two-level alternating least squares method is then employed to update the unknown factors.
Numerical simulations show that the proposed algorithm is comparable with state-of-the-art methods.
arXiv Detail & Related papers (2020-10-29T17:54:01Z)
- Beyond Lazy Training for Over-parameterized Tensor Decomposition [69.4699995828506]
We show that gradient descent on an over-parameterized objective can go beyond the lazy training regime and utilize certain low-rank structure in the data.
arXiv Detail & Related papers (2020-10-22T00:32:12Z)
- Length-Adaptive Transformer: Train Once with Length Drop, Use Anytime with Search [84.94597821711808]
We extend PoWER-BERT (Goyal et al., 2020) and propose Length-Adaptive Transformer that can be used for various inference scenarios after one-shot training.
We conduct a multi-objective evolutionary search to find a length configuration that maximizes the accuracy and minimizes the efficiency metric under any given computational budget.
We empirically verify the utility of the proposed approach by demonstrating the superior accuracy-efficiency trade-off under various setups.
arXiv Detail & Related papers (2020-10-14T12:28:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.