Quantum-Inspired Tensor Neural Networks for Option Pricing
- URL: http://arxiv.org/abs/2212.14076v2
- Date: Sun, 10 Mar 2024 21:29:53 GMT
- Title: Quantum-Inspired Tensor Neural Networks for Option Pricing
- Authors: Raj G. Patel, Chia-Wei Hsing, Serkan Sahin, Samuel Palmer, Saeed S.
Jahromi, Shivam Sharma, Tomas Dominguez, Kris Tziritas, Christophe Michel,
Vincent Porte, Mustafa Abid, Stephane Aubert, Pierre Castellani, Samuel
Mugel, Roman Orus
- Abstract summary: Recent advances in deep learning have enabled us to address the curse of dimensionality (COD) by solving problems in higher dimensions.
A subset of these approaches addresses the COD by solving high-dimensional PDEs.
This has opened the door to a variety of real-world problems, ranging from mathematical finance to stochastic control for industrial applications.
- Score: 4.3942901219301564
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in deep learning have enabled us to address the curse of
dimensionality (COD) by solving problems in higher dimensions. A subset of such
approaches has addressed the COD by solving high-dimensional PDEs, which has
opened the door to a variety of real-world problems ranging from mathematical
finance to stochastic control for industrial applications. Although feasible,
these deep learning methods are still constrained by training time and memory.
To tackle these shortcomings, we show that Tensor Neural Networks (TNN) provide
significant parameter savings while attaining the same accuracy as a classical
Dense Neural Network (DNN). We also show that TNN can be trained faster than a
DNN of the same accuracy. In addition, we introduce the Tensor Network
Initializer (TNN Init), a weight initialization scheme that leads to faster
convergence with smaller variance for an equivalent parameter count compared to
a DNN. We benchmark TNN and TNN Init by applying them to solve the parabolic
PDE associated with the Heston model, which is widely used in financial
pricing theory.
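For reference, the "parabolic PDE associated with the Heston model" is, in its standard risk-neutral form (with asset price S, instantaneous variance v, rate r, mean-reversion speed kappa, long-run variance theta, vol-of-vol sigma, and spot-vol correlation rho, and ignoring any market price of volatility risk), the Heston pricing PDE:

```latex
\frac{\partial V}{\partial t}
  + \frac{1}{2} v S^2 \frac{\partial^2 V}{\partial S^2}
  + \rho \sigma v S \frac{\partial^2 V}{\partial S \, \partial v}
  + \frac{1}{2} \sigma^2 v \frac{\partial^2 V}{\partial v^2}
  + r S \frac{\partial V}{\partial S}
  + \kappa (\theta - v) \frac{\partial V}{\partial v}
  - r V = 0
```

The abstract does not spell out how the TNN layers are built. A common quantum-inspired construction is to factorize each dense weight matrix into a tensor network (e.g. a matrix product operator); the sketch below illustrates that idea under our own assumptions. The class name `TNLinear`, the two-core factorization, and the initialization scaling are illustrative choices, not the authors' implementation or their exact TNN Init scheme.

```python
import torch
import torch.nn as nn

class TNLinear(nn.Module):
    """Tensor-network (MPO-style) stand-in for a dense layer.

    The (d_in1*d_in2) x (d_out1*d_out2) weight matrix is factorized into
    two cores A and B contracted over a bond index of size `bond`, cutting
    the parameter count from d_in1*d_in2*d_out1*d_out2 down to
    d_in1*d_out1*bond + bond*d_in2*d_out2.
    """

    def __init__(self, d_in=(8, 8), d_out=(8, 8), bond=4):
        super().__init__()
        self.d_in, self.d_out = d_in, d_out
        # Cores scaled so the contracted effective matrix has roughly
        # variance-preserving entries (our assumption; the paper's
        # TNN Init scheme may differ in detail).
        self.A = nn.Parameter(torch.randn(d_in[0], d_out[0], bond)
                              / (d_in[0] * bond) ** 0.5)
        self.B = nn.Parameter(torch.randn(bond, d_in[1], d_out[1])
                              / d_in[1] ** 0.5)
        self.bias = nn.Parameter(torch.zeros(d_out[0] * d_out[1]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b = x.shape[0]
        x = x.view(b, self.d_in[0], self.d_in[1])  # split the input index
        # y[b,a,c] = sum_{i,j,k} x[b,i,j] * A[i,a,k] * B[k,j,c]
        y = torch.einsum("bij,iak,kjc->bac", x, self.A, self.B)
        return y.reshape(b, -1) + self.bias

# With d_in = d_out = (8, 8) and bond = 4 this layer stores
# 8*8*4 + 4*8*8 = 512 weights where a dense 64x64 layer stores 4096.
layer = TNLinear()
out = layer(torch.randn(32, 64))  # -> shape (32, 64)
```

Replacing each dense layer of a PDE-solver network (e.g. a deep-BSDE-style solver) with such a factorized layer is where the parameter savings the abstract refers to come from: here 512 weights stand in for a dense layer's 4096.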
Related papers
- Make Me a BNN: A Simple Strategy for Estimating Bayesian Uncertainty
from Pre-trained Models [40.38541033389344]
Deep Neural Networks (DNNs) are powerful tools for various computer vision tasks, yet they often struggle with reliable uncertainty quantification.
We introduce the Adaptable Bayesian Neural Network (ABNN), a simple and scalable strategy to seamlessly transform DNNs into BNNs.
We conduct extensive experiments across multiple datasets for image classification and semantic segmentation tasks, and our results demonstrate that ABNN achieves state-of-the-art performance.
arXiv Detail & Related papers (2023-12-23T16:39:24Z)
- On the Computational Complexity and Formal Hierarchy of Second Order Recurrent Neural Networks [59.85314067235965]
We extend the theoretical foundation for the second-order recurrent network (2nd-order RNN).
We prove there exists a class of 2nd-order RNNs that is Turing-complete with bounded time.
We also demonstrate that 2nd-order RNNs without memory outperform modern-day models such as vanilla RNNs and gated recurrent units in recognizing regular grammars.
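For concreteness, a second-order RNN replaces the usual matrix-vector recurrence with a bilinear one: the next hidden state is computed from products of hidden-state and input components through a third-order weight tensor. A minimal sketch of such a cell follows; the class name, initialization, and dimensions are our illustrative choices, not the paper's.

```python
import torch
import torch.nn as nn

class SecondOrderRNNCell(nn.Module):
    """Second-order RNN cell: the next hidden state is a bilinear
    function of the previous hidden state and the current input,
    parameterized by a third-order weight tensor W[i, j, k]."""

    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.W = nn.Parameter(0.1 * torch.randn(hidden_size, hidden_size, input_size))
        self.b = nn.Parameter(torch.zeros(hidden_size))

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        # z[n, i] = sum_{j,k} W[i, j, k] * h[n, j] * x[n, k]
        z = torch.einsum("ijk,nj,nk->ni", self.W, h, x)
        return torch.sigmoid(z + self.b)

# One step over a batch of 4 one-hot symbols from a 3-letter alphabet,
# with 5 hidden units; all dimensions here are hypothetical.
cell = SecondOrderRNNCell(input_size=3, hidden_size=5)
h = cell(torch.eye(3)[torch.tensor([0, 1, 2, 0])], torch.zeros(4, 5))
```

The bilinear form is what ties such networks to finite automata: with one-hot inputs, each input symbol selects a slice W[:, :, k], i.e. a state-transition map, which is why this architecture is natural for recognizing regular grammars.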
arXiv Detail & Related papers (2023-09-26T06:06:47Z)
- An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks [13.271286153792058]
Quantized neural networks (QNNs) have been developed, with binarized neural networks (BNNs), which are restricted to binary values, as a special case.
This paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties.
arXiv Detail & Related papers (2023-07-29T06:27:28Z)
- Quantum-Inspired Tensor Neural Networks for Partial Differential Equations [5.963563752404561]
Deep learning methods are constrained by training time and memory. To tackle these shortcomings, we implement Tensor Neural Networks (TNN).
We demonstrate that TNN provide significant parameter savings while attaining the same accuracy as the classical Dense Neural Network (DNN).
arXiv Detail & Related papers (2022-08-03T17:41:11Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Networks (SNNs) are promising energy-efficient AI models when implemented on neuromorphic hardware.
It remains a challenge to train SNNs efficiently due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which can achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
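For intuition, an implicit layer is defined by a fixed-point equation rather than an explicit feed-forward map. The sketch below (our illustration, not the paper's model or its reachability analysis) solves such an equation by plain fixed-point iteration:

```python
import torch

def implicit_layer(x: torch.Tensor, W: torch.Tensor, U: torch.Tensor,
                   b: torch.Tensor, iters: int = 50) -> torch.Tensor:
    """Solve the implicit equation z = tanh(W @ z + U @ x + b) by
    fixed-point iteration. Convergence requires W to be suitably
    contractive (e.g. spectral norm below 1), the same kind of
    well-posedness condition that reachability analyses rely on."""
    z = torch.zeros(W.shape[0])
    for _ in range(iters):
        z = torch.tanh(W @ z + U @ x + b)
    return z

# Hypothetical dimensions; W is scaled so its spectral norm is well
# below 1 with high probability, keeping the iteration contractive.
n, m = 16, 8
W = 0.3 * torch.randn(n, n) / n ** 0.5
U, b = torch.randn(n, m), torch.zeros(n)
z_star = implicit_layer(torch.randn(m), W, U, b)
```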
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Pruning and Slicing Neural Networks using Formal Verification [0.2538209532048866]
Deep neural networks (DNNs) play an increasingly important role in various computer systems.
In order to create these networks, engineers typically specify a desired topology, and then use an automated training algorithm to select the network's weights.
Here, we propose to address the challenge of simplifying these networks by harnessing recent advances in DNN verification.
arXiv Detail & Related papers (2021-05-28T07:53:50Z)
- TaxoNN: A Light-Weight Accelerator for Deep Neural Network Training [2.5025363034899732]
We present a novel approach that adds training capability to a baseline, inference-only DNN accelerator by splitting the SGD algorithm into simple computational elements.
Based on this approach we propose TaxoNN, a light-weight accelerator for DNN training.
Our experimental results show that TaxoNN incurs, on average, only a 0.97% higher misclassification rate than a full-precision implementation.
arXiv Detail & Related papers (2020-10-11T09:04:19Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising direction, inspired by the success of convolutional neural networks (CNNs) in computer vision, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)