Stochastic Delay Differential Games: Financial Modeling and Machine
Learning Algorithms
- URL: http://arxiv.org/abs/2307.06450v1
- Date: Wed, 12 Jul 2023 21:02:45 GMT
- Title: Stochastic Delay Differential Games: Financial Modeling and Machine
Learning Algorithms
- Authors: Robert Balkin and Hector D. Ceniceros and Ruimeng Hu
- Abstract summary: We propose a numerical methodology for finding the closed-loop Nash equilibrium of delay differential games through deep learning.
These games are prevalent in finance and economics, where multi-agent interaction and delayed effects are often desired features in a model.
- Score: 3.222802562733787
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a numerical methodology for finding the closed-loop
Nash equilibrium of stochastic delay differential games through deep learning.
These games are prevalent in finance and economics, where multi-agent
interaction and delayed effects are often desired features in a model but are
introduced at the expense of increased dimensionality of the problem. This
increase is especially significant because the dimensionality arising from the
number of players is compounded by the potentially infinite dimensionality
caused by the delay. Our approach parameterizes the controls of each player
using distinct recurrent neural networks. These recurrent neural network-based
controls are then trained using a modified version of Brown's fictitious play,
incorporating deep learning techniques. To evaluate the effectiveness of our
methodology, we test it on finance-related problems with known solutions.
Furthermore, we also develop new problems and derive their analytical Nash
equilibrium solutions, which serve as additional benchmarks for assessing the
performance of our proposed deep learning approach.
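To make the recipe concrete, the following is a minimal sketch, under our own toy assumptions, of the general scheme the abstract describes: each player's closed-loop control is an LSTM over the observed state path, and players alternately best-respond while the others are frozen, in the spirit of deep fictitious play. The dynamics, costs, delay, and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch (not the authors' code): deep fictitious play with
# RNN-parameterized closed-loop controls in a two-player toy delayed game.
# Dynamics, costs, delay, and all hyperparameters are illustrative.
import torch
import torch.nn as nn

class ControlRNN(nn.Module):
    """Maps the state path observed so far to a control at the current step."""
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x_path):                      # x_path: (batch, steps, 1)
        h, _ = self.rnn(x_path)
        return self.head(h[:, -1:, :])              # control at the last step

def simulate_cost(players, i, n_steps=20, dt=0.05, lag=5, batch=256):
    """Roll out toy delayed dynamics; return player i's Monte Carlo cost."""
    x = torch.zeros(batch, 1, 1)
    path, cost = [x], 0.0
    for t in range(n_steps):
        hist = torch.cat(path, dim=1)               # full state path so far
        u = [p(hist) if j == i else p(hist).detach()  # freeze the opponents
             for j, p in enumerate(players)]
        x_lag = path[max(0, t - lag)]               # delayed state feedback
        drift = sum(u) - 0.5 * x_lag
        x = x + drift * dt + 0.1 * dt**0.5 * torch.randn_like(x)
        path.append(x)
        cost = cost + dt * (x.pow(2) + u[i].pow(2)).mean()
    return cost

players = [ControlRNN(), ControlRNN()]
opts = [torch.optim.Adam(p.parameters(), lr=1e-3) for p in players]
for stage in range(10):                             # fictitious-play rounds
    for i, (p, opt) in enumerate(zip(players, opts)):
        for _ in range(50):                         # best response of player i
            opt.zero_grad()
            simulate_cost(players, i).backward()
            opt.step()
```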
Related papers
- Deep multitask neural networks for solving some stochastic optimal control problems [0.0]
In this paper, we consider a class of stochastic optimal control problems and introduce an effective solution employing multitask neural networks.
To train our multitask neural network, we introduce a novel scheme that dynamically balances the learning across tasks.
Through numerical experiments on real-world derivatives pricing problems, we show that our method outperforms state-of-the-art approaches.
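As a hedged illustration of what "dynamically balancing the learning across tasks" can look like in code, here is one simple scheme with learnable per-task weights, in the spirit of homoscedastic-uncertainty weighting; the paper's own scheme may differ.

```python
# Sketch of one dynamic task-balancing scheme (learnable per-task weights);
# an illustration, not necessarily the paper's method.
import torch
import torch.nn as nn

class BalancedMultiTaskLoss(nn.Module):
    def __init__(self, n_tasks):
        super().__init__()
        self.log_w = nn.Parameter(torch.zeros(n_tasks))  # learned per task

    def forward(self, task_losses):
        total = 0.0
        for i, loss_i in enumerate(task_losses):
            # exp(-log_w) scales the task; + log_w prevents trivial zeroing.
            total = total + torch.exp(-self.log_w[i]) * loss_i + self.log_w[i]
        return total
```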
arXiv Detail & Related papers (2024-01-23T17:20:48Z)
- Recent Developments in Machine Learning Methods for Stochastic Control and Games [3.3993877661368757]
Recently, computational methods based on machine learning have been developed for solving control problems and games.
We focus on deep learning methods that have unlocked the possibility of solving such problems, even in high dimensions or when the structure is very complex.
This paper provides an introduction to these methods and summarizes the state-of-the-art works at the crossroad of machine learning and control and games.
arXiv Detail & Related papers (2023-03-17T21:53:07Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning tasks into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
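For intuition, here is a hedged sketch of the spatial half of such a decomposition: a fine 2D field is split into interleaved coarse subgrids that could be learned in parallel and later reassembled. The staggering factor and shapes are illustrative, not NeuralStagger's actual configuration.

```python
# Illustrative spatial staggering: a fine 2D field is split into s*s coarse,
# interleaved subgrids and reassembled exactly.
import torch

def stagger2d(field, s=2):
    """field: (batch, H, W) -> (batch, s*s, H//s, W//s) interleaved subgrids."""
    grids = [field[:, i::s, j::s] for i in range(s) for j in range(s)]
    return torch.stack(grids, dim=1)

def unstagger2d(grids, s=2):
    b, _, hc, wc = grids.shape
    out = torch.zeros(b, hc * s, wc * s)
    for k in range(s * s):
        i, j = divmod(k, s)                  # matches the stagger2d ordering
        out[:, i::s, j::s] = grids[:, k]
    return out

x = torch.randn(4, 64, 64)
assert torch.allclose(unstagger2d(stagger2d(x)), x)  # lossless round trip
```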
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- On Robust Numerical Solver for ODE via Self-Attention Mechanism [82.95493796476767]
We explore training efficient and robust AI-enhanced numerical solvers with a small data size by mitigating intrinsic noise disturbances.
We first analyze the ability of the self-attention mechanism to regulate noise in supervised learning and then propose a simple-yet-effective numerical solver, Attr, which introduces an additive self-attention mechanism to the numerical solution of differential equations.
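The sketch below illustrates, under our own assumptions about names and sizes, what adding a self-attention correction to a learned ODE step might look like; it is not the paper's Attr architecture.

```python
# Hedged sketch: an Euler step plus a learned self-attention correction
# computed over the recent state history. All details are assumptions.
import torch
import torch.nn as nn

class AttentiveStepper(nn.Module):
    def __init__(self, dim=16, window=8):
        super().__init__()
        self.embed = nn.Linear(1, dim)
        self.attn = nn.MultiheadAttention(dim, num_heads=2, batch_first=True)
        self.out = nn.Linear(dim, 1)
        self.window = window

    def forward(self, hist, f_val, dt):
        # hist: (batch, steps, 1) recent states; f_val: (batch, 1) drift f(t, x)
        z = self.embed(hist[:, -self.window:, :])
        a, _ = self.attn(z, z, z)                # self-attention over history
        correction = self.out(a[:, -1, :])       # additive learned residual
        return hist[:, -1, :] + dt * f_val + correction
```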
arXiv Detail & Related papers (2023-02-05T01:39:21Z)
- Learning to Optimize Permutation Flow Shop Scheduling via Graph-based Imitation Learning [70.65666982566655]
Permutation flow shop scheduling (PFSS) is widely used in manufacturing systems.
We propose to train the model via expert-driven imitation learning, which yields faster, more stable, and more accurate convergence.
Compared with state-of-the-art baselines, our model uses only 37% as many network parameters, and its average solution gap to the expert solutions decreases from 6.8% to 1.3%.
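At its simplest, expert-driven imitation here can be sketched as training the model's next-job scores against the expert's choice with cross-entropy; the paper's graph-based model is richer than this illustration.

```python
# Minimal sketch of expert-driven imitation learning for scheduling: the
# model's scores over remaining jobs are trained to reproduce the job the
# expert schedules next. Illustrative only.
import torch
import torch.nn.functional as F

def imitation_loss(job_scores, expert_choice):
    # job_scores: (batch, n_jobs) unnormalized scores from the model
    # expert_choice: (batch,) index of the expert's next scheduled job
    return F.cross_entropy(job_scores, expert_choice)
```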
arXiv Detail & Related papers (2022-10-31T09:46:26Z)
- Interfacing Finite Elements with Deep Neural Operators for Fast Multiscale Modeling of Mechanics Problems [4.280301926296439]
In this work, we explore the idea of multiscale modeling with machine learning and employ DeepONet, a neural operator, as an efficient surrogate of the expensive solver.
DeepONet is trained offline using data acquired from the fine solver for learning the underlying and possibly unknown fine-scale dynamics.
We present various benchmarks to assess accuracy and speedup, and in particular we develop a coupling algorithm for a time-dependent problem.
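For readers unfamiliar with DeepONet, the sketch below shows its characteristic branch/trunk structure: a branch net encodes the input function sampled at fixed sensors, a trunk net encodes query points, and the output is their dot product. Layer sizes here are illustrative.

```python
# Sketch of a DeepONet-style surrogate (branch/trunk dot product).
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    def __init__(self, n_sensors=100, width=64, p=32):
        super().__init__()
        self.branch = nn.Sequential(nn.Linear(n_sensors, width), nn.Tanh(),
                                    nn.Linear(width, p))
        self.trunk = nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                                   nn.Linear(width, p))

    def forward(self, u_sensors, y):
        # u_sensors: (batch, n_sensors) input function at fixed sensor points
        # y: (batch, n_query, 1) locations where the output is evaluated
        b = self.branch(u_sensors)               # (batch, p)
        t = self.trunk(y)                        # (batch, n_query, p)
        return torch.einsum('bp,bqp->bq', b, t)  # G(u)(y) per query point
```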
arXiv Detail & Related papers (2022-02-25T20:46:08Z)
- FFNB: Forgetting-Free Neural Blocks for Deep Continual Visual Learning [14.924672048447338]
We devise a dynamic network architecture for continual learning based on a novel forgetting-free neural block (FFNB).
Training FFNB features on new tasks is achieved using a novel procedure that constrains the underlying parameters in the null-space of the previous tasks.
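A hedged sketch of the null-space idea, not FFNB's exact procedure: collect previous-task feature activations into a matrix and project new-task gradients onto its null space, so updates leave the old responses approximately unchanged.

```python
# Sketch: project gradients onto the null space of previous-task features.
import torch

def null_space_projector(A, eps=1e-5):
    """A: (n_samples, dim) previous-task features. Returns P such that
    P @ g lies in the null space of A (i.e., A @ P @ g ~ 0)."""
    _, s, vh = torch.linalg.svd(A, full_matrices=True)
    rank = int((s > eps * s.max()).sum())
    v_null = vh[rank:].T                  # orthonormal basis of the null space
    return v_null @ v_null.T

A = torch.randn(20, 50)                   # 20 stored samples, 50-dim features
P = null_space_projector(A)
g = torch.randn(50)                       # a candidate gradient
assert torch.allclose(A @ (P @ g), torch.zeros(20), atol=1e-4)
```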
arXiv Detail & Related papers (2021-11-22T17:23:34Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
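One plausible way to quantify and encourage such diversity, offered only as an illustration and not necessarily the paper's measure, is to penalize pairwise cosine similarity between hidden-unit weight vectors.

```python
# Illustrative neuron-diversity regularizer (an assumption, not necessarily
# the paper's diversity measure).
import torch
import torch.nn.functional as F

def diversity_penalty(W):
    # W: (n_neurons, fan_in) weight matrix of one hidden layer
    Wn = F.normalize(W, dim=1)
    sim = Wn @ Wn.T                          # pairwise cosine similarities
    off_diag = sim - torch.eye(W.shape[0])   # ignore self-similarity
    return off_diag.pow(2).sum() / (W.shape[0] * (W.shape[0] - 1))
```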
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
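The building block behind such Fisher embeddings can be sketched as follows: for a softmax classifier, the last-layer Fisher information of a single input has a closed form in its features and predicted probabilities. BAIT's batch-selection objective on top of these quantities is not shown here.

```python
# Sketch of the last-layer Fisher information of a softmax classifier for
# one input; selection rules built on it (as in BAIT) are omitted.
import torch

def fisher_matrix(x, p):
    """x: (d,) penultimate features; p: (C,) softmax probabilities.
    Gradient of log p_y w.r.t. the (C, d) weight matrix is (e_y - p) x^T;
    the Fisher is E_{y ~ p}[g_y g_y^T] over flattened gradients."""
    C, d = p.numel(), x.numel()
    G = torch.eye(C) - p.unsqueeze(0)                  # row y is (e_y - p)
    g = torch.einsum('yc,d->ycd', G, x).reshape(C, C * d)
    return torch.einsum('y,ya,yb->ab', p, g, g)        # (C*d, C*d)
```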
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
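To ground the "native solutions directly conducting binarization", here is a standard sign-binarization forward pass with a straight-through estimator for the backward pass, a common baseline offered as an illustration rather than any specific surveyed method.

```python
# Sign binarization with a straight-through estimator (STE): forward uses
# sign(); backward passes gradients through where |x| <= 1.
import torch

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        return grad_out * (x.abs() <= 1).float()   # clipped identity

w = torch.randn(8, requires_grad=True)
wb = BinarizeSTE.apply(w)        # binary weights used in the forward pass
wb.sum().backward()              # gradients still reach the real weights
```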
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.