FBSJNN: A Theoretically Interpretable and Efficiently Deep Learning method for Solving Partial Integro-Differential Equations
- URL: http://arxiv.org/abs/2412.11010v1
- Date: Sun, 15 Dec 2024 01:37:48 GMT
- Title: FBSJNN: A Theoretically Interpretable and Efficiently Deep Learning method for Solving Partial Integro-Differential Equations
- Authors: Zaijun Ye, Wansheng Wang
- Abstract summary: We propose a novel framework for solving a class of Partial Integro-Differential Equations (PIDEs) through a deep learning-based approach.
This method, termed the Forward-Backward Stochastic Jump Neural Network (FBSJNN), is both theoretically interpretable and numerically effective.
Numerical experiments indicate that the FBSJNN scheme can obtain numerical solutions with a relative error on the scale of $10^{-3}$.
- Abstract: We propose a novel framework for solving a class of Partial Integro-Differential Equations (PIDEs) and Forward-Backward Stochastic Differential Equations with Jumps (FBSDEJs) through a deep learning-based approach. This method, termed the Forward-Backward Stochastic Jump Neural Network (FBSJNN), is both theoretically interpretable and numerically effective. Theoretical analysis establishes the convergence of the numerical scheme and provides error estimates grounded in the universal approximation properties of neural networks. In comparison to existing methods, the key innovation of the FBSJNN framework is that it uses a single neural network to approximate both the solution of the PIDEs and the non-local integral, leveraging Taylor expansion for the latter. This enables the method to reduce the total number of parameters in FBSJNN, which enhances optimization efficiency. Numerical experiments indicate that the FBSJNN scheme can obtain numerical solutions with a relative error on the scale of $10^{-3}$.
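As a concrete illustration of the single-network idea, below is a minimal PyTorch sketch of a deep-BSDE-style training loop with jumps. Everything problem-specific here is an assumption made for illustration: the driverless backward dynamics, the Gaussian jump law and intensity, the terminal condition g, and the direct evaluation of the non-local increment through the network (the paper instead leverages a Taylor expansion for the non-local integral).

```python
# Minimal sketch, not the authors' code: one network u(t, x) supplies the
# solution, its spatial gradient (via autograd), and the non-local jump
# increment u(t, x + eta) - u(t, x). Driver f = 0, Gaussian jumps, and the
# terminal condition g are all assumptions made for illustration.
import torch

torch.manual_seed(0)
d, N, T, batch = 1, 25, 1.0, 512
dt = T / N
lam, jump_std = 0.3, 0.1                       # jump intensity / size (assumed)

u = torch.nn.Sequential(
    torch.nn.Linear(d + 1, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1))

def g(x):                                       # terminal condition (assumed)
    return (x ** 2).sum(dim=1, keepdim=True)

def u_and_grad(t, x):
    tx = torch.cat([t, x], dim=1).requires_grad_(True)
    val = u(tx)
    grad_x = torch.autograd.grad(val.sum(), tx, create_graph=True)[0][:, 1:]
    return val, grad_x

opt = torch.optim.Adam(u.parameters(), lr=1e-3)
for it in range(200):
    x, t = torch.randn(batch, d), torch.zeros(batch, 1)
    y, z = u_and_grad(t, x)                     # Y_0 = u(0, X_0)
    for n in range(N):
        dW = dt ** 0.5 * torch.randn(batch, d)
        k = torch.poisson(torch.full((batch, 1), lam * dt))
        eta = jump_std * torch.randn(batch, d) * k      # realized jumps
        val = u(torch.cat([t, x], dim=1))
        jump_inc = u(torch.cat([t, x + eta], dim=1)) - val
        eta_c = jump_std * torch.randn(batch, d)        # compensator sample
        comp = lam * dt * (u(torch.cat([t, x + eta_c], dim=1)) - val)
        y = y + (z * dW).sum(dim=1, keepdim=True) + jump_inc - comp
        x, t = x + dW + eta, t + dt             # Euler step of the jump-diffusion
        _, z = u_and_grad(t, x)
    loss = ((y - g(x)) ** 2).mean()             # enforce Y_T = g(X_T)
    opt.zero_grad(); loss.backward(); opt.step()
```

The point to notice is that one set of parameters serves the solution, its gradient, and the jump increment, which is what keeps the total parameter count down.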
Related papers
- The Finite Element Neural Network Method: One Dimensional Study [0.0]
This research introduces the finite element neural network method (FENNM) within the framework of the Petrov-Galerkin method.
FENNM uses convolution operations to approximate the weighted residual of the differential equations.
This enables the integration of forcing terms and natural boundary conditions into the loss function, similar to conventional finite element method (FEM) solvers.
arXiv Detail & Related papers (2025-01-21T21:39:56Z)
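A rough sketch of how a convolution can produce weighted residuals in the Petrov-Galerkin spirit described above; the 1D problem, the uniform grid, and the discrete hat-function kernel are assumptions, not details taken from the paper.

```python
# A sketch under assumptions (1D problem -u'' = f on [0, 1], uniform grid,
# discrete hat-function test kernel); none of these details come from FENNM.
import torch
import torch.nn.functional as F

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
x = torch.linspace(0.0, 1.0, 101).unsqueeze(1).requires_grad_(True)
f = torch.sin(torch.pi * x)                 # forcing term (assumed)

u = net(x)
du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
residual = (-d2u - f).t().unsqueeze(0)      # pointwise residual, shape (1, 1, 101)

# Convolving with a test-function kernel approximates integrals of
# residual * test function over each local support (weighted residuals).
hat = torch.tensor([[[0.25, 0.5, 0.25]]])   # discrete hat function (assumed)
weighted = F.conv1d(residual, hat)
loss = (weighted ** 2).mean()
loss.backward()
```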
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
arXiv Detail & Related papers (2024-06-05T17:59:22Z)
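For reference, the classical walk-on-spheres estimator that NWoS builds on looks like this for the Laplace equation; the unit-disk domain and the Dirichlet data are assumed for illustration.

```python
# Plain Monte Carlo walk-on-spheres for the Laplace equation on the unit disk
# (my assumed setup). Each walk jumps to a uniform point on the largest
# inscribed sphere, stopping once within eps of the boundary.
import numpy as np

rng = np.random.default_rng(0)

def boundary_g(p):                          # Dirichlet data (assumed, harmonic)
    return p[0] ** 2 - p[1] ** 2

def dist_to_boundary(p):                    # unit disk: distance to the circle
    return 1.0 - np.linalg.norm(p)

def walk_on_spheres(p, eps=1e-3):
    p = np.array(p, dtype=float)
    while True:
        r = dist_to_boundary(p)
        if r < eps:                         # close enough: read boundary value
            return boundary_g(p / np.linalg.norm(p))
        theta = rng.uniform(0.0, 2.0 * np.pi)
        p = p + r * np.array([np.cos(theta), np.sin(theta)])

# estimate u(0.3, 0.2) by averaging many walks
est = np.mean([walk_on_spheres([0.3, 0.2]) for _ in range(5000)])
print(est)   # the exact harmonic solution here is x^2 - y^2 = 0.05
```

NWoS, as the summary describes it, replaces this pointwise averaging with a neural solver trained on the same stochastic representation.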
- Enriched Physics-informed Neural Networks for Dynamic Poisson-Nernst-Planck Systems [0.8192907805418583]
This paper proposes a meshless deep learning algorithm, enriched physics-informed neural networks (EPINNs), to solve dynamic Poisson-Nernst-Planck (PNP) equations.
EPINNs take traditional physics-informed neural networks as the foundational framework and add adaptive loss weights to balance the loss functions.
Numerical results indicate that the new method has better applicability than traditional numerical methods in solving such coupled nonlinear systems.
arXiv Detail & Related papers (2024-02-01T02:57:07Z)
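The summary does not spell out the weighting rule, so the sketch below shows one common way to realize adaptive loss weights, uncertainty-style weighting with a learnable log-variance per term; treat it as an assumption rather than the paper's exact scheme.

```python
# Uncertainty-style adaptive loss weighting: an assumed stand-in for
# EPINNs' adaptive weights, not the paper's exact rule.
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
log_s = torch.zeros(2, requires_grad=True)      # learnable log-variances
opt = torch.optim.Adam(list(net.parameters()) + [log_s], lr=1e-3)

def total_loss(loss_pde, loss_bc):
    # L = sum_i exp(-s_i) * L_i + s_i: terms with persistently large residuals
    # are down-weighted automatically as the corresponding s_i grows.
    losses = torch.stack([loss_pde, loss_bc])
    return (torch.exp(-log_s) * losses + log_s).sum()

# dummy residuals standing in for the PNP residual and boundary losses
loss_pde = (net(torch.rand(64, 2)) ** 2).mean()
loss_bc = ((net(torch.rand(16, 2)) - 1.0) ** 2).mean()
opt.zero_grad()
total_loss(loss_pde, loss_bc).backward()
opt.step()
```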
- HNS: An Efficient Hermite Neural Solver for Solving Time-Fractional Partial Differential Equations [12.520882780496738]
We present the high-precision Hermite Neural Solver (HNS) for solving time-fractional partial differential equations.
The experimental results show that HNS has significantly improved accuracy and flexibility compared to existing L1-based methods.
arXiv Detail & Related papers (2023-10-07T12:44:47Z)
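For context, the L1 baseline mentioned above is the standard uniform-grid discretization of the Caputo derivative of order $\alpha \in (0,1)$ with step $\tau$:

$$\partial_t^{\alpha} u(t_n) \approx \frac{\tau^{-\alpha}}{\Gamma(2-\alpha)} \sum_{j=0}^{n-1} b_j \left( u(t_{n-j}) - u(t_{n-j-1}) \right), \qquad b_j = (j+1)^{1-\alpha} - j^{1-\alpha}.$$

As its name suggests, HNS builds on Hermite interpolation rather than the piecewise-linear interpolation underlying this formula.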
- An Optimization-based Deep Equilibrium Model for Hyperspectral Image Deconvolution with Convergence Guarantees [71.57324258813675]
We propose a novel methodology for addressing the hyperspectral image deconvolution problem.
A new optimization problem is formulated, leveraging a learnable regularizer in the form of a neural network.
The derived iterative solver is then expressed as a fixed-point calculation problem within the Deep Equilibrium framework.
arXiv Detail & Related papers (2023-06-10T08:25:16Z)
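A minimal sketch of the deep-equilibrium pattern the summary describes: the iterative solver is a fixed-point map z = f(z, y) containing a small learnable regularizer. The blur operator A, the sizes, and the step size are illustrative assumptions.

```python
# Sketch of the deep-equilibrium pattern: a fixed-point map built from a
# data-fit gradient step plus a small learnable regularizer (all assumed).
import torch

n = 64
A = torch.randn(n, n) * 0.1                 # blur operator (assumed)
y = torch.randn(n, 1)                       # observed image, flattened
reg = torch.nn.Sequential(torch.nn.Linear(n, n), torch.nn.Tanh(),
                          torch.nn.Linear(n, n))
step = 0.05

def f(z):
    grad_fit = A.t() @ (A @ z - y)          # gradient of 0.5 * ||A z - y||^2
    return z - step * grad_fit - step * reg(z.t()).t()

z = torch.zeros(n, 1)
for _ in range(50):                         # naive fixed-point iteration
    z = f(z)
```

A real DEQ finds the fixed point with a root solver and differentiates through it implicitly, which is where the convergence guarantees enter.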
- Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion: the need to feed whole datasets to the unrolled optimizers, and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z)
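For readers unfamiliar with algorithm unrolling, the ingredient SURF extends to the federated setting, here is a toy sketch in which each "layer" performs one gradient-descent step with a learnable step size; the quadratic objective is a stand-in, not SURF itself.

```python
# Algorithm unrolling in miniature: a fixed number of gradient steps whose
# step sizes are trained end-to-end (toy objective, assumed for illustration).
import torch

steps = 10
log_lr = torch.zeros(steps, requires_grad=True)   # learnable per-step rates

def unrolled_solve(A, b):
    x = torch.zeros(b.shape)
    for k in range(steps):                        # the unrolled iterations
        grad = A @ x - b
        x = x - torch.exp(log_lr[k]) * grad
    return x

A = torch.eye(4) * 2.0
b = torch.ones(4)
x = unrolled_solve(A, b)
loss = ((A @ x - b) ** 2).sum()                   # train the step sizes
loss.backward()
```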
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ an implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
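A toy sketch of an implicit gradient step, the idea named above: the update $\theta_{k+1} = \theta_k - \eta \nabla L(\theta_{k+1})$ evaluates the gradient at the new iterate, solved here by fixed-point iteration on an assumed quadratic loss rather than an actual PINN.

```python
# Implicit gradient step on L(t) = 0.5 * ||t||^2 (assumed toy loss): the
# inner loop solves nxt = theta - eta * grad_L(nxt) by fixed-point iteration.
import torch

theta = torch.tensor([2.0, -1.0])
eta = 0.5

def grad_L(t):                    # gradient of the toy quadratic
    return t

for _ in range(20):
    nxt = theta.clone()
    for _ in range(10):           # fixed-point solve of the implicit equation
        nxt = theta - eta * grad_L(nxt)
    theta = nxt
print(theta)                      # contracts toward the minimizer at 0
```

The implicit evaluation damps the update for steep directions, which is the stability advantage over an explicit step.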
- D4FT: A Deep Learning Approach to Kohn-Sham Density Functional Theory [79.50644650795012]
We propose a deep learning approach to solve Kohn-Sham Density Functional Theory (KS-DFT).
We prove that such an approach has the same expressivity as the SCF method, yet reduces the computational complexity.
In addition, we show that our approach enables us to explore more complex neural-based wave functions.
arXiv Detail & Related papers (2023-03-01T10:38:10Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
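The basic operation behind interval reachability is propagating an input box through each layer; here is a sketch for one affine layer followed by a ReLU, with made-up weights.

```python
# Interval bound propagation through affine + ReLU: the standard building
# block of interval reachability (weights and input box are made up).
import torch

W = torch.randn(8, 4)
b = torch.randn(8)
lo, hi = -torch.ones(4), torch.ones(4)     # input box [lo, hi]

mid, rad = (lo + hi) / 2, (hi - lo) / 2
mid_out = W @ mid + b
rad_out = W.abs() @ rad                    # |W| maps radii to radii
lo_out, hi_out = mid_out - rad_out, mid_out + rad_out

# ReLU is monotone, so applying it to the endpoints keeps sound bounds
lo_relu, hi_relu = lo_out.clamp(min=0), hi_out.clamp(min=0)
print(lo_relu, hi_relu)                    # guaranteed output bounds
```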
- Conjugate Gradient Method for Generative Adversarial Networks [0.0]
It is not feasible to calculate the Jensen-Shannon divergence between the density function of the data and the density function of a deep neural network model.
Generative adversarial networks (GANs) can be used to formulate this problem as a discriminative problem with two models, a generator and a discriminator.
We propose to apply the conjugate gradient method to solve the local Nash equilibrium problem in GANs.
arXiv Detail & Related papers (2022-03-28T04:44:45Z)
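A sketch of the Fletcher-Reeves variant of nonlinear conjugate gradient, the optimizer family being applied to the GAN local Nash problem above; it is shown on a plain quadratic, not on an actual generator/discriminator pair.

```python
# Fletcher-Reeves nonlinear conjugate gradient on a toy quadratic (assumed
# objective); in the paper's setting the updates act on GAN parameters.
import torch

def grad(x):                                # gradient of 2*x1^2 + 0.5*x2^2
    return torch.tensor([4.0, 1.0]) * x

x = torch.tensor([1.0, 1.0])
g = grad(x)
d = -g                                      # initial direction: steepest descent
for _ in range(10):
    alpha = 0.1                             # a line search would set this
    x = x + alpha * d
    g_new = grad(x)
    beta = (g_new @ g_new) / (g @ g)        # Fletcher-Reeves coefficient
    d = -g_new + beta * d
    g = g_new
print(x)
```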
- Efficiently Solving High-Order and Nonlinear ODEs with Rational Fraction Polynomial: the Ratio Net [3.155317790896023]
This study takes a different approach by introducing a neural network architecture for constructing trial functions, known as the ratio net.
Empirical trials demonstrate that the proposed method exhibits higher efficiency than existing approaches.
The ratio net holds promise for advancing the efficiency and effectiveness of solving differential equations.
arXiv Detail & Related papers (2021-05-18T16:59:52Z)
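One way to read "rational fraction polynomial" trial functions is as a ratio of two learned polynomial expansions; the parameterization below is an assumption for illustration, not necessarily the paper's exact architecture.

```python
# Ratio-style trial function: a learned polynomial divided by a learned,
# positivity-protected polynomial (assumed form, not the paper's exact net).
import torch

deg = 5
a = torch.randn(deg + 1, requires_grad=True)   # numerator coefficients
c = torch.randn(deg + 1, requires_grad=True)   # denominator coefficients

def ratio_net(x):
    powers = torch.stack([x ** k for k in range(deg + 1)], dim=-1)
    num = powers @ a
    den = 1.0 + (powers @ c) ** 2              # kept positive to avoid poles
    return num / den

x = torch.linspace(-1, 1, 100)
u = ratio_net(x)                                # trial solution values
# u(x) can now be plugged into an ODE residual loss and trained as usual
```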