Learning nonlinear integral operators via Recurrent Neural Networks and
its application in solving Integro-Differential Equations
- URL: http://arxiv.org/abs/2310.09434v1
- Date: Fri, 13 Oct 2023 22:57:46 GMT
- Title: Learning nonlinear integral operators via Recurrent Neural Networks and
its application in solving Integro-Differential Equations
- Authors: Hardeep Bassi, Yuanran Zhu, Senwei Liang, Jia Yin, Cian C. Reeves,
Vojtech Vlcek, and Chao Yang
- Abstract summary: We learn and represent nonlinear integral operators that appear in nonlinear integro-differential equations (IDEs).
The LSTM-RNN representation of the nonlinear integral operator allows us to turn a system of nonlinear integro-differential equations into a system of ordinary differential equations.
We show how this methodology can effectively solve Dyson's equation for quantum many-body systems.
- Score: 4.011446845089061
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose using LSTM-RNNs (Long Short-Term Memory-Recurrent
Neural Networks) to learn and represent nonlinear integral operators that
appear in nonlinear integro-differential equations (IDEs). The LSTM-RNN
representation of the nonlinear integral operator allows us to turn a system of
nonlinear integro-differential equations into a system of ordinary differential
equations for which many efficient solvers are available. Furthermore, because
the use of LSTM-RNN representation of the nonlinear integral operator in an IDE
eliminates the need to perform a numerical integration in each numerical time
evolution step, the overall temporal cost of the LSTM-RNN-based IDE solver can
be reduced to $O(n_T)$ from $O(n_T^2)$ if an $n_T$-step trajectory is to be
computed. We illustrate the efficiency and robustness of this LSTM-RNN-based
numerical IDE solver with a model problem. Additionally, we highlight the
generalizability of the learned integral operator by applying it to IDEs driven
by different external forces. As a practical application, we show how this
methodology can effectively solve Dyson's equation for quantum many-body
systems.
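A minimal sketch of the scheme (PyTorch assumed; `IntegralSurrogate` and `solve_ide` are illustrative names, not the authors' code). The LSTM carries the trajectory history in its hidden state, so the integral term costs O(1) work per step and an $n_T$-step trajectory costs $O(n_T)$ rather than $O(n_T^2)$:

```python
import torch
import torch.nn as nn

class IntegralSurrogate(nn.Module):
    r"""LSTM stand-in for the memory term I(t) = \int_0^t K(t, s, y(s)) ds."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.cell = nn.LSTMCell(dim, hidden)
        self.head = nn.Linear(hidden, dim)

    def step(self, y_t, state):
        state = self.cell(y_t, state)      # absorb the newest trajectory sample
        return self.head(state[0]), state  # current estimate of the integral term

def solve_ide(f, surrogate, y0, dt, n_steps):
    """Forward-Euler time stepping: O(1) per step, O(n_T) overall."""
    y, state, traj = y0, None, [y0]
    for k in range(n_steps):
        I_t, state = surrogate.step(y, state)   # learned integral term, no quadrature
        y = y + dt * (f(y, k * dt) + I_t)       # dy/dt = f(y, t) + I(t)
        traj.append(y)
    return torch.stack(traj)
```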
Related papers
- Unlocking State-Tracking in Linear RNNs Through Negative Eigenvalues [65.41946981594567]
Linear Recurrent Neural Networks (LRNNs) have emerged as efficient alternatives to Transformers in large language modeling.
However, LRNNs struggle to perform state-tracking, which may impair performance in tasks such as code evaluation or tracking a chess game.
Our work enhances the expressivity of modern LRNNs, broadening their applicability without changing the cost of training or inference.
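One way to read the mechanism (an illustrative sketch, not the paper's code): a diagonal linear RNN whose transition values are squashed with tanh instead of sigmoid, so eigenvalues can be negative and the hidden state can flip sign between steps, as parity-style state tracking requires:

```python
import torch

def diagonal_linear_rnn(x, a_raw, allow_negative=True):
    # Recurrence h_t = a * h_{t-1} + x_t with diagonal a.
    # sigmoid keeps eigenvalues in (0, 1); tanh extends them to (-1, 1),
    # letting the state flip sign step to step (needed e.g. for parity).
    a = torch.tanh(a_raw) if allow_negative else torch.sigmoid(a_raw)
    h, out = torch.zeros_like(x[0]), []
    for x_t in x:                      # x: (T, d)
        h = a * h + x_t
        out.append(h)
    return torch.stack(out)
```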
arXiv Detail & Related papers (2024-11-19T14:35:38Z)
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
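The abstract above does not spell out the ProdLayer; a plausible hedged sketch is a layer that augments features with learned pairwise products, since dimensional analysis naturally produces product quantities (e.g. flux = velocity x density):

```python
import torch
import torch.nn as nn

class ProdLayer(nn.Module):
    """Hypothetical product-augmenting layer; details differ from the paper's."""
    def __init__(self, channels, n_pairs=4):
        super().__init__()
        self.a = nn.Linear(channels, n_pairs)
        self.b = nn.Linear(channels, n_pairs)
        self.out = nn.Linear(channels + n_pairs, channels)

    def forward(self, x):                    # x: (..., channels)
        prod = self.a(x) * self.b(x)         # learned products of channel mixtures
        return self.out(torch.cat([x, prod], dim=-1))
```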
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- PMNN: Physical Model-driven Neural Network for solving time-fractional differential equations [17.66402435033991]
An innovative Physical Model-driven Neural Network (PMNN) method is proposed to solve time-fractional differential equations.
It effectively combines deep neural networks (DNNs) with the approximation of fractional derivatives.
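The standard ingredient such methods build on is a discrete fractional-derivative approximation; here is a sketch of the classical L1 scheme for the Caputo derivative (illustrative; the paper's exact discretization may differ), whose output would be matched against f(t, u) in a residual loss:

```python
import numpy as np
from math import gamma

def l1_caputo(u_vals, dt, alpha):
    """L1 scheme for the Caputo derivative D^alpha u(t_n), 0 < alpha < 1."""
    n = len(u_vals) - 1                 # u_vals: samples u(t_0), ..., u(t_n)
    k = np.arange(n)
    b = (n - k) ** (1 - alpha) - (n - k - 1) ** (1 - alpha)
    return dt ** (-alpha) / gamma(2 - alpha) * np.sum(b * np.diff(u_vals))
```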
arXiv Detail & Related papers (2023-10-07T12:43:32Z)
- A Polynomial Time Quantum Algorithm for Exponentially Large Scale Nonlinear Differential Equations via Hamiltonian Simulation [1.6003521378074745]
We introduce a class of systems of nonlinear ODEs that can be efficiently solved on quantum computers.
Specifically, we employ the Koopman-von Neumann linearization to map the system of nonlinear ODEs to Hamiltonian dynamics.
This allows us to use the optimal Hamiltonian simulation technique for solving the nonlinear ODEs with $O(\log(N))$ overhead.
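The standard Koopman-von Neumann construction behind this (the paper's variant may add details): the nonlinear flow induces linear dynamics generated by a Hermitian operator, so the evolution is unitary and Hamiltonian simulation applies directly:

```latex
\begin{align*}
  \dot{x} &= F(x), \qquad x \in \mathbb{R}^N &&\text{(nonlinear ODE system)} \\
  i\,\partial_t \psi &= \hat{H}\,\psi, \qquad
  \hat{H} = \tfrac{1}{2}\sum_{j=1}^{N}\left(F_j(x)\,\hat{p}_j + \hat{p}_j\,F_j(x)\right),
  \quad \hat{p}_j = -i\,\partial_{x_j} &&\text{(linear, Hermitian)}
\end{align*}
```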
arXiv Detail & Related papers (2023-05-01T04:22:56Z)
- DOSnet as a Non-Black-Box PDE Solver: When Deep Learning Meets Operator Splitting [12.655884541938656]
We develop a learning-based PDE solver, which we name Deep Operator-Splitting Network (DOSnet).
DOSnet is constructed from the physical rules and operators governing the underlying dynamics and contains learnable parameters.
We train and validate it on several types of operator-decomposable differential equations.
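A hedged sketch of the operator-splitting pattern (illustrative names, not the authors' architecture): for du/dt = (A + B)u, advance one Lie-splitting step per iteration, with one sub-propagator known analytically and the other learned:

```python
import torch
import torch.nn as nn

class SplitStepSolver(nn.Module):
    def __init__(self, exp_A_dt, dim, hidden=64):
        super().__init__()
        self.exp_A_dt = exp_A_dt               # analytic sub-propagator, e.g. diffusion
        self.exp_B_dt = nn.Sequential(         # learned sub-propagator for the hard part
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, u, n_steps):
        for _ in range(n_steps):
            u = self.exp_B_dt(self.exp_A_dt(u))   # Lie splitting: e^{dt B} e^{dt A} u
        return u
```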
arXiv Detail & Related papers (2022-12-11T18:23:56Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly-complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
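The curvature diagnostic can be estimated without forming the Hessian; a sketch (illustrative, the paper's protocol may differ) using Hutchinson probes, since the Laplacian of the loss is the trace of its Hessian:

```python
import torch

def loss_laplacian(loss_fn, params, n_probes=10):
    """Hutchinson estimate of tr(Hessian) = Laplacian; params is a flat 1-D tensor."""
    g = torch.autograd.grad(loss_fn(params), params, create_graph=True)[0]
    est = 0.0
    for _ in range(n_probes):
        v = torch.randn_like(params)
        hv = torch.autograd.grad(g @ v, params, retain_graph=True)[0]  # Hessian-vector product
        est += (v @ hv).item()
    return est / n_probes
```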
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Neural Integral Equations [3.087238735145305]
We introduce a method for learning unknown integral operators from data using an IE solver.
We also present Attentional Neural Integral Equations (ANIE), which replaces the integral with self-attention.
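The "attention as integral" reading in a few lines (a sketch; ANIE itself is more elaborate): softmax attention over sampled points computes a weighted sum of learned kernel values, i.e. a discretization of \int K(t, s) v(s) ds with learned kernel and quadrature-like weights:

```python
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=32, num_heads=4, batch_first=True)
z = torch.randn(8, 100, 32)        # features at 100 sample points of the domain
integral, _ = attn(z, z, z)        # (8, 100, 32): one "integral value" per point
```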
arXiv Detail & Related papers (2022-09-30T02:32:17Z)
- Legendre Deep Neural Network (LDNN) and its application for approximation of nonlinear Volterra Fredholm Hammerstein integral equations [1.9649448021628986]
We propose the Legendre Deep Neural Network (LDNN) for solving nonlinear Volterra Fredholm Hammerstein integral equations (VFHIEs).
We show that using the Gaussian quadrature collocation method in combination with the LDNN yields a novel numerical solution for nonlinear VFHIEs.
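A sketch of the quadrature-collocation recipe (illustrative; the LDNN's Legendre structure is omitted): approximate the integral term of u(x) = g(x) + \int_0^1 K(x,t) G(u(t)) dt with Gauss-Legendre nodes, then penalize the residual at collocation points during training:

```python
import numpy as np

nodes, weights = np.polynomial.legendre.leggauss(32)
t = 0.5 * (nodes + 1.0)            # map Gauss-Legendre nodes from [-1, 1] to [0, 1]
w = 0.5 * weights

def residual(u, g, K, G, x):
    # residual of u(x) = g(x) + \int_0^1 K(x,t) G(u(t)) dt at points x
    integral = (w * K(x[:, None], t[None, :]) * G(u(t))).sum(axis=1)
    return u(x) - g(x) - integral  # driven toward zero during training
```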
arXiv Detail & Related papers (2021-06-27T21:00:09Z)
- Learning to Solve the AC-OPF using Sensitivity-Informed Deep Neural Networks [52.32646357164739]
We propose a deep neural network (DNN) approach to finding solutions of the AC optimal power flow (AC-OPF) problem.
The proposed sensitivity-informed DNN (SIDNN) is compatible with a broad range of OPF schemes.
It can be seamlessly integrated into other learning-to-OPF schemes.
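A heavily hedged guess at what "sensitivity-informed" training looks like (illustrative only; the paper's loss may differ): match not just the OPF solution y*(d) for a load d but also its sensitivity dy*/dd, probed with vector-Jacobian products:

```python
import torch

def sensitivity_loss(model, d, y_star, J_star, lam=1.0):
    """d: load input (nd,); y_star: OPF solution (ny,); J_star: dy*/dd, shape (ny, nd)."""
    d = d.clone().requires_grad_(True)
    y = model(d)
    fit = ((y - y_star) ** 2).mean()
    v = torch.randn_like(y)                       # random probe direction
    vjp = torch.autograd.grad((y * v).sum(), d, create_graph=True)[0]  # v^T (dy/dd)
    sens = ((vjp - v @ J_star) ** 2).mean()       # match the true sensitivity
    return fit + lam * sens
```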
arXiv Detail & Related papers (2021-03-27T00:45:23Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
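A sketch of the resulting training loop (illustrative): simultaneous gradient descent-ascent, with the model f descending and the adversary g ascending on the same objective:

```python
import torch

def gda_step(f, g, batch, opt_f, opt_g, objective):
    # descent step for the model f on L(f, g)
    opt_f.zero_grad()
    objective(f, g, batch).backward()
    opt_f.step()
    # ascent step for the adversary g, i.e. descent on -L(f, g)
    opt_g.zero_grad()
    (-objective(f, g, batch)).backward()
    opt_g.step()
```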
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm that our multi-level graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
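The multipole idea in a two-level sketch (illustrative; the actual operator uses a deeper hierarchy and graph kernels): near-field interactions stay local, far-field ones pass through a coarse level, so no dense N x N kernel is ever formed:

```python
import torch

def two_level_kernel(x, feats, radius, pool, unpool, k_near, k_far):
    # near field: kernel restricted to pairs within `radius`
    # (masked dense here for brevity; sparse in practice, giving O(N))
    d = torch.cdist(x, x)
    near = (k_near(d) * (d < radius)) @ feats
    # far field: restrict to M << N coarse nodes, interact there, interpolate back
    far = unpool(k_far(pool(feats)))
    return near + far
```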
arXiv Detail & Related papers (2020-06-16T21:56:22Z)