Deep Neural Network Solutions for Oscillatory Fredholm Integral
Equations
- URL: http://arxiv.org/abs/2401.07003v1
- Date: Sat, 13 Jan 2024 07:26:47 GMT
- Title: Deep Neural Network Solutions for Oscillatory Fredholm Integral
Equations
- Authors: Jie Jiang and Yuesheng Xu
- Abstract summary: We develop a numerical method for solving the equation with DNNs as an approximate solution.
We then propose a multi-grade deep learning (MGDL) model to overcome the spectral bias issue of neural networks.
- Score: 12.102640617194025
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We studied the use of deep neural networks (DNNs) in the numerical solution
of the oscillatory Fredholm integral equation of the second kind. It is known
that the solution of the equation exhibits certain oscillatory behaviors due to
the oscillation of the kernel. It was pointed out recently that standard DNNs
favour low-frequency functions and, as a result, often produce poor
approximations of functions containing high-frequency components. We addressed
this issue in this study. We first developed a numerical method for solving the
equation with a DNN as the approximate solution by designing a numerical
quadrature rule tailored to computing oscillatory integrals involving DNNs. We
proved that the error of the DNN approximate solution of the equation is
bounded by the training loss and the quadrature error. We then proposed a
multi-grade deep learning (MGDL) model to overcome the spectral bias issue of
neural networks. Numerical experiments demonstrate that the MGDL model is
effective in extracting multiscale information of the oscillatory solution and
overcoming the spectral bias issue from which a standard DNN model suffers.
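
For orientation, a second-kind Fredholm equation with an oscillatory kernel can be written in the generic form below; the specific kernel, interval, and wavenumber convention used in the paper may differ, so this is only an illustrative sketch.

```latex
% Generic second-kind Fredholm integral equation with an oscillatory kernel
% (illustrative form; the paper's kernel and domain may differ):
u(x) - \int_a^b K(x,t)\, e^{\mathrm{i}\kappa |x-t|}\, u(t)\, \mathrm{d}t = f(x),
\qquad x \in [a,b], \quad \kappa \gg 1 .
```

The multi-grade deep learning (MGDL) idea of training grade by grade can likewise be illustrated with a minimal, hedged sketch: each grade is a small network trained on the residual left by the previous, frozen grades. The target function, network widths, and optimizer settings below are hypothetical, and the sketch deliberately omits the paper's construction in which each new grade is stacked on features learned by earlier grades.

```python
import torch
import torch.nn as nn

# Hypothetical 1D target mixing low- and high-frequency content (illustration only).
def target(x):
    return torch.sin(2 * torch.pi * x) + 0.3 * torch.sin(40 * torch.pi * x)

x = torch.linspace(0.0, 1.0, 512).unsqueeze(1)
y = target(x)

def make_grade(width=64):
    # One "grade": a small shallow fully connected network.
    return nn.Sequential(nn.Linear(1, width), nn.Tanh(), nn.Linear(width, 1))

grades, residual = [], y.clone()
for _ in range(3):                        # train three grades, one after another
    net = make_grade()
    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        loss = nn.functional.mse_loss(net(x), residual)
        loss.backward()
        opt.step()
    grades.append(net)
    with torch.no_grad():                 # freeze this grade; later grades fit what remains
        residual = residual - net(x)

with torch.no_grad():
    prediction = sum(net(x) for net in grades)   # the multi-grade approximation
```

Each later grade only has to represent what the earlier grades missed, which is the mechanism the abstract credits with mitigating spectral bias.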
Related papers
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural
Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Semi-analytic PINN methods for singularly perturbed boundary value
problems [0.8594140167290099]
We propose a new semi-analytic physics informed neural network (PINN) to solve singularly perturbed boundary value problems.
The PINN is a scientific machine learning framework that offers a promising perspective for finding numerical solutions to partial differential equations.
arXiv Detail & Related papers (2022-08-19T04:26:40Z)
- Physics-Informed Neural Network Method for Parabolic Differential
Equations with Sharply Perturbed Initial Conditions [68.8204255655161]
We develop a physics-informed neural network (PINN) model for parabolic problems with a sharply perturbed initial condition.
Localized large gradients in the ADE solution make Latin hypercube sampling of the equation's residual (the common choice in PINNs) highly inefficient.
We propose criteria for weights in the loss function that produce a more accurate PINN solution than those obtained with the weights selected via other methods.
arXiv Detail & Related papers (2022-08-18T05:00:24Z)
- Sparse Deep Neural Network for Nonlinear Partial Differential Equations [3.0069322256338906]
This paper is devoted to a numerical study of adaptive approximation of solutions of nonlinear partial differential equations.
We develop deep neural networks (DNNs) with a sparse regularization with multiple parameters to represent functions having certain singularities.
Numerical examples confirm that solutions generated by the proposed SDNN are sparse and accurate.
arXiv Detail & Related papers (2022-07-27T03:12:16Z)
- Momentum Diminishes the Effect of Spectral Bias in Physics-Informed
Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z)
- Improved Training of Physics-Informed Neural Networks with Model
Ensembles [81.38804205212425]
We propose to expand the solution interval gradually to make the PINN converge to the correct solution.
All ensemble members converge to the same solution in the vicinity of observed data.
We show experimentally that the proposed method can improve the accuracy of the found solution.
arXiv Detail & Related papers (2022-04-11T14:05:34Z)
- Legendre Deep Neural Network (LDNN) and its application for
approximation of nonlinear Volterra Fredholm Hammerstein integral equations [1.9649448021628986]
We propose the Legendre Deep Neural Network (LDNN) for solving nonlinear Volterra-Fredholm-Hammerstein integral equations (VFHIEs).
We show that using a Gaussian quadrature collocation method in combination with the LDNN yields a novel numerical solution for nonlinear VFHIEs.
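
As a rough illustration of the quadrature ingredient mentioned above (not of the full collocation scheme in the cited paper), a Gauss-Legendre rule can approximate integrals of a network output; u_net, a, b, and n below are hypothetical placeholders.

```python
import numpy as np

def integrate(u_net, a, b, n=16):
    # Gauss-Legendre nodes and weights on [-1, 1].
    nodes, weights = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * nodes + 0.5 * (b + a)      # map nodes from [-1, 1] to [a, b]
    return 0.5 * (b - a) * np.sum(weights * u_net(t))
```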
arXiv Detail & Related papers (2021-06-27T21:00:09Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- Multi-scale Deep Neural Network (MscaleDNN) for Solving
Poisson-Boltzmann Equation in Complex Domains [12.09637784919702]
We propose multi-scale deep neural networks (MscaleDNNs) using the idea of radial scaling in the frequency domain and activation functions with compact support.
As a result, the MscaleDNNs achieve fast uniform convergence over multiple scales.
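
A minimal sketch of the multi-scale idea described above, assuming a sum of subnetworks that each see the input stretched by a different scale factor; the compact-support activation and the scale factors chosen here are illustrative assumptions, not necessarily the choices made in the cited paper.

```python
import torch
import torch.nn as nn

class HatActivation(nn.Module):
    # A compactly supported piecewise-linear "hat" function (an assumed choice;
    # the cited paper may use a different compact-support activation).
    def forward(self, x):
        return torch.relu(x) - 2 * torch.relu(x - 1) + torch.relu(x - 2)

class MscaleNetSketch(nn.Module):
    def __init__(self, scales=(1.0, 2.0, 4.0, 8.0), width=32):
        super().__init__()
        self.scales = scales
        self.subnets = nn.ModuleList([
            nn.Sequential(nn.Linear(1, width), HatActivation(),
                          nn.Linear(width, width), HatActivation(),
                          nn.Linear(width, 1))
            for _ in scales
        ])

    def forward(self, x):
        # Radial scaling: each subnetwork sees the input multiplied by a different
        # factor, so high-frequency content appears at a lower effective frequency.
        return sum(net(s * x) for s, net in zip(self.scales, self.subnets))
```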
arXiv Detail & Related papers (2020-07-22T05:28:03Z)
- Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.