A model-data asymptotic-preserving neural network method based on
micro-macro decomposition for gray radiative transfer equations
- URL: http://arxiv.org/abs/2212.05523v1
- Date: Sun, 11 Dec 2022 15:08:09 GMT
- Title: A model-data asymptotic-preserving neural network method based on
micro-macro decomposition for gray radiative transfer equations
- Authors: Hongyan Li, Song Jiang, Wenjun Sun, Liwei Xu, Guanyu Zhou
- Abstract summary: We propose a model-data asymptotic-preserving neural network (MD-APNN) method to solve the nonlinear gray radiative transfer equations (GRTEs).
- Score: 4.220781196806984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a model-data asymptotic-preserving neural network (MD-APNN)
method to solve the nonlinear gray radiative transfer equations (GRTEs). The
system is challenging to simulate with both traditional numerical schemes and
vanilla physics-informed neural networks (PINNs) because of its multiscale
characteristics. Within the PINN framework, we employ a micro-macro
decomposition technique to construct a new asymptotic-preserving (AP) loss
function, which includes the residual of the governing equations in
micro-macro coupled form, the initial and boundary conditions with additional
diffusion-limit information, the conservation laws, and a small amount of
labeled data. A convergence analysis is performed for the proposed method, and
a number of numerical examples are presented to illustrate the efficiency of
MD-APNNs and, in particular, the importance of the AP property of the neural
networks for diffusion-dominated problems. The numerical results indicate that
MD-APNNs outperform both APNNs and pure data-driven networks in simulating the
nonlinear non-stationary GRTEs.
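The abstract describes a composite PINN-style loss: a PDE residual term, initial/boundary terms carrying diffusion-limit information, and a data term. The sketch below illustrates only that loss structure on a toy diffusion-limit equation u_t = u_xx evaluated by finite differences; the actual MD-APNN loss uses the micro-macro coupled GRTE system and a trained network, so the residual, the Gaussian initial condition, the weights `w`, and the heat-kernel candidate here are all illustrative assumptions.

```python
import numpy as np

def pde_residual(u, x, t):
    # Central differences for u_t and u_xx on the interior grid;
    # stands in for the micro-macro coupled residual of the paper.
    dx, dt = x[1] - x[0], t[1] - t[0]
    U = u(x[:, None], t[None, :])                # grid values, shape (nx, nt)
    u_t = (U[:, 2:] - U[:, :-2]) / (2 * dt)      # shape (nx, nt-2)
    u_xx = (U[2:, :] - 2 * U[1:-1, :] + U[:-2, :]) / dx**2  # shape (nx-2, nt)
    return u_t[1:-1, :] - u_xx[:, 1:-1]          # residual of u_t = u_xx

def ap_loss(u, x, t, data_pts, data_vals, w=(1.0, 1.0, 1.0)):
    # Weighted sum of residual, initial-condition, and labeled-data terms.
    res = pde_residual(u, x, t)
    ic = u(x, np.zeros_like(x)) - np.exp(-x**2)  # assumed Gaussian initial condition
    dat = u(*data_pts) - data_vals               # a few labeled samples
    return w[0] * np.mean(res**2) + w[1] * np.mean(ic**2) + w[2] * np.mean(dat**2)

# Candidate "network output": a shifted heat kernel, which solves u_t = u_xx
# exactly and matches the Gaussian initial condition at t = 0.
u_exact = lambda X, T: np.exp(-X**2 / (1 + 4 * T)) / np.sqrt(1 + 4 * T)
x = np.linspace(-3.0, 3.0, 101)
t = np.linspace(0.01, 1.0, 101)
pts = (np.array([0.0, 1.0]), np.array([0.5, 0.5]))
loss = ap_loss(u_exact, x, t, pts, u_exact(*pts))
```

Because the candidate solves the toy PDE exactly, the loss reduces to finite-difference truncation error and is near zero; a trained network would instead minimize this quantity over its parameters.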
Related papers
- Chebyshev Spectral Neural Networks for Solving Partial Differential Equations [0.0]
The study uses a feedforward neural network model and error backpropagation principles, utilizing automatic differentiation (AD) to compute the loss function.
The numerical efficiency and accuracy of the CSNN model are investigated through tests on elliptic partial differential equations and compared with the well-known Physics-Informed Neural Network (PINN) method.
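The entry above mentions a Chebyshev spectral network. A minimal sketch of the spectral ingredient is a Chebyshev feature map built from the three-term recurrence; how CSNN actually wires these polynomials into its architecture is not stated here, so the feature-map formulation below is an assumption.

```python
import numpy as np

def chebyshev_features(x, n):
    """Evaluate T_0(x)..T_{n-1}(x) on x in [-1, 1] via the recurrence
    T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)."""
    T = np.empty((n, len(x)))
    T[0] = 1.0
    if n > 1:
        T[1] = x
    for k in range(2, n):
        T[k] = 2 * x * T[k - 1] - T[k - 2]
    return T.T  # shape (len(x), n): one feature row per sample

x = np.linspace(-1.0, 1.0, 5)
F = chebyshev_features(x, 4)
# column 2 matches the identity T_2(x) = 2x^2 - 1
```

A network layer would then act on these features instead of raw coordinates, which is what gives spectral methods their rapid convergence on smooth solutions.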
arXiv Detail & Related papers (2024-06-06T05:31:45Z)
- Macroscopic auxiliary asymptotic preserving neural networks for the linear radiative transfer equations [3.585855304503951]
We develop a Macroscopic Auxiliary Asymptotic-Preserving Neural Network (MA-APNN) method to solve the time-dependent linear radiative transfer equations.
We present several numerical examples to demonstrate the effectiveness of MA-APNNs.
arXiv Detail & Related papers (2024-03-04T08:10:42Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues. First, following the ODE produces uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems, but they can become trapped in training failures when the target functions exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
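The key idea of an implicit update is to evaluate the gradient at the *new* iterate, theta_new = theta - lr * grad(theta_new), which is unconditionally stable in step size. The toy quadratic below is a stand-in, not the paper's PINN training procedure: for a quadratic loss the implicit step has a closed-form linear solve, so we can contrast it directly with explicit gradient descent at a step size where the latter diverges.

```python
import numpy as np

A = np.diag([3.0, 100.0])          # ill-conditioned curvature of L = 0.5 x^T A x

def explicit_step(x, lr):
    return x - lr * (A @ x)        # x_new = x - lr * grad L(x)

def implicit_step(x, lr):
    # Implicit update solves x_new = x - lr * grad L(x_new); for a
    # quadratic this is the linear solve x_new = (I + lr*A)^{-1} x.
    return np.linalg.solve(np.eye(2) + lr * A, x)

lr = 0.1                           # lr * lambda_max = 10 > 2: explicit GD blows up
xe = np.array([1.0, 1.0])
xi = np.array([1.0, 1.0])
for _ in range(50):
    xe = explicit_step(xe, lr)
    xi = implicit_step(xi, lr)
# xi contracts toward the minimum; xe diverges along the stiff direction
```

The same stability argument is what motivates implicit updates for the stiff, multi-scale losses that arise in PINN training.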
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- RBF-MGN: Solving spatiotemporal PDEs with Physics-informed Graph Neural Network [4.425915683879297]
We propose a novel framework based on graph neural networks (GNNs) and the radial basis function finite difference (RBF-FD) method.
RBF-FD is used to construct a high-precision difference format of the differential equations to guide model training.
We illustrate the generalizability, accuracy, and efficiency of the proposed algorithms on different PDE parameters.
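RBF-FD, mentioned above, derives finite-difference weights by requiring the stencil to be exact for a radial-basis-function interpolant. The sketch below shows this for a second derivative on a 3-point stencil; the Gaussian RBF, the shape parameter `eps`, and the stencil size are illustrative choices, not the paper's configuration.

```python
import numpy as np

def rbf_fd_weights(nodes, center, eps=0.5):
    # Gaussian RBF and its second x-derivative (r = x - x_j)
    phi = lambda r: np.exp(-(eps * r) ** 2)
    d2phi = lambda r: (4 * eps**4 * r**2 - 2 * eps**2) * np.exp(-(eps * r) ** 2)
    A = phi(nodes[:, None] - nodes[None, :])  # RBF interpolation matrix
    b = d2phi(center - nodes)                 # d2/dx2 of each RBF at the center
    return np.linalg.solve(A, b)              # weights exact for the RBF space

h = 0.1
nodes = np.array([-h, 0.0, h])
w = rbf_fd_weights(nodes, 0.0)

# Apply the stencil to f(x) = sin(x) around x0 = 0.3:
x0 = 0.3
approx = w @ np.sin(x0 + nodes)  # approximates f''(x0) = -sin(x0)
```

For a nearly flat RBF the weights approach the classical [1, -2, 1]/h^2 stencil; on scattered nodes, where classical formulas do not exist, the same linear solve still produces a high-precision difference format, which is what guides the GNN training in RBF-MGN.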
arXiv Detail & Related papers (2022-12-06T10:08:02Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
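Training by implicit differentiation at an equilibrium means computing gradients through the fixed point itself rather than unrolling the forward iterations. The scalar model below is a toy stand-in, not the paper's spiking architecture: a fixed point z* = tanh(w*z* + b) is found by iteration, and dz*/dw follows from the implicit function theorem, checked against finite differences.

```python
import numpy as np

def fixed_point(w, b, z0=0.0, n=200):
    z = z0
    for _ in range(n):            # iterate z <- tanh(w*z + b) to equilibrium
        z = np.tanh(w * z + b)
    return z

def grad_w_implicit(w, b):
    # At equilibrium z* = tanh(w z* + b), the implicit function theorem gives
    # dz*/dw = s z* / (1 - s w), with s = 1 - z*^2 (the tanh derivative).
    z = fixed_point(w, b)
    s = 1.0 - z**2
    return s * z / (1.0 - s * w)

w, b = 0.5, 0.3
g = grad_w_implicit(w, b)

# finite-difference check of the implicit gradient
eps = 1e-6
g_fd = (fixed_point(w + eps, b) - fixed_point(w - eps, b)) / (2 * eps)
```

The appeal is that the gradient depends only on the equilibrium state, so no reverse pass through the forward dynamics is needed, which is the property the paper exploits for SNNs.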
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Parameter Estimation with Dense and Convolutional Neural Networks Applied to the FitzHugh-Nagumo ODE [0.0]
We present deep neural networks using dense and convolutional layers to solve an inverse problem, where we seek to estimate parameters of a FitzHugh-Nagumo model.
We demonstrate that deep neural networks have the potential to estimate parameters in dynamical models and processes, and that they are capable of predicting parameters accurately within this framework.
arXiv Detail & Related papers (2020-12-12T01:20:42Z)
- Estimation of the Mean Function of Functional Data via Deep Neural Networks [6.230751621285321]
We propose a deep neural network method to perform nonparametric regression for functional data.
The proposed method is applied to analyze positron emission tomography images of patients with Alzheimer disease.
arXiv Detail & Related papers (2020-12-08T17:18:16Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
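The min-max formulation above trains two players by simultaneous gradient descent-ascent (GDA). The toy below replaces the paper's neural networks with scalars on an assumed saddle objective f(x, y) = x^2/2 - y^2/2 + xy, whose saddle point is (0, 0), just to show the coupled update rule; it is not the paper's SEM estimator.

```python
import numpy as np

def gda(x, y, lr=0.1, steps=500):
    """Simultaneous gradient descent-ascent on
    f(x, y) = x**2 / 2 - y**2 / 2 + x * y."""
    for _ in range(steps):
        gx = x + y               # df/dx: the min player descends
        gy = -y + x              # df/dy: the max player ascends
        x, y = x - lr * gx, y + lr * gy
    return x, y

x, y = gda(1.0, 1.0)             # both players converge to the saddle at (0, 0)
```

The quadratic regularization terms are what make plain simultaneous GDA contract here; on a purely bilinear game it would cycle, which is why adversarial training often needs such damping or extragradient-style corrections.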
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including the generated summaries) and is not responsible for any consequences of its use.