Macroscopic auxiliary asymptotic preserving neural networks for the linear radiative transfer equations
- URL: http://arxiv.org/abs/2403.01820v1
- Date: Mon, 4 Mar 2024 08:10:42 GMT
- Title: Macroscopic auxiliary asymptotic preserving neural networks for the linear radiative transfer equations
- Authors: Hongyan Li, Song Jiang, Wenjun Sun, Liwei Xu, Guanyu Zhou
- Abstract summary: We develop a Macroscopic Auxiliary Asymptotic-Preserving Neural Network (MA-APNN) method to solve the time-dependent linear radiative transfer equations.
We present several numerical examples to demonstrate the effectiveness of MA-APNNs.
- Score: 3.585855304503951
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop a Macroscopic Auxiliary Asymptotic-Preserving Neural Network
(MA-APNN) method to solve the time-dependent linear radiative transfer
equations (LRTEs), which have a multi-scale nature and high dimensionality. To
achieve this, we utilize the Physics-Informed Neural Networks (PINNs) framework
and design a new adaptive exponentially weighted Asymptotic-Preserving (AP)
loss function, which incorporates the macroscopic auxiliary equation that is
derived from the original transfer equation directly and explicitly contains
the information of the diffusion limit equation. Thus, as the scale parameter
tends to zero, the loss function gradually transitions from the transport state
to the diffusion limit state. In addition, the initial data, boundary
conditions, and conservation laws serve as the regularization terms for the
loss. We present several numerical examples to demonstrate the effectiveness of
MA-APNNs.
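
The abstract's adaptive exponentially weighted AP loss can be caricatured in a few lines of Python. The exponential weight form, the rate parameter `lam`, and the residual inputs below are illustrative assumptions for exposition, not the paper's actual construction; the key property mirrored here is that the loss shifts from the transport residual to the diffusion-limit residual as the scale parameter tends to zero:

```python
import math

def diffusion_weight(eps, lam=10.0):
    # Weight placed on the diffusion-limit residual; tends to 1 as eps -> 0.
    # (Hypothetical exponential form; the paper's adaptive weight is more elaborate.)
    return math.exp(-lam * eps)

def ap_style_loss(transport_res, diffusion_res, eps, lam=10.0):
    """Blend mean-squared transport and diffusion-limit residuals.

    transport_res, diffusion_res: lists of pointwise PDE residuals sampled at
    collocation points (plain numbers here, standing in for network outputs).
    """
    mse = lambda r: sum(x * x for x in r) / len(r)
    w = diffusion_weight(eps, lam)
    return (1.0 - w) * mse(transport_res) + w * mse(diffusion_res)
```

At `eps = 0` the loss reduces exactly to the diffusion-limit residual, while for `eps >> 1` it is essentially the transport residual, which is the transition behavior the abstract describes.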
Related papers
- Coupled Integral PINN for conservation law [1.9720482348156743]
The Physics-Informed Neural Network (PINN) is an innovative approach to solve a diverse array of partial differential equations.
This paper introduces a novel Coupled Integral PINN methodology that fits the integral form of the equations using additional neural networks.
arXiv Detail & Related papers (2024-11-18T04:32:42Z)
- Gradient Flow Based Phase-Field Modeling Using Separable Neural Networks [1.2277343096128712]
We propose a separable neural network-based approximation of the phase field in a minimizing movement scheme to solve a gradient flow problem.
The proposed method outperforms the state-of-the-art machine learning methods for phase separation problems.
arXiv Detail & Related papers (2024-05-09T21:53:27Z)
- A Mean-Field Analysis of Neural Stochastic Gradient Descent-Ascent for Functional Minimax Optimization [90.87444114491116]
This paper studies minimax optimization problems defined over infinite-dimensional function classes of overparameterized two-layer neural networks.
We address (i) the convergence of the gradient descent-ascent algorithm and (ii) the representation learning of the neural networks.
Results show that the feature representation induced by the neural networks is allowed to deviate from the initial one by the magnitude of $O(\alpha^{-1})$, measured in terms of the Wasserstein distance.
arXiv Detail & Related papers (2024-04-18T16:46:08Z)
- Capturing the Diffusive Behavior of the Multiscale Linear Transport Equations by Asymptotic-Preserving Convolutional DeepONets [31.88833218777623]
We introduce two types of novel Asymptotic-Preserving Convolutional Deep Operator Networks (APCONs).
This architecture employs multiple local convolution operations instead of a global heat kernel.
Our APCON methods possess a parameter count that is independent of the grid size and are capable of capturing the diffusive behavior of the linear transport problem.
arXiv Detail & Related papers (2023-06-28T03:16:45Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
To address this, we develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- A model-data asymptotic-preserving neural network method based on micro-macro decomposition for gray radiative transfer equations [4.220781196806984]
We propose a model-data asymptotic-preserving neural network (MD-APNN) method to solve the nonlinear gray radiative transfer equations (GRTEs).
arXiv Detail & Related papers (2022-12-11T15:08:09Z)
- A Functional-Space Mean-Field Theory of Partially-Trained Three-Layer Neural Networks [49.870593940818715]
We study the infinite-width limit of a type of three-layer NN model whose first layer is random and fixed.
Our theory accommodates different scaling choices of the model, resulting in two regimes of the MF limit that demonstrate distinctive behaviors.
arXiv Detail & Related papers (2022-10-28T17:26:27Z)
- Neuro-symbolic partial differential equation solver [0.0]
We present a strategy for developing mesh-free neuro-symbolic partial differential equation solvers from numerical discretizations found in scientific computing.
This strategy is unique in that it can be used to efficiently train neural network surrogate models for the solution functions and the differential operators.
arXiv Detail & Related papers (2022-10-25T22:56:43Z)
- Physics-Informed Neural Network Method for Parabolic Differential Equations with Sharply Perturbed Initial Conditions [68.8204255655161]
We develop a physics-informed neural network (PINN) model for parabolic problems with a sharply perturbed initial condition.
Localized large gradients in the ADE solution make the Latin hypercube sampling of the equation's residual (common in PINNs) highly inefficient.
We propose criteria for weights in the loss function that produce a more accurate PINN solution than those obtained with the weights selected via other methods.
arXiv Detail & Related papers (2022-08-18T05:00:24Z)
- Learning Discriminative Shrinkage Deep Networks for Image Deconvolution [122.79108159874426]
We propose an effective non-blind deconvolution approach by learning discriminative shrinkage functions to implicitly model these terms.
Experimental results show that the proposed method performs favorably against the state-of-the-art ones in terms of efficiency and accuracy.
arXiv Detail & Related papers (2021-11-27T12:12:57Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.