Energy-Dissipative Evolutionary Deep Operator Neural Networks
- URL: http://arxiv.org/abs/2306.06281v1
- Date: Fri, 9 Jun 2023 22:11:16 GMT
- Title: Energy-Dissipative Evolutionary Deep Operator Neural Networks
- Authors: Jiahao Zhang, Shiheng Zhang, Jie Shen, Guang Lin
- Abstract summary: Energy-Dissipative Evolutionary Deep Operator Neural Network is an operator learning neural network.
It is designed to seek numerical solutions for a class of partial differential equations.
- Score: 12.764072441220172
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Energy-Dissipative Evolutionary Deep Operator Neural Network is an operator
learning neural network. It is designed to seek numerical solutions for a class
of partial differential equations instead of a single partial differential
equation, such as partial differential equations with different parameters or
different initial conditions. The network consists of two sub-networks, the
Branch net and the Trunk net. For an objective operator G, the Branch net
encodes different input functions u at the same number of sensors, and the
Trunk net evaluates the output function at any location. By minimizing the
error between the evaluated output q and the expected output G(u)(y), DeepONet
generates a good approximation of the operator G. In order to preserve
essential physical properties of PDEs, such as the Energy Dissipation Law, we
adopt a scalar auxiliary variable approach to generate the minimization
problem. It introduces a modified energy and enables unconditional energy
dissipation law at the discrete level. By treating the parameter as a function of
time t, the network can predict an accurate solution at any future time by
feeding in data only at the initial state. The data needed can be generated from the
initial conditions, which are readily available. In order to validate the
accuracy and efficiency of our neural networks, we provide numerical
simulations of several partial differential equations, including heat
equations, parametric heat equations and Allen-Cahn equations.
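The Branch-Trunk evaluation described in the abstract can be sketched in a few lines. The minimal NumPy example below is illustrative only: the weights are random and untrained, and the sizes (32 sensors, latent width 16) are arbitrary assumptions, chosen purely to show how the evaluated output q(y) ≈ G(u)(y) is formed as an inner product of branch and trunk features.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes):
    # Random weights for a small MLP (illustrative only; in practice
    # these parameters are trained by minimizing |q - G(u)(y)|).
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_apply(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Branch net: encodes the input function u sampled at m fixed sensors.
# Trunk net: encodes the query location y where the output is evaluated.
m, p = 32, 16                  # number of sensors, latent width (assumed)
branch = mlp_init([m, 64, p])
trunk = mlp_init([1, 64, p])

def deeponet(u_sensors, y):
    # q(y) ~ G(u)(y): inner product of branch and trunk features.
    b = mlp_apply(branch, u_sensors)         # shape (p,)
    t = mlp_apply(trunk, np.atleast_2d(y))   # shape (k, p)
    return t @ b                             # shape (k,)

# Example: one input function sampled at 32 sensors, three query points.
u = np.sin(np.linspace(0.0, np.pi, m))
q = deeponet(u, np.array([[0.1], [0.5], [0.9]]))
print(q.shape)  # (3,)
```

The trunk net is evaluated independently per query location, which is what lets a trained DeepONet predict the output function at any location y, not just at a fixed grid.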
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- GIT-Net: Generalized Integral Transform for Operator Learning [58.13313857603536]
This article introduces GIT-Net, a deep neural network architecture for approximating Partial Differential Equation (PDE) operators.
GIT-Net harnesses the fact that differential operators commonly used for defining PDEs can often be represented parsimoniously when expressed in specialized functional bases.
Numerical experiments demonstrate that GIT-Net is a competitive neural network operator, exhibiting small test errors and low evaluation costs across a range of PDE problems.
arXiv Detail & Related papers (2023-12-05T03:03:54Z)
- PROSE: Predicting Operators and Symbolic Expressions using Multimodal Transformers [5.263113622394007]
We develop a new neural network framework for predicting differential equations.
By using a transformer structure and a feature fusion approach, our network can simultaneously embed sets of solution operators for various parametric differential equations.
The network is shown to handle noise in the data and errors in the symbolic representation, including noisy numerical values, model misspecification, and erroneous addition or deletion of terms.
arXiv Detail & Related papers (2023-09-28T19:46:07Z)
- PI-VEGAN: Physics Informed Variational Embedding Generative Adversarial Networks for Stochastic Differential Equations [14.044012646069552]
We present a new category of physics-informed neural networks called the physics-informed variational embedding generative adversarial network (PI-VEGAN).
PI-VEGAN effectively tackles forward, inverse, and mixed problems of stochastic differential equations.
We evaluate its effectiveness in addressing forward, inverse, and mixed problems that require the concurrent calculation of system parameters and solutions.
arXiv Detail & Related papers (2023-07-21T01:18:02Z)
- Spectral-Bias and Kernel-Task Alignment in Physically Informed Neural Networks [4.604003661048267]
Physically informed neural networks (PINNs) are a promising emerging method for solving differential equations.
We propose a comprehensive theoretical framework that sheds light on this important problem.
We derive an integro-differential equation that governs PINN prediction in the large data-set limit.
arXiv Detail & Related papers (2023-07-12T18:00:02Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- A PINN Approach to Symbolic Differential Operator Discovery with Sparse Data [0.0]
In this work we perform symbolic discovery of differential operators in a situation where there is sparse experimental data.
We modify the PINN approach by adding a neural network that learns a representation of unknown hidden terms in the differential equation.
The algorithm yields both a surrogate solution to the differential equation and a black-box representation of the hidden terms.
arXiv Detail & Related papers (2022-12-09T02:09:37Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced-order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced-order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Learning via nonlinear conjugate gradients and depth-varying neural ODEs [5.565364597145568]
The inverse problem of supervised reconstruction of depth-variable parameters in a neural ordinary differential equation (NODE) is considered.
The proposed parameter reconstruction is done for a general first-order differential equation by minimizing a cost functional.
The sensitivity problem can estimate changes in the network output under perturbation of the trained parameters.
arXiv Detail & Related papers (2022-02-11T17:00:48Z)
- Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
arXiv Detail & Related papers (2021-11-27T03:34:13Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm that our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.