Variationally Mimetic Operator Networks
- URL: http://arxiv.org/abs/2209.12871v3
- Date: Tue, 29 Aug 2023 19:21:50 GMT
- Title: Variationally Mimetic Operator Networks
- Authors: Dhruv Patel, Deep Ray, Michael R. A. Abdelmalik, Thomas J. R. Hughes,
Assad A. Oberai
- Abstract summary: This work describes a new architecture for operator networks that mimics the form of the numerical solution obtained from an approximate variational or weak formulation of the problem.
The application of these ideas to a generic elliptic PDE leads to a variationally mimetic operator network (VarMiON).
An analysis of the error in the VarMiON solution reveals that it contains contributions from the error in the training data, the training error, the quadrature error in sampling input and output functions, and a "covering error" that measures the distance between the test input functions and the nearest functions in the training dataset.
- Score: 1.7667202894248826
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years operator networks have emerged as promising deep learning
tools for approximating the solution to partial differential equations (PDEs).
These networks map input functions that describe material properties, forcing
functions and boundary data to the solution of a PDE. This work describes a new
architecture for operator networks that mimics the form of the numerical
solution obtained from an approximate variational or weak formulation of the
problem. The application of these ideas to a generic elliptic PDE leads to a
variationally mimetic operator network (VarMiON). Like the conventional Deep
Operator Network (DeepONet), the VarMiON is also composed of a sub-network that
constructs the basis functions for the output and another that constructs the
coefficients for these basis functions. However, in contrast to the DeepONet,
the architecture of these sub-networks in the VarMiON is precisely determined.
An analysis of the error in the VarMiON solution reveals that it contains
contributions from the error in the training data, the training error, the
quadrature error in sampling input and output functions, and a "covering error"
that measures the distance between the test input functions and the nearest
functions in the training dataset. It also depends on the stability constants
for the exact solution operator and its VarMiON approximation. The application
of the VarMiON to a canonical elliptic PDE and a nonlinear PDE reveals that for
approximately the same number of network parameters, on average the VarMiON
incurs smaller errors than a standard DeepONet and a recently proposed
multiple-input operator network (MIONet). Further, its performance is more
robust to variations in input functions, the techniques used to sample the
input and output functions, the techniques used to construct the basis
functions, and the number of input functions.
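The architecture described in the abstract (one sub-network producing output basis functions, another producing their coefficients, with the prediction formed from their combination) can be illustrated with a minimal PyTorch sketch. All names, layer sizes, and the simple concatenation-based coefficient network below are assumptions made for illustration; the paper derives the precise structure of these sub-networks from the discrete variational form, which this sketch does not attempt to reproduce.

```python
import torch
import torch.nn as nn

class VarMiONSketch(nn.Module):
    """Hypothetical DeepONet/VarMiON-style operator network.

    One sub-network ("branch") maps sampled input functions (material
    property theta, forcing f, boundary data g) to coefficients; another
    ("trunk") maps an output point x in R^2 to basis-function values.
    The predicted solution is the inner product of the two.
    """

    def __init__(self, n_theta, n_f, n_g, n_basis=64, width=128):
        super().__init__()
        # Coefficient sub-network: sampled inputs -> n_basis coefficients.
        self.branch = nn.Sequential(
            nn.Linear(n_theta + n_f + n_g, width), nn.Tanh(),
            nn.Linear(width, n_basis),
        )
        # Basis sub-network: output point x -> n_basis basis values.
        self.trunk = nn.Sequential(
            nn.Linear(2, width), nn.Tanh(),
            nn.Linear(width, n_basis),
        )

    def forward(self, theta, f, g, x):
        # theta: (B, n_theta), f: (B, n_f), g: (B, n_g), x: (B, n_pts, 2)
        coeffs = self.branch(torch.cat([theta, f, g], dim=-1))  # (B, p)
        basis = self.trunk(x)                                   # (B, n_pts, p)
        return torch.einsum("bp,bnp->bn", coeffs, basis)        # (B, n_pts)


if __name__ == "__main__":
    model = VarMiONSketch(n_theta=50, n_f=50, n_g=20)
    u = model(torch.randn(4, 50), torch.randn(4, 50), torch.randn(4, 20),
              torch.rand(4, 100, 2))
    print(u.shape)  # torch.Size([4, 100])
```

In this layout the branch plays the role of the coefficient network and the trunk the role of the basis network; training would regress the output against sampled solutions of the PDE for many input-function instances.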
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- DeltaPhi: Learning Physical Trajectory Residual for PDE Solving [54.13671100638092]
We propose and formulate Physical Trajectory Residual Learning (DeltaPhi).
We learn the surrogate model for the residual operator mapping based on existing neural operator networks.
We conclude that, compared to direct learning, physical residual learning is preferred for PDE solving.
arXiv Detail & Related papers (2024-06-14T07:45:07Z)
- Neural Green's Operators for Parametric Partial Differential Equations [0.0]
This work introduces neural Green's operators (NGOs), a novel neural operator network architecture that learns the solution operator for a parametric family of linear partial differential equations (PDEs).
NGOs are similar to deep operator networks (DeepONets) and variationally mimetic operator networks (VarMiONs).
arXiv Detail & Related papers (2024-06-04T00:02:52Z)
- D2NO: Efficient Handling of Heterogeneous Input Function Spaces with Distributed Deep Neural Operators [7.119066725173193]
We propose a novel distributed approach to deal with input functions that exhibit heterogeneous properties.
A central neural network is used to handle shared information across all output functions.
We demonstrate that the corresponding neural network is a universal approximator of continuous nonlinear operators.
arXiv Detail & Related papers (2023-10-29T03:29:59Z)
- Energy-Dissipative Evolutionary Deep Operator Neural Networks [12.764072441220172]
Energy-Dissipative Evolutionary Deep Operator Neural Network is an operator learning neural network.
It is designed to seek numerical solutions for a class of partial differential equations.
arXiv Detail & Related papers (2023-06-09T22:11:16Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families (a schematic sketch of this data-plus-residual loss appears after this list).
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- The Random Feature Model for Input-Output Maps between Banach Spaces [6.282068591820945]
The random feature model is a parametric approximation to kernel interpolation or regression methods.
We propose a methodology for use of the random feature model as a data-driven surrogate for operators that map an input Banach space to an output Banach space.
arXiv Detail & Related papers (2020-05-20T17:41:40Z)
- Solving inverse-PDE problems with physics-aware neural networks [0.0]
We propose a novel framework to find unknown fields in the context of inverse problems for partial differential equations.
We blend the high expressibility of deep neural networks as universal function estimators with the accuracy and reliability of existing numerical algorithms.
arXiv Detail & Related papers (2020-01-10T18:46:50Z)
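As a concrete illustration of the hybrid idea in the PINO entry above, the following is a schematic loss that combines a data misfit with a PDE-residual penalty. It is a sketch only: the equation (a 1D Poisson problem -u'' = f on a uniform grid, differentiated with central differences), the function name, and the weights are assumptions; PINO itself targets other PDE families and uses Fourier-based differentiation.

```python
import torch

def hybrid_operator_loss(u_pred, u_data, f, dx, w_data=1.0, w_pde=1.0):
    """Schematic hybrid loss: data misfit plus a PDE-residual penalty.

    u_pred, u_data, f: tensors of shape (batch, n_grid) on a uniform grid
    with spacing dx. The PDE constraint here is the finite-difference
    residual of -u'' = f on the interior points (illustrative choice).
    """
    data_loss = torch.mean((u_pred - u_data) ** 2)
    # Second derivative by central differences on interior points.
    u_xx = (u_pred[:, 2:] - 2.0 * u_pred[:, 1:-1] + u_pred[:, :-2]) / dx**2
    pde_residual = -u_xx - f[:, 1:-1]
    pde_loss = torch.mean(pde_residual ** 2)
    return w_data * data_loss + w_pde * pde_loss


if __name__ == "__main__":
    # Example usage with random tensors on a grid of 64 points.
    loss = hybrid_operator_loss(torch.randn(8, 64), torch.randn(8, 64),
                                torch.randn(8, 64), dx=1.0 / 63)
    print(loss.item())
```

The two weights control the trade-off between fitting available solution data and enforcing the governing equation, which is the essence of the hybrid approach described in that entry.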