Fourier Continuation for Exact Derivative Computation in
Physics-Informed Neural Operators
- URL: http://arxiv.org/abs/2211.15960v1
- Date: Tue, 29 Nov 2022 06:37:54 GMT
- Title: Fourier Continuation for Exact Derivative Computation in
Physics-Informed Neural Operators
- Authors: Haydn Maust, Zongyi Li, Yixuan Wang, Daniel Leibovici, Oscar Bruno,
Thomas Hou, Anima Anandkumar
- Abstract summary: The physics-informed neural operator (PINO) is a machine learning architecture that has shown promising empirical results for learning partial differential equations.
We present an architecture that leverages Fourier continuation (FC) to apply the exact gradient method to PINO for nonperiodic problems.
- Score: 53.087564562565774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The physics-informed neural operator (PINO) is a machine learning
architecture that has shown promising empirical results for learning partial
differential equations. PINO uses the Fourier neural operator (FNO)
architecture to overcome the optimization challenges often faced by
physics-informed neural networks. Since the convolution operator in PINO uses
the Fourier series representation, its gradient can be computed exactly in
Fourier space. While a Fourier series cannot represent nonperiodic functions,
PINO and FNO still have the expressivity to learn nonperiodic problems with
Fourier extension via padding. However, computing the Fourier extension in the
physics-informed optimization requires solving an ill-conditioned system,
resulting in inaccurate derivatives which prevent effective optimization. In
this work, we present an architecture that leverages Fourier continuation (FC)
to apply the exact gradient method to PINO for nonperiodic problems. This paper
investigates three different ways that FC can be incorporated into PINO by
testing their performance on a 1D blowup problem. Experiments show that FC-PINO
outperforms padded PINO, improving equation loss by several orders of
magnitude, and it can accurately capture the third-order derivatives of
nonsmooth solution functions.
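To make the exact-gradient idea concrete: a function represented by its Fourier series can be differentiated exactly by multiplying the coefficient of mode k by ik, and a nonperiodic function must first be continued to a periodic one before this applies. The NumPy sketch below illustrates both points. It is a minimal toy, not the paper's FC-PINO or FC(Gram) construction; the function names, the cosine-blend bridge, and the x^2 test case are all illustrative assumptions.

import numpy as np

def spectral_derivative(f, length):
    # Differentiate periodic samples f on [0, length): multiply the k-th
    # Fourier coefficient by i*k. Exact for band-limited periodic functions.
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=length / n)  # angular wavenumbers
    return np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

def continue_and_differentiate(f, length, n_ext=32):
    # Toy continuation (illustrative, NOT the paper's FC(Gram) method):
    # append a smooth cosine bridge carrying f from its last sample back to
    # its first, so the extended signal is periodic; differentiate
    # spectrally, then discard the bridge.
    n = f.size
    t = np.linspace(0.0, 1.0, n_ext, endpoint=False)
    blend = 0.5 * (1.0 + np.cos(np.pi * t))            # ramps 1 -> 0
    bridge = blend * f[-1] + (1.0 - blend) * f[0]
    f_ext = np.concatenate([f, bridge])
    return spectral_derivative(f_ext, length * f_ext.size / n)[:n]

# Hypothetical test: f(x) = x^2 on [0, 1) is nonperiodic; exact derivative 2x.
n = 128
x = np.linspace(0.0, 1.0, n, endpoint=False)
f = x ** 2
mid = slice(n // 4, 3 * n // 4)                        # interior points
err_naive = np.abs(spectral_derivative(f, 1.0) - 2 * x)[mid].max()
err_cont = np.abs(continue_and_differentiate(f, 1.0) - 2 * x)[mid].max()
print(f"naive interior error:     {err_naive:.3e}")
print(f"continued interior error: {err_cont:.3e}")

On this test, the naive spectral derivative of the periodized x^2 suffers large Gibbs-type errors even away from the boundary, while the continued version is accurate to within a few digits in the interior. The FC(Gram) extensions used in the paper match several endpoint derivatives rather than only function values, which is what yields the much higher accuracy reported for FC-PINO.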
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to a 48% performance gain on PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Invertible Fourier Neural Operators for Tackling Both Forward and Inverse Problems [18.48295539583625]
We propose an invertible Fourier Neural Operator (iFNO) that tackles both the forward and inverse problems.
We integrate a variational auto-encoder to capture the intrinsic structures within the input space and to enable posterior inference.
The evaluations on five benchmark problems have demonstrated the effectiveness of our approach.
arXiv Detail & Related papers (2024-02-18T22:16:43Z)
- Fourier-MIONet: Fourier-enhanced multiple-input neural operators for multiphase modeling of geological carbon sequestration [3.3058870667947646]
Multiphase flow in porous media is essential to understanding CO$_2$ migration and pressure fields in the subsurface associated with geological carbon sequestration (GCS).
Here, we develop a Fourier-enhanced multiple-input neural operator (Fourier-MIONet) to learn the solution operator of the problem of multiphase flow in porous media.
Compared to the enhanced FNO (U-FNO), the proposed Fourier-MIONet has 90% fewer unknown parameters, and it can be trained in significantly less time.
arXiv Detail & Related papers (2023-03-08T18:20:56Z)
- Incremental Spatial and Spectral Learning of Neural Operators for Solving Large-Scale PDEs [86.35471039808023]
We introduce the Incremental Fourier Neural Operator (iFNO), which progressively increases the number of frequency modes used by the model.
We show that iFNO reduces total training time while maintaining or improving generalization performance across various datasets.
Our method achieves 10% lower testing error using 20% fewer frequency modes than the existing Fourier Neural Operator, while also training 30% faster.
arXiv Detail & Related papers (2022-11-28T09:57:15Z)
- Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
arXiv Detail & Related papers (2021-11-27T03:34:13Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- Learning in Sinusoidal Spaces with Physics-Informed Neural Networks [22.47355575565345]
A physics-informed neural network (PINN) uses physics-augmented loss functions to ensure its output is consistent with fundamental physics laws.
It turns out to be difficult to train an accurate PINN model for many problems in practice.
arXiv Detail & Related papers (2021-09-20T07:42:41Z)
- Learning Set Functions that are Sparse in Non-Orthogonal Fourier Bases [73.53227696624306]
We present a new family of algorithms for learning Fourier-sparse set functions.
In contrast to other work that focused on the Walsh-Hadamard transform, our novel algorithms operate with recently introduced non-orthogonal Fourier transforms.
We demonstrate effectiveness on several real-world applications.
arXiv Detail & Related papers (2020-10-01T14:31:59Z)
- Fourier Neural Networks as Function Approximators and Differential Equation Solvers [0.456877715768796]
The choice of activation and loss function yields results that closely replicate a Fourier series expansion.
We validate this FNN on naturally periodic smooth functions and on piecewise continuous periodic functions.
The main advantages of the current approach are the validity of the solution outside the training region, interpretability of the trained model, and simplicity of use.
arXiv Detail & Related papers (2020-05-27T00:30:58Z)