Invertible Fourier Neural Operators for Tackling Both Forward and
Inverse Problems
- URL: http://arxiv.org/abs/2402.11722v1
- Date: Sun, 18 Feb 2024 22:16:43 GMT
- Title: Invertible Fourier Neural Operators for Tackling Both Forward and
Inverse Problems
- Authors: Da Long and Shandian Zhe
- Abstract summary: We propose an invertible Fourier Neural Operator (iFNO) that tackles both the forward and inverse problems.
We integrated a variational auto-encoder to capture the intrinsic structures within the input space and to enable posterior inference.
The evaluations on five benchmark problems have demonstrated the effectiveness of our approach.
- Score: 18.48295539583625
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fourier Neural Operator (FNO) is a popular operator learning method, which
has demonstrated state-of-the-art performance across many tasks. However, FNO
is mainly used in forward prediction, yet a large family of applications rely
on solving inverse problems. In this paper, we propose an invertible Fourier
Neural Operator (iFNO) that tackles both the forward and inverse problems. We
designed a series of invertible Fourier blocks in the latent channel space to
share model parameters, efficiently exchange information, and mutually
regularize learning for the bi-directional tasks. We integrated a
variational auto-encoder to capture the intrinsic structures within the input
space and to enable posterior inference, overcoming challenges such as
ill-posedness, data shortage, and noise. We developed a three-step
pre-training and fine-tuning process for efficient training. The evaluations on five
benchmark problems have demonstrated the effectiveness of our approach.
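The abstract does not detail the block design; the following is a minimal sketch, assuming an additive coupling layer (RealNVP-style) over a channel split with an FNO spectral convolution as the coupling function. Class names and the 1D setting are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """FNO-style spectral convolution: learned complex weights applied
    to the lowest `modes` Fourier coefficients."""

    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (channels * channels)
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x):  # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, : self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))


class InvertibleFourierBlock(nn.Module):
    """Additive coupling over a channel split:
        y1 = x1, y2 = x2 + F(x1)  =>  x1 = y1, x2 = y2 - F(y1),
    so the block is invertible by construction."""

    def __init__(self, channels, modes):
        super().__init__()
        assert channels % 2 == 0
        self.f = SpectralConv1d(channels // 2, modes)

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        return torch.cat([x1, x2 + self.f(x1)], dim=1)

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y1, y2 - self.f(y1)], dim=1)
```

Because the coupling is additive, `block.inverse(block(x))` recovers `x` up to floating-point error, which is the property that would let the forward and inverse tasks share one set of parameters.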
Related papers
- End-to-End Meta-Bayesian Optimisation with Transformer Neural Processes [52.818579746354665]
This paper proposes the first end-to-end differentiable meta-BO framework that generalises neural processes to learn acquisition functions via transformer architectures.
We enable this end-to-end framework with reinforcement learning (RL) to tackle the lack of labelled acquisition data.
arXiv Detail & Related papers (2023-05-25T10:58:46Z)
- Fourier Continuation for Exact Derivative Computation in Physics-Informed Neural Operators [53.087564562565774]
The physics-informed neural operator (PINO) is a machine learning architecture that has shown promising empirical results for learning partial differential equations.
We present an architecture that leverages Fourier continuation (FC) to apply the exact gradient method to PINO for nonperiodic problems.
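As a toy illustration of the idea (using a crude even-reflection extension, not the FC(Gram) construction from the paper): extending a nonperiodic sample into a periodic one removes the boundary jump that otherwise ruins the FFT derivative.

```python
import numpy as np

def fourier_derivative(f_vals, period):
    """Differentiate a periodic sample via FFT: multiply by 2*pi*i*k."""
    n = f_vals.size
    k = 2j * np.pi * np.fft.fftfreq(n, d=period / n)
    return np.fft.ifft(k * np.fft.fft(f_vals)).real

n = 256
x = np.linspace(0.0, 1.0, n, endpoint=False)
f = np.exp(x)  # nonperiodic on [0, 1)

# Naive spectral derivative: the implicit jump at the boundary
# produces a large error.
naive = fourier_derivative(f, period=1.0)

# Even-reflection continuation: the extended signal is continuous
# and 2-periodic, so the boundary error shrinks dramatically.
f_ext = np.concatenate([f, f[::-1]])
cont = fourier_derivative(f_ext, period=2.0)[:n]

true = np.exp(x)
print("naive max error:    ", np.abs(naive - true).max())
print("continued max error:", np.abs(cont - true).max())
```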
arXiv Detail & Related papers (2022-11-29T06:37:54Z)
- Incremental Spatial and Spectral Learning of Neural Operators for Solving Large-Scale PDEs [86.35471039808023]
We introduce the Incremental Fourier Neural Operator (iFNO), which progressively increases the number of frequency modes used by the model.
We show that iFNO reduces total training time while maintaining or improving generalization performance across various datasets.
Our method achieves 10% lower testing error while using 20% fewer frequency modes than the existing Fourier Neural Operator, and trains 30% faster.
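A minimal sketch of the incremental idea, assuming a fixed growth schedule (the paper grows modes adaptively during training; all names here are illustrative):

```python
import torch
import torch.nn as nn


class IncrementalSpectralConv1d(nn.Module):
    """Spectral convolution that only mixes the first `active_modes`
    frequencies; raising `active_modes` during training lets the model
    start with low frequencies and refine incrementally."""

    def __init__(self, channels, max_modes, start_modes=2):
        super().__init__()
        self.max_modes = max_modes
        self.active_modes = start_modes
        scale = 1.0 / (channels * channels)
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, max_modes, dtype=torch.cfloat)
        )

    def forward(self, x):  # x: (batch, channels, grid)
        m = self.active_modes
        x_ft = torch.fft.rfft(x)
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, :m] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, :m], self.weights[:, :, :m]
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))


def grow_modes(layer, epoch, grow_every=10, start_modes=2):
    """Toy schedule: unlock one extra mode every `grow_every` epochs."""
    layer.active_modes = min(layer.max_modes, start_modes + epoch // grow_every)
```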
arXiv Detail & Related papers (2022-11-28T09:57:15Z)
- Improving the Robustness of Neural Multiplication Units with Reversible Stochasticity [2.4278445972594525]
Multilayer Perceptrons struggle to learn certain simple arithmetic tasks, converging to poor optima.
The stochastic NMU (sNMU) is proposed, which applies reversible stochasticity to encourage the avoidance of such optima.
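A sketch of how reversible stochasticity could wrap a multiplication unit, inferred from the title and abstract alone: multiplicative noise perturbs the inputs, and the same noise is divided back out through the unit's gating, so an ideally-weighted unit still computes the exact product while suboptimal weights feel the perturbation. The exact sNMU formulation is in the paper; treat this as an assumption.

```python
import torch

def noisy_nmu(x, w, eps=1e-8):
    """An NMU computes prod(w * x + 1 - w). Here random noise r scales
    the inputs and is cancelled by dividing out the same gate applied
    to r, which is exact when w is 0 or 1 (the NMU's intended regime)."""
    r = torch.rand_like(x) + 1.0      # noise in [1, 2)
    noisy = (x * r) * w + 1.0 - w     # gated, noise-scaled inputs
    denom = r * w + 1.0 - w           # the same gate applied to the noise
    return (noisy / (denom + eps)).prod(dim=-1)
```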
arXiv Detail & Related papers (2022-11-10T14:56:37Z)
- Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
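The factorization can be sketched as independent spectral transforms along each spatial dimension with separate weights, summed, instead of one joint weight tensor over all mode combinations. The 2D class below is illustrative only; the actual F-FNO also reworks the residual structure.

```python
import torch
import torch.nn as nn


class FactorizedSpectralConv2d(nn.Module):
    """Per-dimension spectral weights: parameter count scales with the
    number of modes per axis rather than with their product."""

    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes  # must not exceed grid_size // 2 + 1 per axis
        scale = 1.0 / (channels * channels)
        self.wx = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))
        self.wy = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):  # x: (batch, channels, H, W)
        b, c, h, w = x.shape
        m = self.modes
        # Mix along the last (W) dimension only.
        x_ft = torch.fft.rfft(x, dim=-1)
        out_ft = torch.zeros_like(x_ft)
        out_ft[..., :m] = torch.einsum("bixm,iom->boxm", x_ft[..., :m], self.wx)
        out_w = torch.fft.irfft(out_ft, n=w, dim=-1)
        # Mix along the H dimension only.
        y_ft = torch.fft.rfft(x, dim=-2)
        out_yft = torch.zeros_like(y_ft)
        out_yft[..., :m, :] = torch.einsum(
            "bimy,iom->bomy", y_ft[..., :m, :], self.wy)
        out_h = torch.fft.irfft(out_yft, n=h, dim=-2)
        return out_w + out_h
```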
arXiv Detail & Related papers (2021-11-27T03:34:13Z)
- Adaptive Fourier Neural Operators: Efficient Token Mixers for Transformers [55.90468016961356]
We propose an efficient token mixer that learns to mix in the Fourier domain.
AFNO is based on a principled foundation of operator learning.
It can handle a sequence size of 65k and outperforms other efficient self-attention mechanisms.
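The core pattern, FFT over the tokens, a learned per-frequency mixing, then the inverse FFT, can be sketched as below. The actual AFNO applies a block-diagonal two-layer MLP with soft-thresholding to the modes; this simplified sketch keeps only a single shared complex linear map.

```python
import torch
import torch.nn as nn


class FourierTokenMixer(nn.Module):
    """Mix tokens through their Fourier coefficients along the sequence
    dimension: O(N log N) instead of self-attention's O(N^2)."""

    def __init__(self, dim):
        super().__init__()
        self.w_re = nn.Parameter(torch.randn(dim, dim) / dim)
        self.w_im = nn.Parameter(torch.randn(dim, dim) / dim)

    def forward(self, x):  # x: (batch, seq, dim)
        x_ft = torch.fft.rfft(x, dim=1)          # FFT over the token axis
        w = torch.complex(self.w_re, self.w_im)
        x_ft = x_ft @ w                          # per-frequency channel mixing
        return torch.fft.irfft(x_ft, n=x.size(1), dim=1)
```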
arXiv Detail & Related papers (2021-11-24T05:44:31Z)
- Choose a Transformer: Fourier or Galerkin [0.0]
We apply the self-attention mechanism from the Transformer in Attention Is All You Need to a data-driven operator learning problem.
We show that softmax normalization in the scaled dot-product attention is sufficient but not necessary, and prove the approximation capacity of a softmax-free linear variant as a Petrov-Galerkin projection.
We present three operator learning experiments, including the viscid Burgers' equation, an interface Darcy flow, and an inverse interface coefficient identification problem.
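The softmax-free linear variant reassociates the attention product so the cost is linear in sequence length; in the paper's Galerkin-type attention, layer normalization on the keys and values stands in for the softmax:

```python
import torch
import torch.nn.functional as F

def galerkin_attention(q, k, v):
    """Softmax-free attention: computing (k^T v) first makes the cost
    linear, not quadratic, in sequence length n."""
    k = F.layer_norm(k, k.shape[-1:])  # normalization replaces softmax
    v = F.layer_norm(v, v.shape[-1:])
    n = q.size(1)                      # q, k, v: (batch, seq, dim)
    return q @ (k.transpose(1, 2) @ v) / n
```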
arXiv Detail & Related papers (2021-05-31T14:30:53Z)
- Deep Feedback Inverse Problem Solver [141.26041463617963]
We present an efficient, effective, and generic approach towards solving inverse problems.
We leverage the feedback signal provided by the forward process and learn an iterative update model.
Our approach does not have any restrictions on the forward process; it does not require any prior knowledge either.
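The loop can be sketched as follows, where `forward_op` is the black-box forward process and `update_net` is a learned correction network; both signatures are illustrative assumptions.

```python
import torch

def feedback_solve(forward_op, update_net, y_obs, x0, steps=10):
    """Refine an inverse-problem estimate by feeding the mismatch between
    the simulated and observed measurements back into a learned update."""
    x = x0
    for _ in range(steps):
        y_pred = forward_op(x)                  # run the forward process
        x = x + update_net(x, y_pred, y_obs)    # learned correction step
    return x
```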
arXiv Detail & Related papers (2021-01-19T16:49:06Z)