Bounding The Rademacher Complexity of Fourier Neural Operator
- URL: http://arxiv.org/abs/2209.05150v1
- Date: Mon, 12 Sep 2022 11:11:43 GMT
- Title: Bounding The Rademacher Complexity of Fourier Neural Operator
- Authors: Taeyoung Kim and Myungjoo Kang
- Abstract summary: A Fourier neural operator (FNO) is a physics-inspired machine learning method.
In this study, we investigated bounds on the Rademacher complexity of the FNO based on specific group norms.
In addition, we investigated the correlation between the empirical generalization error and the proposed capacity of the FNO.
- Score: 3.4960814625958787
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A Fourier neural operator (FNO) is a physics-inspired machine
learning method; in particular, it is a neural operator. Several types of
neural operators have been developed in recent years, e.g., deep operator
networks, GNO, and MWTO. Compared with other models, the FNO is computationally
efficient and can learn nonlinear operators between function spaces without
relying on a particular finite basis. In this study, we investigated bounds on
the Rademacher complexity of the FNO based on specific group norms. Using a
capacity measure built from these norms, we bounded the generalization error of
the FNO model. In addition, we investigated the correlation between the
empirical generalization error and the proposed capacity of the FNO. Based on
this investigation, we gained insight into the impact of the model architecture
on the generalization error and estimated how much information about FNO models
is captured by the various types of capacity.
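For reference, the quantity being bounded is the empirical Rademacher complexity of a hypothesis class $\mathcal{F}$ on a sample $S = (x_1, \dots, x_n)$. The standard textbook definition and its link to the generalization gap are sketched below; the paper's group-norm-specific capacity bounds for the FNO are not reproduced here.

$$
\hat{\mathfrak{R}}_S(\mathcal{F}) = \mathbb{E}_{\sigma}\left[ \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(x_i) \right], \qquad \sigma_i \stackrel{\text{i.i.d.}}{\sim} \mathrm{Unif}\{\pm 1\},
$$

and, for $f$ taking values in $[0, 1]$, with probability at least $1 - \delta$ over $S$,

$$
\mathbb{E}[f(x)] \le \frac{1}{n} \sum_{i=1}^{n} f(x_i) + 2\,\hat{\mathfrak{R}}_S(\mathcal{F}) + 3 \sqrt{\frac{\ln(2/\delta)}{2n}}.
$$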
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to a 48% performance gain on the PDE datasets tested.
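The summary names the ProdLayer but not its internals. As an explicitly hypothetical sketch, the version below realizes "products of quantities" as a channel-wise product of two linear projections, appended to the features; the actual DimOL design may differ.

```python
# Hypothetical ProdLayer sketch (internals assumed, not taken from the paper):
# dimensional analysis motivates product terms, realized here as the
# channel-wise product of two linear projections of the input features.
import torch
import torch.nn as nn

class ProdLayer(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.proj_a = nn.Linear(channels, channels)
        self.proj_b = nn.Linear(channels, channels)
        self.mix = nn.Linear(2 * channels, channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (..., channels); the product term models dimensional interactions
        prod = self.proj_a(x) * self.proj_b(x)
        return self.mix(torch.cat([x, prod], dim=-1))
```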
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Component Fourier Neural Operator for Singularly Perturbed Differential Equations [3.9482103923304877]
Solving Singularly Perturbed Differential Equations (SPDEs) poses computational challenges arising from the rapid transitions in their solutions within thin regions.
In this manuscript, we introduce the Component Fourier Neural Operator (ComFNO), an operator learning method that builds upon the Fourier Neural Operator (FNO).
Our approach is not limited to the FNO and can be applied to other neural network frameworks, such as the Deep Operator Network (DeepONet).
arXiv Detail & Related papers (2024-09-07T09:40:51Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Nonlocality and Nonlinearity Implies Universality in Operator Learning [8.83910715280152]
Neural operator architectures approximate operators between infinite-dimensional Banach spaces of functions.
It is clear that any general approximation of operators between spaces of functions must be both nonlocal and nonlinear.
We show how these two attributes may be combined in a simple way to deduce universal approximation.
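To make the two ingredients concrete, here is a minimal sketch of a single layer that is nonlocal (every point is coupled to every other through a spatial mean) and nonlinear (a pointwise activation). It illustrates the idea only and is not the paper's construction.

```python
# Minimal nonlocal + nonlinear layer: a pointwise linear map plus a term
# driven by the spatial mean (nonlocal), passed through ReLU (nonlinear).
import torch
import torch.nn as nn

class NonlocalNonlinearLayer(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.local = nn.Linear(channels, channels)      # pointwise part
        self.global_ = nn.Linear(channels, channels)    # acts on the mean

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, grid_points, channels)
        mean = u.mean(dim=1, keepdim=True)              # nonlocal coupling
        return torch.relu(self.local(u) + self.global_(mean))
```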
arXiv Detail & Related papers (2023-04-26T01:03:11Z)
- Resolution-Invariant Image Classification based on Fourier Neural Operators [1.3190581566723918]
We investigate the use of Fourier Neural Operators (FNOs) for image classification in comparison to standard Convolutional Neural Networks (CNNs).
We derive the FNO architecture as an example of continuous and Fréchet-differentiable neural operators on Lebesgue spaces.
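The resolution invariance comes from the FNO's spectral convolution, which learns weights on a fixed number of Fourier modes rather than on grid points, so the same layer accepts inputs at any resolution. Below is a minimal 1-D sketch of this standard layer, not the cited paper's code.

```python
# Standard FNO spectral convolution (1-D sketch): FFT, multiply a fixed
# number of low-frequency modes by learned complex weights, inverse FFT.
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, channels, n_grid); any n_grid with n_grid // 2 + 1 >= modes
        u_hat = torch.fft.rfft(u, dim=-1)
        out_hat = torch.zeros_like(u_hat)
        out_hat[..., : self.modes] = torch.einsum(
            "bim,iom->bom", u_hat[..., : self.modes], self.weight
        )
        # invert back to the input grid, whatever its resolution
        return torch.fft.irfft(out_hat, n=u.size(-1), dim=-1)
```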
arXiv Detail & Related papers (2023-04-02T10:23:36Z)
- Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
arXiv Detail & Related papers (2021-11-27T03:34:13Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
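Schematically, the hybrid training signal is a weighted sum of a data-fit loss and a PDE-residual loss, where the residual can be evaluated at a different (e.g., finer) resolution than the data. In the sketch below, `pde_residual` is a hypothetical placeholder for whatever equation residual the PDE family requires.

```python
# Sketch of a PINO-style hybrid loss: data term plus physics term.
import torch

def hybrid_loss(model, a, u_data, pde_residual, weight=1.0):
    # a: batch of input functions; u_data: observed solutions (may be coarse)
    u_pred = model(a)
    data_loss = torch.mean((u_pred - u_data) ** 2)
    # pde_residual(model, a) returns the equation residual, e.g. computed
    # spectrally or by finite differences on a finer evaluation grid
    phys_loss = torch.mean(pde_residual(model, a) ** 2)
    return data_loss + weight * phys_loss
```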
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite-dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
arXiv Detail & Related papers (2021-08-19T03:56:49Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
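A minimal sketch of such a min-max game follows, trained by simultaneous gradient descent/ascent. The moment objective E[g(z)(y - f(x))] - 0.5 E[g(z)^2] and the network shapes are illustrative assumptions, not the paper's exact formulation.

```python
# Min-max estimation sketch: f minimizes, adversary g maximizes the objective.
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
g = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))
opt_f = torch.optim.SGD(f.parameters(), lr=1e-3)
opt_g = torch.optim.SGD(g.parameters(), lr=1e-3)

def step(x, y, z):
    # g witnesses the moment violation; the quadratic term keeps g bounded
    obj = torch.mean(g(z) * (y - f(x))) - 0.5 * torch.mean(g(z) ** 2)
    opt_f.zero_grad()
    opt_g.zero_grad()
    obj.backward()
    opt_f.step()                      # f descends (minimizes) the objective
    for p in g.parameters():          # g ascends: flip its gradients
        p.grad = -p.grad
    opt_g.step()
```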
arXiv Detail & Related papers (2020-07-02T17:55:47Z)