Operator Learning: Algorithms and Analysis
- URL: http://arxiv.org/abs/2402.15715v1
- Date: Sat, 24 Feb 2024 04:40:27 GMT
- Title: Operator Learning: Algorithms and Analysis
- Authors: Nikola B. Kovachki and Samuel Lanthaler and Andrew M. Stuart
- Abstract summary: Operator learning refers to the application of ideas from machine learning to approximate operators mapping between Banach spaces of functions.
This review focuses on neural operators, built on the success of deep neural networks in the approximation of functions defined on finite dimensional Euclidean spaces.
- Score: 8.305111048568737
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Operator learning refers to the application of ideas from machine learning to
approximate (typically nonlinear) operators mapping between Banach spaces of
functions. Such operators often arise from physical models expressed in terms
of partial differential equations (PDEs). In this context, such approximate
operators hold great potential as efficient surrogate models to complement
traditional numerical methods in many-query tasks. Being data-driven, they also
enable model discovery when a mathematical description in terms of a PDE is not
available. This review focuses primarily on neural operators, built on the
success of deep neural networks in the approximation of functions defined on
finite dimensional Euclidean spaces. Empirically, neural operators have shown
success in a variety of applications, but our theoretical understanding remains
incomplete. This review article summarizes recent progress and the current
state of our theoretical understanding of neural operators, focusing on an
approximation theoretic point of view.
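Since the review centers on neural operators, a concrete if schematic picture may help. The sketch below implements one Fourier-type neural-operator layer in plain NumPy, acting on a function sampled on a uniform 1-D grid; the Fourier parameterization, all names, and all shapes are illustrative assumptions, not code from the paper.
```python
import numpy as np

def fourier_layer(u, W, R, n_modes):
    """One spectral (Fourier) neural-operator layer, sketched in NumPy.

    u: (n_grid, d_channels) function values on a uniform 1-D grid
    W: (d_channels, d_channels) pointwise linear weights
    R: (n_modes, d_channels, d_channels) complex spectral weights
    """
    u_hat = np.fft.rfft(u, axis=0)                # transform to Fourier space
    out_hat = np.zeros_like(u_hat)
    # Mix channels mode-by-mode on the lowest n_modes frequencies and
    # truncate the rest: this is the nonlocal integral-kernel part.
    out_hat[:n_modes] = np.einsum("kio,ki->ko", R, u_hat[:n_modes])
    conv = np.fft.irfft(out_hat, n=u.shape[0], axis=0)
    # Pointwise linear term plus nonlinearity, as in u -> sigma(W u + K(u)).
    return np.maximum(u @ W + conv, 0.0)          # ReLU

# Illustrative sizes: 128 grid points, 4 channels, 16 retained modes.
rng = np.random.default_rng(0)
u = rng.standard_normal((128, 4))
W = rng.standard_normal((4, 4)) / 2
R = (rng.standard_normal((16, 4, 4)) + 1j * rng.standard_normal((16, 4, 4))) / 8
v = fourier_layer(u, W, R, n_modes=16)
```
Stacking such layers between a lifting and a projection map yields an FNO-style architecture; because the learned spectral weights live on a fixed set of modes rather than on a particular grid, the same weights can, in principle, be reused across resolutions.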
Related papers
- A Mathematical Analysis of Neural Operator Behaviors [0.0]
This paper presents a rigorous framework for analyzing the behaviors of neural operators.
We focus on their stability, convergence, clustering dynamics, universality, and generalization error.
We aim to offer clear and unified guidance in a single setting for the future design of neural operator-based methods.
arXiv Detail & Related papers (2024-10-28T19:38:53Z)
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to a 48% performance gain on the PDE datasets (a sketch of the product-layer idea follows this entry).
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
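The abstract does not spell out the ProdLayer, so the following is only a hypothetical reading in NumPy: a pointwise layer augmented with pairwise channel products, echoing how physical quantities combine multiplicatively in dimensional analysis. All names and shapes are invented for illustration.
```python
import numpy as np

def prod_layer(u, W_lin, W_prod):
    """Hypothetical product layer: a pointwise linear map augmented with
    pairwise products of channels, mirroring how physical quantities
    combine multiplicatively under dimensional analysis. This is a guess
    at the idea, not the paper's exact ProdLayer.

    u: (n_grid, c) features; W_lin: (c, c); W_prod: (c * c, c)
    """
    pairs = np.einsum("ni,nj->nij", u, u).reshape(u.shape[0], -1)  # (n, c*c)
    return u @ W_lin + pairs @ W_prod

rng = np.random.default_rng(0)
u = rng.standard_normal((64, 4))
v = prod_layer(u, rng.standard_normal((4, 4)), rng.standard_normal((16, 4)))
```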
- DeltaPhi: Learning Physical Trajectory Residual for PDE Solving [54.13671100638092]
We propose and formulate Physical Trajectory Residual Learning (DeltaPhi).
The surrogate model for the residual operator mapping is built on existing neural operator networks.
We conclude that, compared to direct learning, physical residual learning is preferable for PDE solving (the residual decomposition is sketched after this entry).
arXiv Detail & Related papers (2024-06-14T07:45:07Z)
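One plausible reading of the residual formulation, sketched below: rather than fitting the solution operator directly, retrieve a similar stored trajectory and let the network fit only the gap. `retrieve` and `model` are hypothetical callables; only the decomposition itself is taken from the abstract.
```python
def residual_target(u_in, u_out, retrieve):
    """Training target under trajectory-residual learning: the network is
    asked to predict the gap between the true output and the output of a
    similar, already-known trajectory (retrieve is a hypothetical
    nearest-neighbour lookup into the training set)."""
    u_ref_in, u_ref_out = retrieve(u_in)
    return u_out - u_ref_out

def residual_predict(model, u_in, retrieve):
    """Inference: add the predicted residual back onto the reference output."""
    u_ref_in, u_ref_out = retrieve(u_in)
    return u_ref_out + model(u_in, u_ref_in)
```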
- Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators.
Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming.
We showcase the efficacy of our approach through applications to different types of partial differential equations (the linearization step is sketched after this entry).
arXiv Detail & Related papers (2024-06-07T16:43:54Z)
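A minimal sketch of the linearization step the abstract alludes to, under standard linearized-Laplace assumptions; `G`, `jacobian`, and all shapes are placeholders rather than the paper's API.
```python
import numpy as np

def linearized_posterior(G, jacobian, theta_map, Sigma_theta, u):
    """Linearized-Laplace sketch: with weights theta ~ N(theta_map, Sigma_theta)
    and G(u; theta) ~= G(u; theta_map) + J (theta - theta_map), the output
    function values are jointly Gaussian -- a function-valued GP.

    G(u, theta)        -> (n_grid,) output function values (hypothetical)
    jacobian(u, theta) -> (n_grid, n_params) dG/dtheta     (hypothetical)
    """
    mean = G(u, theta_map)
    J = jacobian(u, theta_map)
    cov = J @ Sigma_theta @ J.T   # covariance between output grid points
    return mean, cov
```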
- Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions (the kernel-scaling idea is sketched after this entry).
arXiv Detail & Related papers (2024-02-26T18:59:31Z)
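The differential-operator claim has a familiar finite-difference analogue, sketched below: a fixed 3-point convolution stencil whose values are scaled by 1/h behaves like d/dx as the grid spacing h shrinks. This only illustrates the scaling idea; the paper's construction and proof are more general.
```python
import numpy as np

def conv_as_derivative(u, h):
    # 3-point stencil scaled by 1/h: a correlation with [-1, 0, 1] / (2h),
    # i.e. the centered finite difference (u[i+1] - u[i-1]) / (2h).
    stencil = np.array([-1.0, 0.0, 1.0]) / (2.0 * h)
    return np.convolve(u, stencil[::-1], mode="same")  # flip: convolve == correlate

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
h = x[1] - x[0]
du = conv_as_derivative(np.sin(x), h)  # ~ cos(x) away from the boundary
```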
- Neural Operators for Accelerating Scientific Simulations and Design [85.89660065887956]
Neural Operators present a principled framework for learning mappings between functions defined on continuous domains.
Neural Operators can augment or even replace existing simulators in many applications, such as computational fluid dynamics, weather forecasting, and material modeling.
arXiv Detail & Related papers (2023-09-27T00:12:07Z)
- Representation Equivalent Neural Operators: a Framework for Alias-free Operator Learning [11.11883703395469]
This research offers a fresh take on neural operators through the framework of Representation equivalent Neural Operators (ReNO).
At its core is the concept of operator aliasing, which measures inconsistency between neural operators and their discrete representations.
Our findings detail how aliasing introduces errors when handling different discretizations and grids, and how it leads to the loss of crucial continuous structures (a crude numerical probe of this gap is sketched after this entry).
arXiv Detail & Related papers (2023-05-31T14:45:34Z)
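The aliasing concept can be probed numerically, as in the rough sketch below: apply the same discrete operator at two resolutions and compare on the common grid. A representation-equivalent operator would reduce this gap to quadrature error; `apply_op` and `u_fn` are illustrative placeholders, not the paper's formalism.
```python
import numpy as np

def aliasing_gap(apply_op, u_fn, n_coarse=64, n_fine=256):
    """Evaluate a discrete operator at two resolutions of the same underlying
    function and measure the mismatch on the coarse grid (n_fine must be a
    multiple of n_coarse). A nonzero gap signals discretization inconsistency."""
    x_c = np.linspace(0.0, 1.0, n_coarse, endpoint=False)
    x_f = np.linspace(0.0, 1.0, n_fine, endpoint=False)
    out_c = apply_op(u_fn(x_c), x_c)
    out_f = apply_op(u_fn(x_f), x_f)
    return np.max(np.abs(out_f[:: n_fine // n_coarse] - out_c))
```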
- Approximate Bayesian Neural Operators: Uncertainty Quantification for Parametric PDEs [34.179984253109346]
We provide a mathematically detailed Bayesian formulation of the "shallow" (linear) version of neural operators.
We then extend this analytic treatment to general deep neural operators using approximate methods from Bayesian deep learning.
As a result, our approach can identify cases where the neural operator fails to predict well, and it provides structured uncertainty estimates in those cases (the shallow, linear case is sketched after this entry).
arXiv Detail & Related papers (2022-08-02T16:10:27Z)
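For the "shallow" (linear-in-weights) case, the Bayesian treatment reduces to conjugate Gaussian linear regression over basis coefficients, as sketched below; the basis matrix `Phi` and the hyperparameters are illustrative assumptions.
```python
import numpy as np

def shallow_bayesian_fit(Phi, y, noise_var=1e-2, prior_var=1.0):
    """Conjugate Gaussian regression for a linear-in-weights operator model
    y = Phi @ w + noise, w ~ N(0, prior_var * I): the posterior over w
    (and hence over the learned operator) is Gaussian in closed form.

    Phi: (n_obs, n_basis) evaluations of fixed basis functionals; y: (n_obs,)
    """
    A = Phi.T @ Phi / noise_var + np.eye(Phi.shape[1]) / prior_var
    Sigma_post = np.linalg.inv(A)                   # posterior covariance of w
    mu_post = Sigma_post @ (Phi.T @ y) / noise_var  # posterior mean of w
    return mu_post, Sigma_post
```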
- Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks, termed neural operators, that learn operators mapping between infinite-dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application of neural operators is learning surrogate maps for the solution operators of partial differential equations (the kernel integral layer at their core is sketched after this entry).
arXiv Detail & Related papers (2021-08-19T03:56:49Z)
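At the core of this construction is the kernel integral operator, sketched by simple quadrature below; `kappa` stands in for the learned kernel and is any callable on coordinate pairs here.
```python
import numpy as np

def kernel_integral_layer(u, x, kappa):
    """Quadrature sketch of (K u)(x) = integral of kappa(x, y) u(y) dy over
    [0, 1]: the integral becomes an average of kernel-weighted values on the
    grid. In a neural operator, kappa would itself be a small learned network."""
    K = kappa(x[:, None], x[None, :])  # (n, n) kernel matrix on grid pairs
    return (K @ u) / x.shape[0]        # mean over the y-grid approximates the integral

x = np.linspace(0.0, 1.0, 100, endpoint=False)
u = np.sin(2.0 * np.pi * x)
v = kernel_integral_layer(u, x, lambda a, b: np.exp(-np.abs(a - b)))
```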
- Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
This work generalizes neural networks so that they can learn mappings between infinite-dimensional spaces (operators).
We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators.
Experiments confirm that the proposed graph kernel network has the desired properties and shows competitive performance compared to state-of-the-art solvers (a message-passing sketch follows this entry).
arXiv Detail & Related papers (2020-03-07T01:56:20Z)
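As a rough picture of the graph kernel network, the sketch below performs one round of kernel-weighted message passing over neighbours within a fixed radius; the matrix-valued `kappa` would be a small learned network in practice, and all names here are illustrative.
```python
import numpy as np

def graph_kernel_layer(v, x, kappa, W, radius=0.1):
    """One message-passing layer approximating a local kernel integral:
    each node averages kernel-weighted features of its neighbours inside
    a ball of the given radius (the graph edges), then adds a pointwise
    linear term and a nonlinearity.

    v: (n, c) node features; x: (n,) node coordinates;
    kappa(xi, xj) -> (c, c) matrix (a learned network in practice).
    """
    out = np.zeros_like(v)
    for i in range(x.shape[0]):
        nbrs = np.nonzero(np.abs(x - x[i]) <= radius)[0]
        msgs = np.stack([kappa(x[i], x[j]) @ v[j] for j in nbrs])
        out[i] = msgs.mean(axis=0)
    return np.maximum(v @ W + out, 0.0)  # ReLU
```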
This list is automatically generated from the titles and abstracts of the papers on this site.