Representation Equivalent Neural Operators: a Framework for Alias-free
Operator Learning
- URL: http://arxiv.org/abs/2305.19913v2
- Date: Thu, 2 Nov 2023 14:32:26 GMT
- Title: Representation Equivalent Neural Operators: a Framework for Alias-free
Operator Learning
- Authors: Francesca Bartolucci, Emmanuel de Bézenac, Bogdan Raonić,
  Roberto Molinaro, Siddhartha Mishra, and Rima Alaifari
- Abstract summary: This research offers a fresh take on neural operators with a framework, Representation equivalent Neural Operators (ReNO).
At its core is the concept of operator aliasing, which measures the inconsistency between neural operators and their discrete representations.
Our findings detail how aliasing introduces errors when handling different discretizations and grids, as well as the loss of crucial continuous structures.
- Score: 11.11883703395469
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, operator learning, or learning mappings between
infinite-dimensional function spaces, has garnered significant attention,
notably in relation to learning partial differential equations from data.
Conceptually clear when outlined on paper, neural operators necessitate
discretization in the transition to computer implementations. This step can
compromise their integrity, often causing them to deviate from the underlying
operators. This research offers a fresh take on neural operators with a
framework, Representation equivalent Neural Operators (ReNO), designed to
address these issues. At its core is the concept of operator aliasing, which
measures the inconsistency between neural operators and their discrete
representations. We explore this for widely-used operator learning techniques.
Our findings detail how aliasing introduces errors when handling different
discretizations and grids, as well as the loss of crucial continuous
structures. More generally, this framework
not only sheds light on existing challenges but, given its constructive and
broad nature, also potentially offers tools for developing new neural
operators.
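To make the notion of operator aliasing concrete, here is a minimal sketch in NumPy (the names and the choice of differentiation as the underlying operator are illustrative assumptions, not the authors' code). It measures how far "apply the operator, then discretize" deviates from "discretize, then apply the discrete map", which is exactly the inconsistency ReNOs are designed to rule out.

```python
# Illustrative sketch (not the authors' code): quantify "operator aliasing"
# as the failure of discretization and operator application to commute.
# Continuous operator G: u -> u' (differentiation); discrete representation:
# a periodic central finite-difference stencil.
import numpy as np

def sample(f, n):
    # Discretization map: point samples on a uniform grid over [0, 1).
    x = np.arange(n) / n
    return f(x)

def discrete_op(u):
    # Discrete stand-in for d/dx: central differences, periodic boundary.
    n = len(u)
    return (np.roll(u, -1) - np.roll(u, 1)) * n / 2.0

u  = lambda x: np.sin(2 * np.pi * 3 * x)              # input function
du = lambda x: 6 * np.pi * np.cos(2 * np.pi * 3 * x)  # exact G(u)

for n in (8, 32, 128):
    # Aliasing error: || sample(G(u)) - discrete_op(sample(u)) ||_inf
    err = np.max(np.abs(sample(du, n) - discrete_op(sample(u, n))))
    print(f"n = {n:4d}  aliasing error = {err:.3e}")
```

Here the error only decays with the grid; a spectral (FFT-based) derivative on the same grids would drive it to machine precision for such bandlimited inputs, making the continuous/discrete pair consistent in the paper's sense.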
Related papers
- Operator Learning of Lipschitz Operators: An Information-Theoretic Perspective [2.375038919274297]
This work addresses the complexity of neural operator approximations for the general class of Lipschitz continuous operators.
Our main contribution establishes lower bounds on the metric entropy of Lipschitz operators in two approximation settings.
It is shown that, regardless of the activation function used, neural operator architectures attaining an approximation accuracy $\epsilon$ must have a size that is exponentially large in $\epsilon^{-1}$.
arXiv Detail & Related papers (2024-06-26T23:36:46Z)
- Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
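As a toy illustration of the differential-kernel claim above (a sketch under our own simplifications, not the paper's construction): a fixed three-tap convolution whose values are scaled by the reciprocal grid spacing behaves like d/dx, with error vanishing as the grid is refined.

```python
# Sketch (illustrative, not the paper's implementation): a 1D convolution
# with kernel values scaled by 1/(2h) acts as a first-derivative operator,
# with error shrinking as the grid spacing h -> 0.
import numpy as np

def conv_derivative(u, h):
    # CNN-style local kernel [-1, 0, 1], scaled by 1/(2h) -> d/dx.
    kernel = np.array([-1.0, 0.0, 1.0]) / (2.0 * h)
    # Periodic padding; np.convolve flips its second argument, so pass the
    # reversed kernel to get a cross-correlation with the stencil.
    padded = np.concatenate([u[-1:], u, u[:1]])
    return np.convolve(padded, kernel[::-1], mode="valid")

for n in (32, 128, 512):
    h = 1.0 / n
    x = np.arange(n) * h
    u = np.sin(2 * np.pi * x)
    exact = 2 * np.pi * np.cos(2 * np.pi * x)
    err = np.max(np.abs(conv_derivative(u, h) - exact))
    print(f"h = {h:.5f}  max error = {err:.2e}")  # O(h^2) decay
```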
arXiv Detail & Related papers (2024-02-26T18:59:31Z)
- Operator Learning: Algorithms and Analysis [8.305111048568737]
Operator learning refers to the application of ideas from machine learning to approximate operators mapping between Banach spaces of functions.
This review focuses on neural operators, built on the success of deep neural networks in the approximation of functions defined on finite dimensional Euclidean spaces.
arXiv Detail & Related papers (2024-02-24T04:40:27Z)
- The Parametric Complexity of Operator Learning [6.800286371280922]
This paper aims to prove that for general classes of operators which are characterized only by their $C^r$- or Lipschitz-regularity, operator learning suffers from a "curse of parametric complexity".
The second contribution of the paper is to prove that this general curse can be overcome for solution operators defined by the Hamilton-Jacobi equation.
A novel neural operator architecture is introduced, termed HJ-Net, which explicitly takes into account characteristic information of the underlying Hamiltonian system.
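For intuition, the sketch below hand-codes the characteristic ODEs that HJ-Net is organized around, for the concrete choice H(x, p) = p^2/2 (our assumption for illustration; HJ-Net learns the corresponding flow map from data rather than integrating it numerically).

```python
# Sketch of the characteristic structure that HJ-Net is built around.
# Along characteristics of u_t + H(x, u_x) = 0:
#   dx/dt = dH/dp,   dp/dt = -dH/dx,   dz/dt = p * dH/dp - H,
# where z tracks the solution value u along the curve.
import numpy as np

def flow(x, p, z, t, steps=1000):
    # Explicit Euler integration of the characteristic ODEs for H = p^2/2.
    dt = t / steps
    for _ in range(steps):
        dHdp, dHdx = p, 0.0          # gradients of H(x, p) = p^2 / 2
        x = x + dt * dHdp
        p = p - dt * dHdx
        z = z + dt * (p * dHdp - 0.5 * p**2)
    return x, p, z

# Transport the initial datum u0(x) = sin(x) along characteristics.
x0 = np.linspace(0.0, 2 * np.pi, 9)
p0 = np.cos(x0)                       # p = u0'(x)
z0 = np.sin(x0)                       # z = u0(x)
xt, pt, zt = flow(x0, p0, z0, t=0.5)
print(np.round(xt, 3), np.round(zt, 3))
```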
arXiv Detail & Related papers (2023-06-28T05:02:03Z)
- In-Context Operator Learning with Data Prompts for Differential Equation Problems [12.61842281581773]
This paper introduces a new neural-network-based approach, namely In-Context Operator Networks (ICON).
ICON simultaneously learns operators from the prompted data and applies them to new questions during the inference stage, without any weight updates.
Our numerical results show the neural network's capability as a few-shot operator learner for a diverse range of differential equation problems.
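The in-context setup can be sketched schematically as follows (the names and the toy operator family are ours, not ICON's actual interface): a prompt packs a few (condition, quantity-of-interest) demonstration pairs for one operator together with a query condition, and a trained model predicts the query's quantity of interest with no weight update at inference time.

```python
# Schematic data-prompt assembly, not ICON's real pipeline.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 32)

def make_pair(a):
    # Hypothetical operator family: condition u(x) = a*x, QoI = du/dx = a.
    cond = a * x
    qoi = np.full_like(x, a)
    return cond, qoi

demos = [make_pair(a) for a in rng.uniform(-2, 2, size=4)]
query_cond, query_qoi = make_pair(1.5)

# Flatten demos + query condition into one prompt sequence (schematic);
# the trained model would predict query_qoi from this prompt alone.
prompt = np.concatenate([np.concatenate([c, q]) for c, q in demos]
                        + [query_cond])
print(prompt.shape)
```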
arXiv Detail & Related papers (2023-04-17T05:22:26Z)
- Invariant Causal Mechanisms through Distribution Matching [86.07327840293894]
In this work we provide a causal perspective and a new algorithm for learning invariant representations.
Empirically we show that this algorithm works well on a diverse set of tasks and in particular we observe state-of-the-art performance on domain generalization.
arXiv Detail & Related papers (2022-06-23T12:06:54Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer using fewer parameters, while being transferable to a new task in a sample-efficient manner.
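A toy version of the routing idea (our schematic, not the paper's architecture) is shown below: a learned scoring function softly assigns an input to a set of function modules, so the composition remains differentiable end-to-end.

```python
# Soft routing of one input through learned "function" modules (schematic).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
d, n_modules = 8, 3
x = rng.standard_normal(d)                      # one input token
W_route = rng.standard_normal((n_modules, d))   # routing parameters
W_mod = rng.standard_normal((n_modules, d, d))  # one linear map per module

scores = softmax(W_route @ x)                   # soft module assignment
y = sum(s * (W @ x) for s, W in zip(scores, W_mod))
print(np.round(scores, 3), np.round(y, 3))      # routing weights, output
```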
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite-dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
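A single layer of such a neural operator can be sketched as follows (our simplification of the paper's formulation; the kernel here is fixed for brevity where the paper would parameterize and learn it). Because the kernel is a function of coordinates rather than a fixed-size weight matrix, the same layer can be evaluated on any discretization.

```python
# Minimal sketch of one neural-operator layer:
#   v_out(x) = tanh( w * v(x) + (K v)(x) ),
# with the integral (K v)(x) = \int k(x, y) v(y) dy approximated by a
# Riemann sum on the given grid.
import numpy as np

def kernel(x, y):
    # Hypothetical smooth kernel; in practice k would be parameterized
    # by a small neural network and learned from data.
    return np.exp(-(x[:, None] - y[None, :]) ** 2 / 0.1)

def neural_op_layer(v, grid, w=0.5):
    K = kernel(grid, grid)              # (n, n) kernel matrix
    integral = K @ v / len(grid)        # quadrature weight ~ 1/n on [0, 1]
    return np.tanh(w * v + integral)

for n in (64, 256):                     # same layer, two discretizations
    grid = np.linspace(0.0, 1.0, n)
    v = np.sin(2 * np.pi * grid)
    out = neural_op_layer(v, grid)
    print(n, float(out[n // 4]))        # values approximately agree
```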
arXiv Detail & Related papers (2021-08-19T03:56:49Z)
- Self-Organized Operational Neural Networks with Generative Neurons [87.32169414230822]
Operational Neural Networks (ONNs) are heterogeneous networks with a generalized neuron model that can encapsulate any set of non-linear operators.
We propose Self-organized ONNs (Self-ONNs) with generative neurons that have the ability to adapt (optimize) the nodal operator of each connection.
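A sketch of the generative-neuron idea (our reading, with hypothetical names): each connection replaces the fixed nodal operator of a classical neuron with a learnable truncated MacLaurin series, so the non-linearity itself is optimized during training.

```python
# Toy "generative neuron" nodal operator (illustrative, not the authors'
# code): psi(x) = sum_k w[k] * x**(k+1), a learnable truncated series.
import numpy as np

def generative_nodal_op(x, w):
    # w[k] weights the (k+1)-th power term of the series.
    return sum(wk * x ** (k + 1) for k, wk in enumerate(w))

rng = np.random.default_rng(0)
Q = 3                                   # truncation order of the series
x = rng.standard_normal(5)              # inputs along one connection
w = rng.standard_normal(Q) * 0.1        # learnable series coefficients

y = generative_nodal_op(x, w)           # connection-wise transformation;
print(np.round(y, 4))                   # a pooled sum over connections
                                        # would then feed the activation
```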
arXiv Detail & Related papers (2020-04-24T14:37:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.