A Resolution Independent Neural Operator
- URL: http://arxiv.org/abs/2407.13010v2
- Date: Mon, 23 Sep 2024 03:16:26 GMT
- Title: A Resolution Independent Neural Operator
- Authors: Bahador Bahmani, Somdatta Goswami, Ioannis G. Kevrekidis, Michael D. Shields
- Abstract summary: We introduce RINO, which provides a framework to make DeepONet resolution-independent.
RINO allows DeepONet to handle input functions that are arbitrarily, but sufficiently finely, discretized.
We demonstrate the robustness and applicability of RINO in handling arbitrarily (but sufficiently richly) sampled input and output functions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Deep operator network (DeepONet) is a powerful yet simple neural operator architecture that utilizes two deep neural networks to learn mappings between infinite-dimensional function spaces. This architecture is highly flexible, allowing the evaluation of the solution field at any location within the desired domain. However, it imposes a strict constraint on the input space, requiring all input functions to be discretized at the same locations; this limits its practical applications. In this work, we introduce RINO, which provides a framework to make DeepONet resolution-independent, enabling it to handle input functions that are arbitrarily, but sufficiently finely, discretized. To this end, we propose two dictionary learning algorithms to adaptively learn a set of appropriate continuous basis functions, parameterized as implicit neural representations (INRs), from correlated signals defined on arbitrary point cloud data. These basis functions are then used to project arbitrary input function data as a point cloud onto an embedding space (i.e., a vector space of finite dimensions) with dimensionality equal to the dictionary size, which DeepONet can directly use without any architectural changes. In particular, we utilize sinusoidal representation networks (SIRENs) as trainable INR basis functions. The introduced dictionary learning algorithms can be used in a similar way to learn an appropriate dictionary of basis functions for the output function data. This approach can be seen as an extension of POD DeepONet for cases where the realizations of the output functions have different discretizations, making the Proper Orthogonal Decomposition (POD) approach inapplicable. We demonstrate the robustness and applicability of RINO in handling arbitrarily (but sufficiently richly) sampled input and output functions during both training and inference through several numerical examples.
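To make the mechanism concrete, here is a minimal PyTorch sketch (not the authors' code) of the projection step the abstract describes: a small dictionary of trainable SIREN basis functions is evaluated at whatever sample locations an input function happens to have, and a least-squares projection yields a coefficient vector whose length equals the dictionary size. All names (`Siren`, `project_onto_dictionary`) and the dictionary size are illustrative assumptions; the paper's dictionary-learning algorithms for fitting the basis functions are not shown.

```python
import torch

class Siren(torch.nn.Module):
    """One trainable sinusoidal implicit neural representation (INR) basis function."""
    def __init__(self, in_dim=1, hidden=32, omega0=30.0):
        super().__init__()
        self.l1 = torch.nn.Linear(in_dim, hidden)
        self.l2 = torch.nn.Linear(hidden, hidden)
        self.l3 = torch.nn.Linear(hidden, 1)
        self.omega0 = omega0

    def forward(self, x):                      # x: (n_points, in_dim)
        h = torch.sin(self.omega0 * self.l1(x))
        h = torch.sin(self.omega0 * self.l2(h))
        return self.l3(h)                      # (n_points, 1)

def project_onto_dictionary(xs, us, dictionary):
    """Project one arbitrarily discretized function {(x_i, u(x_i))} onto the
    learned basis functions by least squares; the coefficient vector has a
    fixed length equal to the dictionary size, independent of n_points."""
    phi = torch.cat([basis(xs) for basis in dictionary], dim=1)   # (n_points, n_basis)
    coeffs = torch.linalg.lstsq(phi, us).solution                 # (n_basis, 1)
    return coeffs.squeeze(-1)                                     # (n_basis,)

# Hypothetical usage: two realizations sampled at different resolutions
# map to embeddings of identical size.
dictionary = [Siren() for _ in range(8)]
x_coarse, x_fine = torch.rand(37, 1), torch.rand(512, 1)
z1 = project_onto_dictionary(x_coarse, torch.sin(4 * x_coarse), dictionary)  # shape (8,)
z2 = project_onto_dictionary(x_fine, torch.sin(4 * x_fine), dictionary)      # shape (8,)
```

Because both coefficient vectors have the dictionary's dimensionality regardless of how many sample points each realization carries, the downstream DeepONet branch network needs no architectural changes, which is the resolution independence the abstract claims.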
Related papers
- A Library for Learning Neural Operators [77.16483961863808]
We present NeuralOperator, an open-source Python library for operator learning.
Neural operators generalize neural networks to maps between function spaces instead of finite-dimensional Euclidean spaces.
Built on top of PyTorch, NeuralOperator provides all the tools for training and deploying neural operator models.
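As a rough illustration of the workflow the library supports, a minimal sketch assuming the neuralop package's FNO interface (argument names may differ between versions):

```python
import torch
from neuralop.models import FNO  # assumes the neuralop package is installed

# Instantiate a 2D Fourier Neural Operator; the argument names below follow
# the neuralop documentation as recalled here and are an assumption.
model = FNO(n_modes=(16, 16), hidden_channels=64, in_channels=1, out_channels=1)

x = torch.randn(8, 1, 64, 64)   # a batch of input functions sampled on a 64x64 grid
y = model(x)                     # predicted output functions on the same grid
print(y.shape)                   # expected: torch.Size([8, 1, 64, 64])
```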
arXiv Detail & Related papers (2024-12-13T18:49:37Z)
- Learning Partial Differential Equations with Deep Parallel Neural Operator [11.121415128908566]
A novel methodology is to learn an operator that approximates the mapping between input and output function spaces.
In practical physical science problems, the numerical solutions of partial differential equations are complex.
We propose a deep parallel operator model (DPNO) for efficiently and accurately solving partial differential equations.
arXiv Detail & Related papers (2024-09-30T06:04:04Z)
- Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
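The scaling idea can be illustrated with a toy example (a generic finite-difference stencil, not the paper's construction): dividing a fixed CNN kernel by the grid spacing turns it into an approximation of a differential operator that improves as the grid is refined.

```python
import torch
import torch.nn.functional as F

def conv_derivative(u, h):
    # u: (batch, 1, n) samples on a uniform grid with spacing h; the
    # resolution-dependent scaling 1/(2h) turns the fixed [-1, 0, 1]
    # stencil into a central-difference approximation of d/dx.
    kernel = torch.tensor([[[-1.0, 0.0, 1.0]]]) / (2.0 * h)
    return F.conv1d(u, kernel)

for n in (64, 256, 1024):
    x = torch.linspace(0, 2 * torch.pi, n)
    h = float(x[1] - x[0])
    u = torch.sin(x).view(1, 1, -1)
    du = conv_derivative(u, h)                            # approximates cos on interior points
    err = (du - torch.cos(x[1:-1]).view(1, 1, -1)).abs().max()
    print(n, float(err))                                  # error shrinks as the grid is refined
```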
arXiv Detail & Related papers (2024-02-26T18:59:31Z)
- D2NO: Efficient Handling of Heterogeneous Input Function Spaces with Distributed Deep Neural Operators [7.119066725173193]
We propose a novel distributed approach to deal with input functions that exhibit heterogeneous properties.
A central neural network is used to handle shared information across all output functions.
We demonstrate that the corresponding neural network is a universal approximator of continuous nonlinear operators.
arXiv Detail & Related papers (2023-10-29T03:29:59Z)
- Provable Data Subset Selection For Efficient Neural Network Training [73.34254513162898]
We introduce the first algorithm to construct coresets for RBFNNs, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network.
We then perform empirical evaluations on function approximation and dataset subset selection on popular network architectures and data sets.
arXiv Detail & Related papers (2023-03-09T10:08:34Z)
- Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which greatly increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer but more informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
arXiv Detail & Related papers (2023-01-21T04:08:46Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Enhanced DeepONet for Modeling Partial Differential Operators Considering Multiple Input Functions [5.819397109258169]
A deep operator network (DeepONet) was proposed to model general non-linear continuous operators for partial differential equations (PDEs).
The existing DeepONet can accept only one input function, which limits its applications.
We propose a new Enhanced DeepONet (EDeepONet) high-level neural network structure, in which two input functions are represented by two branch sub-networks.
Our numerical results on modeling two partial differential equation examples show that the proposed EDeepONet is about 7X-17X, or about one order of magnitude, more accurate than the fully connected neural network.
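A minimal PyTorch sketch (not the authors' implementation) of the two-branch idea described above; merging the two branch embeddings with an element-wise product before the usual inner product with the trunk output is an assumption made for illustration.

```python
import torch

def mlp(sizes):
    layers = []
    for a, b in zip(sizes[:-1], sizes[1:]):
        layers += [torch.nn.Linear(a, b), torch.nn.Tanh()]
    return torch.nn.Sequential(*layers[:-1])   # drop the final activation

class TwoBranchDeepONet(torch.nn.Module):
    def __init__(self, m=100, p=64):
        super().__init__()
        self.branch1 = mlp([m, 128, p])   # sensor values of input function 1
        self.branch2 = mlp([m, 128, p])   # sensor values of input function 2
        self.trunk   = mlp([1, 128, p])   # query location y

    def forward(self, u1, u2, y):
        b = self.branch1(u1) * self.branch2(u2)     # merge the two branch embeddings
        t = self.trunk(y)
        return (b * t).sum(dim=-1, keepdim=True)    # G(u1, u2)(y)

model = TwoBranchDeepONet()
u1, u2 = torch.randn(16, 100), torch.randn(16, 100)   # two discretized input functions
y = torch.rand(16, 1)                                  # evaluation points
print(model(u1, u2, y).shape)                          # torch.Size([16, 1])
```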
arXiv Detail & Related papers (2022-02-17T23:58:23Z)
- Deep Parametric Continuous Convolutional Neural Networks [92.87547731907176]
Parametric Continuous Convolution is a new learnable operator that operates over non-grid structured data.
Our experiments show significant improvement over the state-of-the-art in point cloud segmentation of indoor and outdoor scenes.
arXiv Detail & Related papers (2021-01-17T18:28:23Z)
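A toy sketch of the core mechanism (not the authors' implementation): the convolution kernel is produced by an MLP evaluated on continuous coordinate offsets, so the same learnable operator applies to arbitrary point clouds. The k-nearest-neighbor construction and all names here are illustrative assumptions.

```python
import torch

class ContinuousConv(torch.nn.Module):
    def __init__(self, in_feats, out_feats, coord_dim=3, hidden=32):
        super().__init__()
        # MLP mapping a coordinate offset to an (out_feats x in_feats) kernel matrix
        self.kernel_net = torch.nn.Sequential(
            torch.nn.Linear(coord_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, out_feats * in_feats))
        self.in_feats, self.out_feats = in_feats, out_feats

    def forward(self, coords, feats, neighbors):
        # coords: (n, coord_dim), feats: (n, in_feats), neighbors: (n, k) indices
        offsets = coords[neighbors] - coords[:, None, :]                    # (n, k, coord_dim)
        W = self.kernel_net(offsets).view(*neighbors.shape, self.out_feats, self.in_feats)
        contrib = torch.einsum('nkoi,nki->nko', W, feats[neighbors])        # per-neighbor messages
        return contrib.mean(dim=1)                                          # (n, out_feats)

# Hypothetical usage on a random point cloud with k nearest neighbors.
n, k = 256, 8
coords, feats = torch.rand(n, 3), torch.rand(n, 4)
neighbors = torch.cdist(coords, coords).topk(k, largest=False).indices      # (n, k)
layer = ContinuousConv(in_feats=4, out_feats=16)
print(layer(coords, feats, neighbors).shape)                                 # torch.Size([256, 16])
```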