A Resolution Independent Neural Operator
- URL: http://arxiv.org/abs/2407.13010v3
- Date: Tue, 10 Dec 2024 05:59:06 GMT
- Title: A Resolution Independent Neural Operator
- Authors: Bahador Bahmani, Somdatta Goswami, Ioannis G. Kevrekidis, Michael D. Shields
- Abstract summary: We introduce a general framework for operator learning from input-output data with arbitrary sensor locations and counts.
We propose two dictionary learning algorithms that adaptively learn continuous basis functions, parameterized as implicit neural representations.
These basis functions project input function data onto a finite-dimensional embedding space, making it compatible with DeepONet without architectural changes.
- Abstract: The Deep Operator Network (DeepONet) is a powerful neural operator architecture that uses two neural networks to map between infinite-dimensional function spaces. This architecture allows for the evaluation of the solution field at any location within the domain but requires input functions to be discretized at identical locations, limiting practical applications. We introduce a general framework for operator learning from input-output data with arbitrary sensor locations and counts. This begins by introducing a resolution-independent DeepONet (RI-DeepONet), which handles input functions discretized arbitrarily but sufficiently finely. To achieve this, we propose two dictionary learning algorithms that adaptively learn continuous basis functions, parameterized as implicit neural representations (INRs), from correlated signals on arbitrary point clouds. These basis functions project input function data onto a finite-dimensional embedding space, making it compatible with DeepONet without architectural changes. We specifically use sinusoidal representation networks (SIRENs) as trainable INR basis functions. Similarly, the dictionary learning algorithms identify basis functions for output data, defining a new neural operator architecture: the Resolution Independent Neural Operator (RINO). In RINO, the operator learning task reduces to mapping coefficients of input basis functions to output basis functions. We demonstrate RINO's robustness and applicability in handling arbitrarily sampled input and output functions during both training and inference through several numerical examples.
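The abstract's central idea, projecting a function sampled at arbitrary sensor locations onto a fixed set of continuous basis functions to obtain a fixed-size embedding, can be sketched as follows. This is a minimal illustration, not the paper's implementation: a fixed sinusoidal dictionary stands in for the trained SIREN basis, and the function and helper names (`basis`, `embed`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def basis(x, n_basis=8):
    """Evaluate n_basis sinusoidal 'dictionary' functions at points x.

    A stand-in for a trained SIREN basis; returns shape (len(x), n_basis)."""
    freqs = np.arange(1, n_basis + 1)          # assumed fixed frequencies
    return np.sin(np.pi * np.outer(x, freqs))

def embed(x_sensors, u_values, n_basis=8):
    """Least-squares coefficients of u on the basis: a fixed-size,
    resolution-independent embedding of the sampled function."""
    Phi = basis(x_sensors, n_basis)
    coeffs, *_ = np.linalg.lstsq(Phi, u_values, rcond=None)
    return coeffs

# The same function sampled on two different, irregular sensor sets
def u(x):
    return np.sin(2 * np.pi * x) + 0.5 * np.sin(3 * np.pi * x)

x_coarse = np.sort(rng.uniform(0, 1, 40))
x_fine = np.sort(rng.uniform(0, 1, 200))

c1 = embed(x_coarse, u(x_coarse))
c2 = embed(x_fine, u(x_fine))

# Both samplings yield (nearly) the same embedding vector,
# which a DeepONet branch network could then consume unchanged.
print(np.allclose(c1, c2, atol=1e-2))  # prints True
```

Because the embedding depends only on the basis functions, not on where or how densely the input was sampled, the downstream operator network never sees the sensor locations, which is what makes the architecture resolution independent.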
Related papers
- A Library for Learning Neural Operators [77.16483961863808]
We present NeuralOperator, an open-source Python library for operator learning.
Neural operators generalize neural networks to maps between function spaces instead of finite-dimensional Euclidean spaces.
Built on top of PyTorch, NeuralOperator provides all the tools for training and deploying neural operator models.
arXiv Detail & Related papers (2024-12-13T18:49:37Z)
- Learning Partial Differential Equations with Deep Parallel Neural Operator [11.121415128908566]
A novel methodology is to learn an operator that approximates the mapping between inputs and outputs.
In practical physical science problems, the numerical solutions of partial differential equations are complex.
We propose a deep parallel operator model (DPNO) for efficiently and accurately solving partial differential equations.
arXiv Detail & Related papers (2024-09-30T06:04:04Z)
- Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
arXiv Detail & Related papers (2024-02-26T18:59:31Z)
- D2NO: Efficient Handling of Heterogeneous Input Function Spaces with Distributed Deep Neural Operators [7.119066725173193]
We propose a novel distributed approach to deal with input functions that exhibit heterogeneous properties.
A central neural network is used to handle shared information across all output functions.
We demonstrate that the corresponding neural network is a universal approximator of continuous nonlinear operators.
arXiv Detail & Related papers (2023-10-29T03:29:59Z)
- Provable Data Subset Selection For Efficient Neural Network Training [73.34254513162898]
We introduce the first algorithm to construct coresets for RBFNNs, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network.
We then perform empirical evaluations on function approximation and dataset subset selection on popular network architectures and data sets.
arXiv Detail & Related papers (2023-03-09T10:08:34Z)
- Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which greatly increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer but more informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
arXiv Detail & Related papers (2023-01-21T04:08:46Z)
- A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms [64.3064050603721]
We generalize the Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms.
We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields similar iterations to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta solvers for ordinary differential equations.
arXiv Detail & Related papers (2022-11-22T16:30:33Z)
- Enhanced DeepONet for Modeling Partial Differential Operators Considering Multiple Input Functions [5.819397109258169]
A deep operator network (DeepONet) was proposed to model general nonlinear continuous operators for partial differential equations (PDEs).
The existing DeepONet can accept only one input function, which limits its applications.
We propose a new Enhanced DeepONet (EDeepONet) high-level neural network structure, in which two input functions are represented by two branch sub-networks.
Our numerical results on two partial differential equation examples show that the proposed EDeepONet is about 7X-17X, or about one order of magnitude, more accurate than the fully connected neural network.
arXiv Detail & Related papers (2022-02-17T23:58:23Z)
- Deep Parametric Continuous Convolutional Neural Networks [92.87547731907176]
Parametric Continuous Convolution is a new learnable operator that operates over non-grid structured data.
Our experiments show significant improvement over the state-of-the-art in point cloud segmentation of indoor and outdoor scenes.
arXiv Detail & Related papers (2021-01-17T18:28:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.