A Resolution Independent Neural Operator
- URL: http://arxiv.org/abs/2407.13010v1
- Date: Wed, 17 Jul 2024 21:03:21 GMT
- Title: A Resolution Independent Neural Operator
- Authors: Bahador Bahmani, Somdatta Goswami, Ioannis G. Kevrekidis, Michael D. Shields
- Abstract summary: We introduce a Resolution Independent Neural Operator (RINO) that provides a framework to make DeepONet resolution-independent.
RINO allows DeepONet to handle input functions that are arbitrarily, but sufficiently finely, discretized.
We demonstrate the robustness and applicability of RINO in handling arbitrarily (but sufficiently richly) sampled input functions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Deep operator network (DeepONet) is a powerful yet simple neural operator architecture that utilizes two deep neural networks to learn mappings between infinite-dimensional function spaces. This architecture is highly flexible, allowing the evaluation of the solution field at any location within the desired domain. However, it imposes a strict constraint on the input space, requiring all input functions to be discretized at the same locations; this limits its practical applications. In this work, we introduce a Resolution Independent Neural Operator (RINO) that provides a framework to make DeepONet resolution-independent, enabling it to handle input functions that are arbitrarily, but sufficiently finely, discretized. To this end, we propose a dictionary learning algorithm to adaptively learn a set of appropriate continuous basis functions, parameterized as implicit neural representations (INRs), from the input data. These basis functions are then used to project arbitrary input function data as a point cloud onto an embedding space (i.e., a vector space of finite dimensions) with dimensionality equal to the dictionary size, which can be directly used by DeepONet without any architectural changes. In particular, we utilize sinusoidal representation networks (SIRENs) as our trainable INR basis functions. We demonstrate the robustness and applicability of RINO in handling arbitrarily (but sufficiently richly) sampled input functions during both training and inference through several numerical examples.
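A minimal sketch of the projection step described in the abstract, assuming a small SIREN dictionary and a plain least-squares fit; the names here (SirenBasis, embed) are illustrative, and the actual RINO dictionary is learned adaptively from data rather than fixed as below.

```python
import torch

class SirenBasis(torch.nn.Module):
    """One continuous basis function parameterized as a small SIREN
    (sinusoidal representation network): phi_k : R^d -> R."""
    def __init__(self, in_dim=1, hidden=32, omega0=30.0):
        super().__init__()
        self.l1 = torch.nn.Linear(in_dim, hidden)
        self.l2 = torch.nn.Linear(hidden, 1)
        self.omega0 = omega0

    def forward(self, x):                                     # x: (m, in_dim)
        return self.l2(torch.sin(self.omega0 * self.l1(x)))

def embed(xs, us, dictionary):
    """Project an arbitrarily discretized input function, given as a point
    cloud (xs, us), onto the dictionary. The coefficient vector has a fixed
    size K regardless of the number of sensors m, so it can be fed to an
    unmodified DeepONet branch network."""
    Phi = torch.cat([phi(xs) for phi in dictionary], dim=1)   # (m, K)
    coeffs = torch.linalg.lstsq(Phi, us.unsqueeze(1)).solution
    return coeffs.squeeze(1)                                  # (K,)

with torch.no_grad():
    dictionary = [SirenBasis() for _ in range(8)]             # K = 8 basis functions
    xs = torch.rand(200, 1)                                   # arbitrary sensor locations
    us = torch.sin(4 * xs).squeeze(1)                         # sampled input function
    print(embed(xs, us, dictionary).shape)                    # torch.Size([8])
```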
Related papers
- Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
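As a toy illustration of the scaling claim (our reading, not the paper's construction): a fixed convolution stencil divided by the grid spacing recovers a differential operator as the discretization is refined.

```python
import numpy as np

def conv_derivative(u, h):
    """Convolution with the centered stencil [-1, 0, 1] / (2h); as h -> 0,
    this scaled kernel acts as the differential operator d/dx."""
    kernel = np.array([-1.0, 0.0, 1.0]) / (2.0 * h)
    return np.convolve(u, kernel[::-1], mode="valid")  # reversal -> correlation

x = np.linspace(0.0, 2.0 * np.pi, 1001)
h = x[1] - x[0]
du = conv_derivative(np.sin(x), h)                 # should approximate cos(x)
print(np.max(np.abs(du - np.cos(x[1:-1]))))        # O(h^2) discretization error
```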
arXiv Detail & Related papers (2024-02-26T18:59:31Z)
- Provable Data Subset Selection For Efficient Neural Network Training [73.34254513162898]
We introduce the first algorithm to construct coresets for RBFNNs, i.e., small weighted subsets that approximate the loss of the input data on any radial basis function network.
We then perform empirical evaluations on function approximation and dataset subset selection on popular network architectures and data sets.
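A sketch of the objects involved, in our own notation: an RBF network and the weighted loss that a coreset is meant to preserve. The uniform random subset below is a naive stand-in; the paper's contribution is constructing a provably good weighted subset.

```python
import numpy as np

def rbfnn(x, centers, widths, alphas):
    """Radial basis function network f(x) = sum_j alpha_j * exp(-(x - c_j)^2 / w_j)."""
    d2 = (x[:, None] - centers[None, :]) ** 2
    return (alphas * np.exp(-d2 / widths)).sum(axis=1)

def weighted_loss(x, y, w, params):
    """Weighted squared loss; a coreset (subset + weights) should match the
    full-data loss for *any* RBFNN parameters, not just one trained model."""
    return float(np.sum(w * (rbfnn(x, *params) - y) ** 2))

rng = np.random.default_rng(0)
x, y = rng.normal(size=1000), rng.normal(size=1000)
params = (rng.normal(size=5), np.ones(5), rng.normal(size=5))
full = weighted_loss(x, y, np.ones(1000), params)
idx = rng.choice(1000, size=100, replace=False)     # naive uniform subset
approx = weighted_loss(x[idx], y[idx], np.full(100, 10.0), params)
print(full, approx)                                 # close, but without guarantees
```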
arXiv Detail & Related papers (2023-03-09T10:08:34Z)
- Convolutional Neural Operators for robust and accurate learning of PDEs [11.562748612983956]
We present novel adaptations for convolutional neural networks to process functions as inputs and outputs.
The resulting architecture is termed convolutional neural operators (CNOs).
We prove a universality theorem to show that CNOs can approximate operators arising in PDEs to desired accuracy.
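A crude sketch of the function-space viewpoint, with linear interpolation standing in for CNO's bandlimited resampling (an assumption on our part): the block accepts samples at any resolution, works at a fixed internal one, and returns samples at the input resolution.

```python
import torch
import torch.nn.functional as F

class CNOBlockSketch(torch.nn.Module):
    """Conv block that treats its input as samples of a function on [0, 1]:
    resample to a fixed working resolution, convolve, resample back."""
    def __init__(self, ch=8, work_res=64):
        super().__init__()
        self.conv = torch.nn.Conv1d(ch, ch, kernel_size=5, padding=2)
        self.work_res = work_res

    def forward(self, u):                           # u: (batch, ch, n), any n
        n = u.shape[-1]
        v = F.interpolate(u, size=self.work_res, mode="linear", align_corners=False)
        v = torch.relu(self.conv(v))
        return F.interpolate(v, size=n, mode="linear", align_corners=False)

block = CNOBlockSketch()
print(block(torch.randn(2, 8, 50)).shape, block(torch.randn(2, 8, 200)).shape)
```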
arXiv Detail & Related papers (2023-02-02T15:54:45Z)
- Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which largely increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer, more informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
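One plausible reading of the bottleneck encoder (a Perceiver-style cross-attention, which is our assumption, not the paper's stated design): a small set of learned latent tokens attends to an arbitrarily large set of context points, so downstream cost no longer scales with the number of context tokens.

```python
import torch

class BottleneckEncoder(torch.nn.Module):
    """Cross-attention from n_latents learned queries to the full context set;
    the output always has n_latents tokens, however many points come in."""
    def __init__(self, n_latents=16, dim=64, heads=4):
        super().__init__()
        self.latents = torch.nn.Parameter(torch.randn(n_latents, dim))
        self.attn = torch.nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, context):                     # context: (batch, n_points, dim)
        q = self.latents.unsqueeze(0).expand(context.size(0), -1, -1)
        tokens, _ = self.attn(q, context, context)
        return tokens                               # (batch, n_latents, dim)

enc = BottleneckEncoder()
print(enc(torch.randn(2, 500, 64)).shape)           # torch.Size([2, 16, 64])
```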
arXiv Detail & Related papers (2023-01-21T04:08:46Z)
- Universal Approximation Property of Fully Convolutional Neural Networks with Zero Padding [10.295288663157393]
CNNs function as tensor-to-tensor mappings, preserving the spatial structure of input data.
We show that CNNs can approximate arbitrary continuous functions in cases where both the input and output values exhibit the same spatial shape.
We also verify that deep, narrow CNNs possess the UAP as tensor-to-tensor functions.
arXiv Detail & Related papers (2022-11-18T02:04:16Z)
- What Can Be Learnt With Wide Convolutional Neural Networks? [69.55323565255631]
We study infinitely-wide deep CNNs in the kernel regime.
We prove that deep CNNs adapt to the spatial scale of the target function.
We conclude by computing the generalisation error of a deep CNN trained on the output of another deep CNN.
arXiv Detail & Related papers (2022-08-01T17:19:32Z)
- Variable Bitrate Neural Fields [75.24672452527795]
We present a dictionary method for compressing feature grids, reducing their memory consumption by up to 100x.
We formulate the dictionary optimization as a vector-quantized auto-decoder problem which lets us learn end-to-end discrete neural representations in a space where no direct supervision is available.
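A toy sketch of the vector-quantized auto-decoder idea under our own simplifications: each grid cell stores only a code index into a small learned codebook, and a softmax relaxation stands in for the paper's discrete optimization.

```python
import torch

class VQFeatureGrid(torch.nn.Module):
    """Compressed feature grid: at export time each cell keeps one integer
    index (log2(codebook_size) bits) instead of feat_dim floats."""
    def __init__(self, n_cells=4096, codebook_size=256, feat_dim=16):
        super().__init__()
        self.codebook = torch.nn.Parameter(torch.randn(codebook_size, feat_dim))
        # per-cell assignment logits, trained end-to-end (auto-decoder style)
        self.logits = torch.nn.Parameter(torch.zeros(n_cells, codebook_size))

    def forward(self):
        # soft lookup during training; hard argmax indices are stored at export
        return torch.softmax(self.logits, dim=-1) @ self.codebook

    def export(self):
        return self.logits.argmax(dim=-1)           # (n_cells,) integer codes

grid = VQFeatureGrid()
print(grid().shape, grid.export().dtype)            # (4096, 16), torch.int64
```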
arXiv Detail & Related papers (2022-06-15T17:58:34Z)
- Enhanced DeepONet for Modeling Partial Differential Operators Considering Multiple Input Functions [5.819397109258169]
A deep operator network (DeepONet) was proposed to model general non-linear continuous operators for partial differential equations (PDEs).
The existing DeepONet can accept only one input function, which limits its applications.
We propose a new high-level neural network structure, the Enhanced DeepONet (EDeepONet), in which two input functions are represented by two branch sub-networks.
Our numerical results on two partial differential equation examples show that the proposed EDeepONet is about 7X-17X, or about one order of magnitude, more accurate than a fully connected neural network.
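A minimal sketch of a two-branch DeepONet in this spirit; the elementwise-product fusion of the branch outputs below is our assumption about the combination rule.

```python
import torch

class EDeepONetSketch(torch.nn.Module):
    """Two branch nets, one per input function, plus a trunk net for the query
    point; branch outputs are fused elementwise and dotted with the trunk."""
    def __init__(self, m1=100, m2=100, p=64):
        super().__init__()
        mlp = lambda d: torch.nn.Sequential(
            torch.nn.Linear(d, 128), torch.nn.Tanh(), torch.nn.Linear(128, p))
        self.branch1, self.branch2, self.trunk = mlp(m1), mlp(m2), mlp(1)

    def forward(self, u1, u2, y):
        # u1: (b, m1) and u2: (b, m2) sensor values; y: (b, 1) query location
        b = self.branch1(u1) * self.branch2(u2)     # fuse the two input functions
        return (b * self.trunk(y)).sum(-1, keepdim=True)

net = EDeepONetSketch()
out = net(torch.randn(4, 100), torch.randn(4, 100), torch.rand(4, 1))
print(out.shape)                                    # torch.Size([4, 1])
```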
arXiv Detail & Related papers (2022-02-17T23:58:23Z)
- Deep Parametric Continuous Convolutional Neural Networks [92.87547731907176]
Parametric Continuous Convolution is a new learnable operator that operates over non-grid structured data.
Our experiments show significant improvement over the state-of-the-art in point cloud segmentation of indoor and outdoor scenes.
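A sketch of the operator as we understand it: instead of a fixed stencil, the kernel is an MLP evaluated at continuous relative offsets, so it applies to arbitrary point neighborhoods (names and shapes below are our own).

```python
import torch

class ParametricContinuousConv(torch.nn.Module):
    """Continuous convolution over non-grid data: kernel weights are generated
    by an MLP at each relative offset x_i - x_j, then applied to neighbors."""
    def __init__(self, c_in=8, c_out=16, d=3):
        super().__init__()
        self.kernel = torch.nn.Sequential(
            torch.nn.Linear(d, 32), torch.nn.ReLU(),
            torch.nn.Linear(32, c_in * c_out))
        self.c_in, self.c_out = c_in, c_out

    def forward(self, feats, offsets):
        # feats: (n, k, c_in) neighbor features; offsets: (n, k, d)
        n, k, _ = offsets.shape
        W = self.kernel(offsets).view(n, k, self.c_out, self.c_in)
        return torch.einsum("nkoi,nki->no", W, feats)  # sum over neighbors

conv = ParametricContinuousConv()
print(conv(torch.randn(100, 9, 8), torch.randn(100, 9, 3)).shape)  # (100, 16)
```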
arXiv Detail & Related papers (2021-01-17T18:28:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.