CoNO: Complex Neural Operator for Continuous Dynamical Physical Systems
- URL: http://arxiv.org/abs/2406.02597v1
- Date: Sat, 1 Jun 2024 14:32:19 GMT
- Title: CoNO: Complex Neural Operator for Continuous Dynamical Physical Systems
- Authors: Karn Tiwari, N M Anoop Krishnan, A P Prathosh
- Abstract summary: We introduce the Complex Neural Operator (CoNO), which parameterizes the integral kernel using the Fractional Fourier Transform (FrFT).
Empirically, CoNO consistently attains state-of-the-art performance, showcasing an average relative gain of 10.9%.
CoNO also exhibits the ability to learn from small amounts of data -- giving the same performance as the next best model with just 60% of the training data.
- Score: 4.963536645449426
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural operators extend data-driven models to map between infinite-dimensional functional spaces. While these operators perform effectively in either the time or frequency domain, their performance may be limited when applied to non-stationary spatial or temporal signals whose frequency characteristics change with time. Here, we introduce the Complex Neural Operator (CoNO), which parameterizes the integral kernel using the Fractional Fourier Transform (FrFT), better representing non-stationary signals in a complex-valued domain. Theoretically, we prove the universal approximation capability of CoNO. We perform an extensive empirical evaluation of CoNO on seven challenging partial differential equations (PDEs) spanning regular grids, structured meshes, and point clouds. Empirically, CoNO consistently attains state-of-the-art performance, showcasing an average relative gain of 10.9%. Further, CoNO outperforms all other models on additional tasks such as zero-shot super-resolution and robustness to noise. CoNO also exhibits the ability to learn from small amounts of data -- matching the performance of the next best model with just 60% of the training data. Altogether, CoNO presents a robust and superior model for modeling continuous dynamical systems, providing a fillip to scientific machine learning.
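The core idea can be sketched in a few lines: an FNO-style spectral multiplication carried out in the fractional Fourier domain instead of the ordinary Fourier domain. The following is a minimal sketch, not the authors' implementation; the discrete FrFT is realized as a fractional power of the unitary DFT matrix (one of several discrete FrFT definitions), and the complex multiplier `R` is a stand-in for the learnable kernel parameters.

```python
import numpy as np
from scipy.linalg import dft

def frft_matrix(n, a):
    """Discrete FrFT of order `a` as the a-th power of the unitary DFT
    matrix, via eigendecomposition (one of several discrete FrFT
    definitions; CoNO's exact discretization may differ)."""
    F = dft(n, scale="sqrtn")           # unitary DFT matrix
    w, V = np.linalg.eig(F)             # eigenvalues lie on the unit circle
    return V @ np.diag(w**a) @ np.linalg.inv(V)

def frft_spectral_layer(x, R, a):
    """FNO-style spectral convolution in the fractional Fourier domain:
    y = F^{-a} (R * (F^{a} x)), with a complex multiplier R."""
    n = x.shape[-1]
    y = frft_matrix(n, -a) @ (R * (frft_matrix(n, a) @ x))
    return y.real                       # keep the real part for real-valued fields

rng = np.random.default_rng(0)
n, a = 64, 0.5                          # grid size, fractional order
x = rng.standard_normal(n)              # discretized input function
R = rng.standard_normal(n) + 1j * rng.standard_normal(n)  # stand-in for learned weights
print(frft_spectral_layer(x, R, a).shape)  # (64,)
```

With a = 1 this reduces to the usual Fourier-domain multiplication of an FNO layer; intermediate orders mix space and frequency, which is the property the paper exploits for non-stationary signals.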
Related papers
- Dilated convolution neural operator for multiscale partial differential equations [11.093527996062058]
We propose the Dilated Convolutional Neural Operator (DCNO) for multiscale partial differential equations.
The DCNO architecture effectively captures both high-frequency and low-frequency features while maintaining a low computational cost.
We show that DCNO strikes an optimal balance between accuracy and computational cost and offers a promising solution for multiscale operator learning.
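The high-frequency pathway of such an architecture can be pictured as a stack of convolutions with growing dilation, which widens the receptive field at low cost. The sketch below is a generic dilated-convolution branch with assumed shapes, not the DCNO authors' architecture.

```python
import torch
import torch.nn as nn

class DilatedBranch(nn.Module):
    """Stack of 1-D convolutions with increasing dilation: each layer
    widens the receptive field, helping capture multiscale structure."""
    def __init__(self, channels, dilations=(1, 3, 9)):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel_size=3,
                      padding=d, dilation=d)   # padding=d keeps the length
            for d in dilations
        )

    def forward(self, x):                      # x: (batch, channels, grid)
        for conv in self.convs:
            x = torch.relu(conv(x))
        return x

x = torch.randn(8, 16, 128)
print(DilatedBranch(16)(x).shape)              # torch.Size([8, 16, 128])
```

In a DCNO-like block, this branch would be combined with a spectral (Fourier) branch that handles the smooth, low-frequency content.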
arXiv Detail & Related papers (2024-07-16T08:17:02Z)
- Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
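The differential-operator claim has a familiar finite-difference analogue: a fixed 3-point convolution kernel scaled by 1/h^2 converges to the 1-D Laplacian as the grid spacing h shrinks. A minimal sketch (standard numerics, not the paper's construction):

```python
import numpy as np

def laplacian_via_conv(u, h):
    """A 3-point convolution kernel with values scaled by 1/h**2 acts as
    a discrete second derivative: (u[i-1] - 2*u[i] + u[i+1]) / h**2."""
    stencil = np.array([1.0, -2.0, 1.0]) / h**2
    return np.convolve(u, stencil, mode="valid")

n = 1000
x = np.linspace(0.0, 2.0 * np.pi, n)
h = x[1] - x[0]
u = np.sin(x)
approx = laplacian_via_conv(u, h)
exact = -np.sin(x[1:-1])                 # d^2/dx^2 sin = -sin
print(np.abs(approx - exact).max())      # small, and shrinks as h**2
```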
arXiv Detail & Related papers (2024-02-26T18:59:31Z)
- CoNO: Complex Neural Operator for Continuous Dynamical Systems [10.326780211731263]
We introduce a Complex Neural Operator (CoNO) that parameterizes the integral kernel in the complex fractional Fourier domain.
We show that the model effectively captures the underlying partial differential equation with a single complex fractional Fourier transform.
arXiv Detail & Related papers (2023-10-03T14:38:12Z)
- Spherical Fourier Neural Operators: Learning Stable Dynamics on the Sphere [53.63505583883769]
We introduce Spherical FNOs (SFNOs) for learning operators on spherical geometries.
SFNOs have important implications for machine learning-based simulation of climate dynamics.
arXiv Detail & Related papers (2023-06-06T16:27:17Z)
- Solving Seismic Wave Equations on Variable Velocity Models with Fourier Neural Operator [3.2307366446033945]
We propose the parallel Fourier neural operator (PFNO), a new framework for efficiently training FNO-based solvers.
Numerical experiments demonstrate the high accuracy of both FNO and PFNO with complicated velocity models.
PFNO admits higher computational efficiency on large-scale testing datasets, compared with the traditional finite-difference method.
arXiv Detail & Related papers (2022-09-25T22:25:57Z)
- Bounding The Rademacher Complexity of Fourier Neural Operator [3.4960814625958787]
The Fourier neural operator (FNO) is a physics-inspired machine learning method.
In this study, we investigate bounds on the Rademacher complexity of the FNO based on specific group norms.
In addition, we investigate the correlation between the empirical generalization error and the proposed capacity measure of the FNO.
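For reference, the empirical Rademacher complexity that such bounds control is defined over i.i.d. random signs (the standard definition, not the paper's specific group-norm bound):

```latex
\widehat{\mathfrak{R}}_S(\mathcal{F})
  = \mathbb{E}_{\sigma}\!\left[\sup_{f \in \mathcal{F}}
      \frac{1}{n}\sum_{i=1}^{n} \sigma_i\, f(x_i)\right],
\qquad \sigma_1,\dots,\sigma_n \overset{\text{i.i.d.}}{\sim} \mathrm{Unif}\{-1,+1\}.
```

Standard results bound the generalization gap by this quantity plus a concentration term, so a norm-based bound on the complexity of the FNO class translates into a generalization guarantee.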
arXiv Detail & Related papers (2022-09-12T11:11:43Z)
- Generative Adversarial Neural Operators [59.21759531471597]
We propose the generative adversarial neural operator (GANO), a generative model paradigm for learning probabilities on infinite-dimensional function spaces.
GANO consists of two main components, a generator neural operator and a discriminator neural functional.
We empirically study GANO in controlled cases where both input and output functions are samples from Gaussian random fields (GRFs), and compare its performance to its finite-dimensional counterpart, the GAN.
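A common way to draw GRF samples for such experiments is to filter white noise through a power-law spectrum. The sketch below uses a Matern-like spectrum with illustrative parameters alpha and tau; it is an assumption, not necessarily the paper's exact setup.

```python
import numpy as np

def sample_grf(n, alpha=2.0, tau=3.0, rng=None):
    """One sample of a 1-D Gaussian random field on n grid points,
    obtained by filtering white noise with the power-law spectrum
    (k^2 + tau^2)^(-alpha/2); larger alpha gives smoother samples."""
    rng = rng or np.random.default_rng()
    k = np.fft.fftfreq(n, d=1.0 / n)              # integer wavenumbers
    spectrum = (k**2 + tau**2) ** (-alpha / 2.0)
    noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    field = np.fft.ifft(noise * spectrum).real
    return field / field.std()

u = sample_grf(256, rng=np.random.default_rng(0))
print(u.shape)  # (256,)
```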
arXiv Detail & Related papers (2022-05-06T05:12:22Z)
- Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
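The factorization can be read as follows: instead of one joint spectral weight tensor over all 2-D Fourier modes (as in the original FNO), apply separate 1-D spectral multipliers along each spatial dimension and sum the results. The sketch below is a simplified reading with assumed shapes and names, not the released F-FNO code.

```python
import torch

def factorized_spectral_conv(x, w1, w2, modes):
    """Separate 1-D spectral multiplications along each spatial dimension,
    summed -- in place of a joint 2-D spectral weight tensor.
    x: (batch, c, h, w); w1, w2: (c, c, modes), complex."""
    b, c, h, w = x.shape
    # spectral multiplication along the h-dimension
    xf = torch.fft.rfft(x, dim=-2)                     # (b, c, h//2+1, w)
    yf = torch.zeros_like(xf)
    yf[:, :, :modes] = torch.einsum("bimw,iom->bomw", xf[:, :, :modes], w1)
    out = torch.fft.irfft(yf, n=h, dim=-2)
    # spectral multiplication along the w-dimension
    xf = torch.fft.rfft(x, dim=-1)                     # (b, c, h, w//2+1)
    yf = torch.zeros_like(xf)
    yf[..., :modes] = torch.einsum("bihm,iom->bohm", xf[..., :modes], w2)
    return out + torch.fft.irfft(yf, n=w, dim=-1)

c, modes = 8, 12
x = torch.randn(2, c, 64, 64)
w1 = 0.1 * torch.randn(c, c, modes, dtype=torch.cfloat)
w2 = 0.1 * torch.randn(c, c, modes, dtype=torch.cfloat)
print(factorized_spectral_conv(x, w1, w2, modes).shape)  # (2, 8, 64, 64)
```

Per-dimension weights reduce the spectral parameter count from O(modes^2) to O(modes) per channel pair, which is one source of the reported speedups.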
arXiv Detail & Related papers (2021-11-27T03:34:13Z)
- Incorporating NODE with Pre-trained Neural Differential Operator for Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO).
The NDO is pre-trained on a class of symbolic functions, learning the mapping from trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can closely approximate the ground-truth derivatives by properly tuning the complexity of the library.
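A toy version of that pre-training signal: sample functions from a simple symbolic library (random polynomials here, a hypothetical stand-in for the paper's library), evaluate them on a time grid, and pair each trajectory with its exact derivative.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 100)

def sample_polynomial_pair(degree=4):
    """One (trajectory, exact derivative) training pair drawn from a
    random polynomial -- a toy symbolic function library."""
    p = np.polynomial.Polynomial(rng.standard_normal(degree + 1))
    return p(t), p.deriv()(t)

# A pre-training batch: the NDO would learn the map X -> Y.
X, Y = map(np.stack, zip(*(sample_polynomial_pair() for _ in range(64))))
print(X.shape, Y.shape)   # (64, 100) (64, 100)
```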
arXiv Detail & Related papers (2021-06-08T08:04:47Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the stochasticity in its success remains unclear.
We show that multiplicative noise, which commonly arises from variance in the gradient estimates, induces heavy-tailed behaviour in the parameters.
A detailed analysis describes how key factors, including the step size and the data, shape this behaviour, with consistent results across state-of-the-art neural network models.
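The mechanism can be illustrated with the classic Kesten-type recurrence x_{t+1} = a_t * x_t + b_t: contractive on average (E[log|a|] < 0) yet occasionally expanding (|a| > 1), it drives the iterates to a heavy-tailed stationary distribution. The simulation below is a toy illustration of that effect, not the paper's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

n_chains, n_steps = 20_000, 2_000
x = np.zeros(n_chains)
for _ in range(n_steps):
    a = rng.normal(0.0, 0.9, n_chains)   # multiplicative noise, E[log|a|] < 0
    b = rng.normal(0.0, 1.0, n_chains)   # additive noise
    x = a * x + b                        # Kesten recurrence

# The tail probability P(|x| > u) decays polynomially, not exponentially.
for u in (5, 10, 20, 40):
    print(u, np.mean(np.abs(x) > u))
```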
arXiv Detail & Related papers (2020-06-11T09:58:01Z)