CoNO: Complex Neural Operator for Continuous Dynamical Systems
- URL: http://arxiv.org/abs/2310.02094v2
- Date: Wed, 4 Oct 2023 06:48:53 GMT
- Title: CoNO: Complex Neural Operator for Continuous Dynamical Systems
- Authors: Karn Tiwari, N M Anoop Krishnan, Prathosh A P
- Abstract summary: We introduce a Complex Neural Operator (CoNO) that parameterizes the integral kernel in the complex fractional Fourier domain.
We show that the model effectively captures the underlying partial differential equation with a single complex fractional Fourier transform.
- Score: 10.326780211731263
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural operators extend data-driven models to map between
infinite-dimensional functional spaces. These models have successfully solved
continuous dynamical systems represented by differential equations, such as
weather forecasting, fluid flow, and solid mechanics. However, existing operators
still operate in real space, thereby losing rich representations that functional
transforms could capture in the complex domain. In this paper, we
introduce a Complex Neural Operator (CoNO) that parameterizes the integral
kernel in the complex fractional Fourier domain. Additionally, the model
employs a complex-valued neural network together with aliasing-free activation
functions, preserving complex values and complex algebraic properties and
thereby enabling improved representation, robustness to noise, and
generalization. We show that the model effectively captures the underlying
partial differential equation with a single complex fractional Fourier
transform. We perform an extensive empirical evaluation of CoNO on several
datasets and additional tasks such as zero-shot super-resolution, evaluation of
out-of-distribution data, data efficiency, and robustness to noise. CoNO
exhibits performance comparable or superior to state-of-the-art models
across these tasks. Altogether, CoNO offers a robust and superior approach for
modeling continuous dynamical systems, providing a fillip to scientific machine
learning.
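
For intuition, the sketch below shows what a CoNO-style spectral layer might look like in PyTorch: the input field is mapped into a fractional Fourier domain, a learnable complex-valued kernel mixes channels on a truncated set of fractional-frequency modes, and the result is mapped back and combined with a pointwise skip path, in the spirit of an FNO layer. This is a minimal illustration under stated assumptions, not the authors' implementation: the discrete FrFT is realized here as an eigendecomposition-based fractional power of the unitary DFT matrix (only one of several possible discretizations), and the fractional order `alpha`, channel width, mode cutoff, and the placeholder GELU activation (the paper uses aliasing-free, complex-preserving activations) are illustrative choices rather than values from the paper.

```python
# Minimal sketch of a CoNO-style layer (illustrative only, not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F


def dfrft_matrix(n: int, alpha: float) -> torch.Tensor:
    """Discrete fractional Fourier transform of order `alpha`, realized as a
    fractional power of the unitary DFT matrix (one possible discretization)."""
    k = torch.arange(n, dtype=torch.float64)
    theta = -2.0 * torch.pi * torch.outer(k, k) / n
    dft = torch.polar(torch.ones_like(theta), theta) / n ** 0.5   # unitary DFT
    evals, evecs = torch.linalg.eig(dft)
    frac_evals = torch.exp(alpha * torch.log(evals))              # principal branch
    frft = evecs @ torch.diag(frac_evals) @ torch.linalg.inv(evecs)
    return frft.to(torch.cfloat)


class FrFTSpectralLayer1d(nn.Module):
    """FrFT -> complex mode mixing -> inverse FrFT, plus a pointwise skip path,
    analogous to an FNO layer but acting in a fractional Fourier domain."""

    def __init__(self, channels: int, grid_size: int, modes: int, alpha: float = 0.5):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        # Learnable complex kernel acting on the retained fractional-frequency bins.
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )
        self.skip = nn.Conv1d(channels, channels, kernel_size=1)
        self.register_buffer("frft", dfrft_matrix(grid_size, alpha))
        self.register_buffer("ifrft", dfrft_matrix(grid_size, -alpha))

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, channels, grid_size), a real-valued field on a uniform grid.
        z = torch.einsum("nm,bcm->bcn", self.frft, u.to(torch.cfloat))
        mixed = torch.zeros_like(z)
        # Channel mixing on the first `modes` fractional-frequency bins only.
        mixed[..., : self.modes] = torch.einsum(
            "iox,bix->box", self.weight, z[..., : self.modes]
        )
        u_spec = torch.einsum("nm,bcm->bcn", self.ifrft, mixed).real
        # Placeholder activation; the paper uses aliasing-free, complex-preserving ones.
        return F.gelu(u_spec + self.skip(u))


# Example usage (hypothetical shapes): a batch of 8 fields, 32 channels, 64 grid points.
layer = FrFTSpectralLayer1d(channels=32, grid_size=64, modes=16, alpha=0.5)
out = layer(torch.randn(8, 32, 64))   # -> (8, 32, 64)
```

In a full operator, several such layers would typically be stacked between a lifting map and a projection map; CoNO additionally keeps the intermediate representation complex-valued throughout, whereas this sketch returns a real field for simplicity.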
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - Dilated convolution neural operator for multiscale partial differential equations [11.093527996062058]
We propose the Dilated Convolutional Neural Operator (DCNO) for multiscale partial differential equations.
The DCNO architecture effectively captures both high-frequency and low-frequency features while maintaining a low computational cost.
We show that DCNO strikes an optimal balance between accuracy and computational cost and offers a promising solution for multiscale operator learning.
arXiv Detail & Related papers (2024-07-16T08:17:02Z) - CoNO: Complex Neural Operator for Continuous Dynamical Physical Systems [4.963536645449426]
We introduce the Complex Neural Operator (CoNO), which parameterizes the integral kernel using the Fractional Fourier Transform (FrFT).
Empirically, CoNO consistently attains state-of-the-art performance, showcasing an average relative gain of 10.9%.
CoNO also exhibits the ability to learn from small amounts of data -- giving the same performance as the next best model with just 60% of the training data.
arXiv Detail & Related papers (2024-06-01T14:32:19Z) - Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
arXiv Detail & Related papers (2024-02-26T18:59:31Z) - Generalization Error Guaranteed Auto-Encoder-Based Nonlinear Model
Reduction for Operator Learning [12.124206935054389]
In this paper, we utilize low-dimensional nonlinear structures in model reduction by investigating Auto-Encoder-based Neural Network (AENet)
Our numerical experiments validate the ability of AENet to accurately learn the solution operator of nonlinear partial differential equations.
Our theoretical framework shows that the sample complexity of training AENet is intricately tied to the intrinsic dimension of the modeled process.
arXiv Detail & Related papers (2024-01-19T05:01:43Z) - Peridynamic Neural Operators: A Data-Driven Nonlocal Constitutive Model
for Complex Material Responses [12.454290779121383]
We introduce a novel integral neural operator architecture called the Peridynamic Neural Operator (PNO) that learns a nonlocal law from data.
This neural operator provides a forward model in the form of state-based peridynamics, with objectivity and momentum balance laws automatically guaranteed.
We show that, owing to its ability to capture complex responses, our learned neural operator achieves improved accuracy and efficiency compared to baseline models.
arXiv Detail & Related papers (2024-01-11T17:37:20Z) - Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z) - Neural Integral Equations [3.087238735145305]
We introduce a method for learning unknown integral operators from data using an IE solver.
We also present Attentional Neural Integral Equations (ANIE), which replaces the integral with self-attention.
arXiv Detail & Related papers (2022-09-30T02:32:17Z) - Closed-form Continuous-Depth Models [99.40335716948101]
Continuous-depth neural models rely on advanced numerical differential equation solvers.
We present a new family of models, termed Closed-form Continuous-depth (CfC) networks, that are simple to describe and at least one order of magnitude faster.
arXiv Detail & Related papers (2021-06-25T22:08:51Z) - On Function Approximation in Reinforcement Learning: Optimism in the
Face of Large State Spaces [208.67848059021915]
We study the exploration-exploitation tradeoff at the core of reinforcement learning.
In particular, we prove that the complexity of the function class $\mathcal{F}$ characterizes the complexity of the function.
Our regret bounds are independent of the number of episodes.
arXiv Detail & Related papers (2020-11-09T18:32:22Z) - UNIPoint: Universally Approximating Point Processes Intensities [125.08205865536577]
We provide a proof that a class of learnable functions can universally approximate any valid intensity function.
We implement UNIPoint, a novel neural point process model, using recurrent neural networks to parameterise sums of basis functions upon each event.
arXiv Detail & Related papers (2020-07-28T09:31:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.