Fourier Neural Operators for Non-Markovian Processes: Approximation Theorems and Experiments
- URL: http://arxiv.org/abs/2507.17887v1
- Date: Wed, 23 Jul 2025 19:30:34 GMT
- Title: Fourier Neural Operators for Non-Markovian Processes: Approximation Theorems and Experiments
- Authors: Wonjae Lee, Taeyoung Kim, Hyungbin Park
- Abstract summary: This paper introduces an operator-based neural network, the mirror-padded Fourier neural operator (MFNO). MFNO extends the standard Fourier neural operator (FNO) by incorporating mirror padding, enabling it to handle non-periodic inputs. We rigorously prove that MFNOs can approximate solutions of path-dependent stochastic differential equations and Lipschitz transformations of fractional Brownian motions to an arbitrary degree of accuracy.
- Score: 2.84475965465923
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces an operator-based neural network, the mirror-padded Fourier neural operator (MFNO), designed to learn the dynamics of stochastic systems. MFNO extends the standard Fourier neural operator (FNO) by incorporating mirror padding, enabling it to handle non-periodic inputs. We rigorously prove that MFNOs can approximate solutions of path-dependent stochastic differential equations and Lipschitz transformations of fractional Brownian motions to an arbitrary degree of accuracy. Our theoretical analysis builds on Wong--Zakai type theorems and various approximation techniques. Empirically, the MFNO exhibits strong resolution generalization--a property rarely seen in standard architectures such as LSTMs, TCNs, and DeepONet. Furthermore, our model achieves performance that is comparable or superior to these baselines while offering significantly faster sample path generation than classical numerical schemes.
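To make the mirror-padding idea concrete, here is a minimal 1D sketch (not the authors' implementation; the helper names, the random placeholder weights, and the crop-back step are illustrative assumptions). Reflecting the input removes the artificial jump at the boundary, so the FFT-based spectral layer sees a periodic-friendly signal:

```python
import numpy as np

def mirror_pad(u):
    """Reflect the path so the padded signal [u, reversed(u)] is
    continuous at both ends and hence periodic-friendly."""
    return np.concatenate([u, u[::-1]])

def spectral_conv(u, weights):
    """One FNO-style spectral convolution: FFT, multiply the lowest
    len(weights) modes by complex weights, zero the rest, inverse FFT."""
    u_hat = np.fft.rfft(u)
    out_hat = np.zeros_like(u_hat)
    out_hat[: len(weights)] = u_hat[: len(weights)] * weights
    return np.fft.irfft(out_hat, n=len(u))

rng = np.random.default_rng(0)
path = np.cumsum(rng.standard_normal(256)) / 16.0   # non-periodic input
weights = rng.standard_normal(32) + 1j * rng.standard_normal(32)

padded = mirror_pad(path)                           # length 512, no boundary jump
out = spectral_conv(padded, weights)[: len(path)]   # crop back to the domain
```

Cropping back to the original length discards the reflected half, so downstream layers only ever see the physical domain.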
Related papers
- Hilbert Neural Operator: Operator Learning in the Analytic Signal Domain [0.0]
We introduce the Hilbert Neural Operator (HNO), a new neural operator architecture. HNO operates by first mapping the input signal to its analytic representation via the Hilbert transform. We hypothesize that this architecture enables HNO to model operators more effectively for causal, phase-sensitive, and non-stationary systems.
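For reference, the analytic representation mentioned above can be computed with the standard FFT construction (the textbook algorithm, essentially what `scipy.signal.hilbert` implements; HNO's internal layers are not shown here):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal x + i*H[x]: zero the negative frequencies and
    double the positive ones in the FFT, then invert."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

t = np.linspace(0.0, 1.0, 512, endpoint=False)
x = np.cos(2 * np.pi * 5 * t)
z = analytic_signal(x)
envelope = np.abs(z)             # instantaneous amplitude
phase = np.unwrap(np.angle(z))   # instantaneous phase
```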
arXiv Detail & Related papers (2025-08-06T21:12:15Z) - DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [60.58067866537143]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis. To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers. Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
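The summary does not spell out the ProdLayer, so the following is only a guess at the general shape of such a layer (the class name, pairing scheme, and shapes are all hypothetical): a pointwise linear map augmented with learned products of channel projections, letting the layer express dimensionally meaningful quantities such as products of physical fields.

```python
import torch
import torch.nn as nn

class ProdLayer(nn.Module):
    """Hypothetical sketch: linear term plus learned pairwise products
    of channel projections. The published ProdLayer may differ."""
    def __init__(self, channels, n_pairs):
        super().__init__()
        self.linear = nn.Linear(channels, channels)
        self.left = nn.Linear(channels, n_pairs)
        self.right = nn.Linear(channels, n_pairs)
        self.mix = nn.Linear(n_pairs, channels)

    def forward(self, x):                      # x: (batch, points, channels)
        prod = self.left(x) * self.right(x)    # products of projections
        return self.linear(x) + self.mix(prod)

layer = ProdLayer(channels=32, n_pairs=8)
y = layer(torch.randn(4, 128, 32))             # shape is preserved
```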
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - Component Fourier Neural Operator for Singularly Perturbed Differential Equations [3.9482103923304877]
Solving Singularly Perturbed Differential Equations (SPDEs) poses computational challenges arising from the rapid transitions in their solutions within thin regions.
In this manuscript, we introduce the Component Fourier Neural Operator (ComFNO), an innovative operator learning method that builds upon the Fourier Neural Operator (FNO).
Our approach is not limited to FNO and can be applied to other neural network frameworks, such as the Deep Operator Network (DeepONet).
arXiv Detail & Related papers (2024-09-07T09:40:51Z) - The Convex Landscape of Neural Networks: Characterizing Global Optima
and Stationary Points via Lasso Models [75.33431791218302]
Deep Neural Network (DNN) models are widely used, but their training objective is non-convex.
In this paper we examine the use of convex neural recovery models.
We show that all stationary points of the non-convex objective can be characterized as global optima of a subsampled convex program that standard convex solvers can handle.
arXiv Detail & Related papers (2023-12-19T23:04:56Z) - Beyond Regular Grids: Fourier-Based Neural Operators on Arbitrary Domains [13.56018270837999]
We propose a simple method to extend neural operators to arbitrary domains.
An efficient implementation of such direct spectral evaluations is coupled with existing neural operator models.
We demonstrate that the proposed method allows us to extend neural operators to arbitrary point distributions with significant gains in training speed over baselines.
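The direct spectral evaluation can be illustrated with a plain non-uniform discrete Fourier transform, computed by explicit matrix products instead of an FFT (a sketch in my own notation; the paper's quadrature weights and implementation may differ):

```python
import numpy as np

def direct_forward(points, values, freqs):
    """c_k ≈ (1/N) * sum_j f(x_j) exp(-2πi k x_j) for scattered x_j in [0, 1)."""
    phases = np.exp(-2j * np.pi * np.outer(freqs, points))   # (K, N)
    return phases @ values / len(points)

def direct_inverse(points, coeffs, freqs):
    """Evaluate sum_k c_k exp(+2πi k x) back at the scattered points."""
    phases = np.exp(2j * np.pi * np.outer(points, freqs))    # (N, K)
    return phases @ coeffs

rng = np.random.default_rng(1)
x = np.sort(rng.random(200))                   # arbitrary, non-grid points
f = np.sin(2 * np.pi * 3 * x) + 0.5 * np.cos(2 * np.pi * 7 * x)
k = np.arange(-16, 17)                         # retained Fourier modes

c = direct_forward(x, f, k)                    # O(N*K) rather than O(N log N)
f_rec = direct_inverse(x, c, k).real           # approximate reconstruction
```

The cost is quadratic in the number of points and modes, but with the modest mode counts neural operators keep, these matrix products remain cheap and GPU-friendly.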
arXiv Detail & Related papers (2023-05-31T09:01:20Z) - Deep Stochastic Processes via Functional Markov Transition Operators [59.55961312230447]
We introduce a new class of Stochastic Processes (SPs) constructed by stacking sequences of neural-parameterised Markov transition operators in function space.
We prove that these Markov transition operators can preserve the exchangeability and consistency of SPs.
arXiv Detail & Related papers (2023-05-24T21:15:23Z) - Nonlocality and Nonlinearity Implies Universality in Operator Learning [8.83910715280152]
Neural operator architectures approximate operators between infinite-dimensional Banach spaces of functions.
It is clear that any general approximation of operators between spaces of functions must be both nonlocal and nonlinear.
We show how these two attributes may be combined in a simple way to deduce universal approximation.
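The recipe can be caricatured in a few lines: a layer combining a pointwise (nonlinear) map with a single nonlocal term, here simply the spatial mean (a sketch; the paper's precise construction is not reproduced):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NonlocalNonlinearLayer(nn.Module):
    """Pointwise affine map plus one nonlocal term (the spatial mean),
    followed by a pointwise nonlinearity; per the paper's thesis,
    stacking layers of this kind already yields universality."""
    def __init__(self, channels):
        super().__init__()
        self.local = nn.Linear(channels, channels)
        self.global_ = nn.Linear(channels, channels)

    def forward(self, u):                       # u: (batch, points, channels)
        mean = u.mean(dim=1, keepdim=True)      # the nonlocal ingredient
        return F.gelu(self.local(u) + self.global_(mean))

layer = NonlocalNonlinearLayer(channels=16)
v = layer(torch.randn(8, 100, 16))              # same shape out
```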
arXiv Detail & Related papers (2023-04-26T01:03:11Z) - Resolution-Invariant Image Classification based on Fourier Neural
Operators [1.3190581566723918]
We investigate the use of Fourier Neural Operators (FNOs) for image classification in comparison to standard Convolutional Neural Networks (CNNs).
We derive the FNO architecture as an example for continuous and Fréchet-differentiable neural operators on Lebesgue spaces.
arXiv Detail & Related papers (2023-04-02T10:23:36Z) - Bounding The Rademacher Complexity of Fourier Neural Operator [3.4960814625958787]
A Fourier neural operator (FNO) is one of the physics-inspired machine learning methods.
In this study, we bound the Rademacher complexity of the FNO in terms of specific group norms.
In addition, we investigated the correlation between the empirical generalization error and the proposed capacity of FNO.
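For orientation, the empirical Rademacher complexity such capacity bounds control is the standard quantity (notation mine, not necessarily the paper's):

```latex
\widehat{\mathcal{R}}_S(\mathcal{F})
  = \mathbb{E}_{\sigma}\!\left[\sup_{f \in \mathcal{F}}
      \frac{1}{n}\sum_{i=1}^{n} \sigma_i f(x_i)\right],
\qquad \sigma_1,\dots,\sigma_n \overset{\text{i.i.d.}}{\sim} \mathrm{Unif}\{-1,+1\},
```

where S = (x_1, ..., x_n) is the sample; standard results then bound the generalization gap by a constant multiple of this quantity plus an O(sqrt(log(1/δ)/n)) term.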
arXiv Detail & Related papers (2022-09-12T11:11:43Z) - SymNMF-Net for The Symmetric NMF Problem [62.44067422984995]
We propose a neural network called SymNMF-Net for the Symmetric NMF problem.
We show that the inference of each block corresponds to a single iteration of the optimization.
Empirical results on real-world datasets demonstrate the superiority of our SymNMF-Net.
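Unrolling here means each network block mimics one step of a classical iteration for min_{H ≥ 0} ||A − HHᵀ||_F². One standard choice is the damped multiplicative update sketched below (whether SymNMF-Net unrolls exactly this update is an assumption on my part):

```python
import numpy as np

def symnmf_step(A, H, beta=0.5):
    """One damped multiplicative update for symmetric NMF:
    H <- H * (1 - beta + beta * (A H) / (H H^T H))."""
    num = A @ H
    den = H @ (H.T @ H) + 1e-12        # avoid division by zero
    return H * (1.0 - beta + beta * num / den)

rng = np.random.default_rng(2)
W = np.abs(rng.standard_normal((50, 4)))
A = W @ W.T                            # symmetric nonnegative target
H = np.abs(rng.standard_normal((50, 4)))
for _ in range(200):
    H = symnmf_step(A, H)
print(np.linalg.norm(A - H @ H.T))     # residual shrinks over iterations
```

A learned unrolling replaces fixed quantities like beta with trainable parameters per block, which is what lets a few blocks match many classical iterations.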
arXiv Detail & Related papers (2022-05-26T08:17:39Z) - Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
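The factorization idea can be sketched as follows (my reading, not the reference implementation; the published F-FNO also shares weights across layers and differs in details): apply 1D spectral weights along each axis independently and sum, instead of one dense weight tensor over joint (kx, ky) modes.

```python
import torch
import torch.nn as nn

class FactorizedSpectralConv2d(nn.Module):
    """Dimension-wise factorized spectral convolution: parameters drop
    from O(m^2 c^2) for joint 2D modes to O(m c^2) per dimension."""
    def __init__(self, channels, modes):
        super().__init__()
        scale = 1.0 / channels
        self.wx = nn.Parameter(scale * torch.randn(modes, channels, channels, dtype=torch.cfloat))
        self.wy = nn.Parameter(scale * torch.randn(modes, channels, channels, dtype=torch.cfloat))
        self.modes = modes

    def forward(self, u):                          # u: (batch, c, nx, ny)
        m, (nx, ny) = self.modes, u.shape[-2:]
        ux = torch.fft.rfft(u, dim=-2)             # FFT along x only
        ox = torch.zeros_like(ux)
        ox[:, :, :m] = torch.einsum("bcxy,xcd->bdxy", ux[:, :, :m], self.wx)
        uy = torch.fft.rfft(u, dim=-1)             # FFT along y only
        oy = torch.zeros_like(uy)
        oy[..., :m] = torch.einsum("bcxy,ycd->bdxy", uy[..., :m], self.wy)
        return torch.fft.irfft(ox, n=nx, dim=-2) + torch.fft.irfft(oy, n=ny, dim=-1)

conv = FactorizedSpectralConv2d(channels=16, modes=12)
out = conv(torch.randn(2, 16, 64, 64))             # same spatial shape
```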
arXiv Detail & Related papers (2021-11-27T03:34:13Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
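In the usual conditional-moment formulation, such a min-max game takes a form like the following, with f the structural-function network and u the adversarial test-function network over instruments Z (my notation; the paper's exact objective and regularization may differ):

```latex
\min_{f \in \mathcal{F}} \; \max_{u \in \mathcal{U}} \;
\mathbb{E}\!\left[\, u(Z)\,\bigl(Y - f(X)\bigr) \right]
\;-\; \frac{1}{2}\,\mathbb{E}\!\left[\, u(Z)^2 \right],
```

where the inner maximization scores violations of the moment condition E[Y − f(X) | Z] = 0 and the quadratic penalty keeps the adversary bounded.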
arXiv Detail & Related papers (2020-07-02T17:55:47Z)