Hilbert Neural Operator: Operator Learning in the Analytic Signal Domain
- URL: http://arxiv.org/abs/2508.04882v1
- Date: Wed, 06 Aug 2025 21:12:15 GMT
- Title: Hilbert Neural Operator: Operator Learning in the Analytic Signal Domain
- Authors: Saman Pordanesh, Pejman Shahsavari, Hossein Ghadjari
- Abstract summary: We introduce the Hilbert Neural Operator (HNO), a new neural operator architecture that incorporates a strong inductive bias from signal processing.
HNO operates by first mapping the input signal to its analytic representation via the Hilbert transform.
We hypothesize that this architecture enables HNO to model operators more effectively for causal, phase-sensitive, and non-stationary systems.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural operators have emerged as a powerful, data-driven paradigm for learning solution operators of partial differential equations (PDEs). State-of-the-art architectures, such as the Fourier Neural Operator (FNO), have achieved remarkable success by performing convolutions in the frequency domain, making them highly effective for a wide range of problems. However, this approach has limitations, including the Fourier transform's implicit assumption of periodicity. Moreover, a signal can be analysed from perspectives beyond amplitude and phase alone, and these perspectives supply additional information that can help a network learn effectively. We introduce the \textbf{Hilbert Neural Operator (HNO)}, a new neural operator architecture that addresses these limitations by incorporating a strong inductive bias from signal processing. HNO operates by first mapping the input signal to its analytic representation via the Hilbert transform, thereby making instantaneous amplitude and phase information explicit features for the learning process. The core learnable operation -- a spectral convolution -- is then applied to this Hilbert-transformed representation. We hypothesize that this architecture enables HNO to model operators more effectively for causal, phase-sensitive, and non-stationary systems. We formalize the HNO architecture and provide the theoretical motivation for its design, rooted in analytic signal theory.
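The pipeline the abstract describes (analytic representation via the Hilbert transform, then a learnable spectral convolution) can be made concrete. Below is a minimal PyTorch sketch of that reading, not the authors' implementation; the class name `HilbertSpectralConv1d` and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn

def analytic_signal(x):
    """Analytic representation x_a = x + i*H[x] via the FFT method:
    keep DC, double positive frequencies, zero negative ones."""
    n = x.shape[-1]
    X = torch.fft.fft(x, dim=-1)
    h = torch.zeros(n, dtype=X.dtype, device=x.device)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return torch.fft.ifft(X * h, dim=-1)   # real part is x, imag part is H[x]

class HilbertSpectralConv1d(nn.Module):
    """HNO-style layer per the abstract: spectral convolution applied to
    the analytic signal. Name and details are illustrative assumptions."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, x):               # x: (batch, channels, n), real-valued
        xa = analytic_signal(x)         # complex analytic representation
        Xa = torch.fft.fft(xa, dim=-1)
        out = torch.zeros_like(Xa)
        # learnable mixing of the lowest `modes` frequencies, as in FNO
        out[..., :self.modes] = torch.einsum(
            "iok,bik->bok", self.weight, Xa[..., :self.modes])
        return torch.fft.ifft(out, dim=-1).real
```

The analytic-signal filter above is the same construction `scipy.signal.hilbert` uses; making the representation complex is what exposes instantaneous amplitude (|x_a|) and phase (arg x_a) to the learnable spectral step.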
Related papers
- Fourier Neural Operators for Non-Markovian Processes:Approximation Theorems and Experiments [2.84475965465923]
This paper introduces an operator-based neural network, the mirror-padded Fourier neural operator (MFNO).
MFNO extends the standard Fourier neural operator (FNO) by incorporating mirror padding, enabling it to handle non-periodic inputs.
We rigorously prove that MFNOs can approximate solutions of path-dependent differential equations and transformations of fractional Brownian motions to an arbitrary degree of accuracy.
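The abstract does not spell out the padding scheme, but mirror padding commonly means reflecting the signal so the extended input is continuous at the boundaries and hence better suited to the FFT's periodicity assumption. A small NumPy sketch of that general idea (MFNO's exact scheme may differ):

```python
import numpy as np

def mirror_pad_fft(x):
    """Apply an FFT to a mirrored extension of a non-periodic 1D signal.

    The extension [x, reversed(x)] matches endpoints, suppressing the
    wrap-around discontinuity a plain FFT would otherwise see.
    A sketch of the general idea, not MFNO's exact construction."""
    x_ext = np.concatenate([x, x[::-1]])      # length-2n periodic extension
    X = np.fft.fft(x_ext)
    # ... learnable spectral processing of X would go here ...
    return np.fft.ifft(X).real[: len(x)]      # crop back to the original domain
```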
arXiv Detail & Related papers (2025-07-23T19:30:34Z)
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [60.58067866537143]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
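The abstract does not define the ProdLayer's internals. One plausible reading, in the spirit of dimensional analysis (physical quantities arise as products of base quantities), is a layer that augments ordinary linear mixing with learned pairwise products of channels; the sketch below is a guess at that idea, not the paper's layer:

```python
import torch
import torch.nn as nn

class ProdLayerSketch(nn.Module):
    """Hypothetical product layer: adds elementwise products of learned
    channel combinations to a plain linear mixing. A guess at the idea
    behind DimOL's ProdLayer, not its actual definition."""
    def __init__(self, channels, n_pairs=4):
        super().__init__()
        self.linear = nn.Linear(channels, channels)
        self.a = nn.Linear(channels, n_pairs)   # first factor of each product
        self.b = nn.Linear(channels, n_pairs)   # second factor
        self.out = nn.Linear(n_pairs, channels)

    def forward(self, x):                        # x: (..., channels)
        return self.linear(x) + self.out(self.a(x) * self.b(x))
```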
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Neural Operators with Localized Integral and Differential Kernels [77.76991758980003]
We present a principled approach to operator learning that can capture local features under two frameworks.
We prove that we obtain differential operators under an appropriate scaling of the kernel values of CNNs.
To obtain local integral operators, we utilize suitable basis representations for the kernels based on discrete-continuous convolutions.
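The differential-operator claim has a familiar finite-difference analogue: a fixed convolution stencil, scaled appropriately with the grid spacing, converges to a derivative as the grid is refined. A minimal NumPy illustration of that scaling argument (not the paper's construction):

```python
import numpy as np

# The stencil [1, -2, 1] scaled by 1/h**2 approximates d^2u/dx^2; scaling
# kernel values with grid spacing is what turns a CNN kernel into a local
# differential operator in the continuum limit.
def second_derivative(u, h):
    stencil = np.array([1.0, -2.0, 1.0]) / h**2
    return np.convolve(u, stencil, mode="valid")

x = np.linspace(0.0, 2.0 * np.pi, 1001)
u = np.sin(x)
d2u = second_derivative(u, x[1] - x[0])
print(np.max(np.abs(d2u + np.sin(x[1:-1]))))   # tiny: (sin)'' = -sin
```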
arXiv Detail & Related papers (2024-02-26T18:59:31Z) - How important are specialized transforms in Neural Operators? [9.809251473887594]
We investigate the importance of the transform layers to the reported success of transform-based neural operators.
Surprisingly, we observe that linear layers suffice to provide performance comparable to the best-known transform-based layers and seem to do so with a compute time advantage as well.
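Concretely, the comparison amounts to replacing the FFT-based spectral mixing in an operator layer with a plain dense map over the discretized domain. A sketch of such a baseline layer (illustrative, not the paper's code):

```python
import torch.nn as nn

class LinearMix(nn.Module):
    """A plain learnable linear mixing over the n grid points, the kind of
    layer the paper finds competitive with specialized transform layers."""
    def __init__(self, n_points):
        super().__init__()
        self.mix = nn.Linear(n_points, n_points)

    def forward(self, x):            # x: (batch, channels, n_points)
        return self.mix(x)           # nn.Linear acts on the last dimension
```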
arXiv Detail & Related papers (2023-08-18T04:35:13Z) - Nonlocality and Nonlinearity Implies Universality in Operator Learning [8.83910715280152]
Neural operator architectures approximate operators between infinite-dimensional Banach spaces of functions.
It is clear that any general approximation of operators between spaces of functions must be both nonlocal and nonlinear.
We show how these two attributes may be combined in a simple way to deduce universal approximation.
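A minimal layer combining the two ingredients pairs the simplest nonlocal operation, a global spatial average, with a pointwise nonlinearity. A sketch of such a layer under that reading (names and sizes are illustrative):

```python
import torch
import torch.nn as nn

class AveragingLayer(nn.Module):
    """Nonlocal + nonlinear: pointwise linear terms plus a global mean,
    passed through a nonlinearity. An illustrative minimal combination."""
    def __init__(self, channels):
        super().__init__()
        self.w = nn.Conv1d(channels, channels, kernel_size=1)  # pointwise term
        self.v = nn.Conv1d(channels, channels, kernel_size=1)  # acts on the mean

    def forward(self, u):                        # u: (batch, channels, n)
        mean = u.mean(dim=-1, keepdim=True)      # nonlocal: global average
        return torch.nn.functional.gelu(self.w(u) + self.v(mean))
```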
arXiv Detail & Related papers (2023-04-26T01:03:11Z) - Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which substantially increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer, more informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
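Reading the abstract, the bottleneck encoder resembles cross-attending a small set of learned latent tokens to the many context points, so downstream cost scales with the latent count rather than the context size. A generic sketch of that pattern (VNP's actual design may differ):

```python
import torch
import torch.nn as nn

class BottleneckEncoder(nn.Module):
    """Compress many context tokens into a few latent tokens via
    cross-attention. A generic sketch, not VNP's exact encoder."""
    def __init__(self, dim, n_latents=16, n_heads=4):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(n_latents, dim))
        self.attn = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, context):                   # context: (batch, n_ctx, dim)
        q = self.latents.expand(context.shape[0], -1, -1)
        out, _ = self.attn(q, context, context)   # (batch, n_latents, dim)
        return out
```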
arXiv Detail & Related papers (2023-01-21T04:08:46Z)
- Fast Sampling of Diffusion Models via Operator Learning [74.37531458470086]
We use neural operators, an efficient method to solve the probability flow differential equations, to accelerate the sampling process of diffusion models.
Compared to other fast sampling methods that have a sequential nature, we are the first to propose a parallel decoding method.
We show our method achieves state-of-the-art FID of 3.78 for CIFAR-10 and 7.83 for ImageNet-64 in the one-model-evaluation setting.
arXiv Detail & Related papers (2022-11-24T07:30:27Z)
- Pseudo-Differential Neural Operator: Generalized Fourier Neural Operator for Learning Solution Operators of Partial Differential Equations [14.43135909469058]
We propose a novel pseudo-differential integral operator (PDIO) to analyze and generalize the Fourier integral operator in FNO.
We experimentally validate the effectiveness of the proposed model by utilizing Darcy flow and the Navier-Stokes equation.
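One way to picture the generalization: where FNO learns a free weight per retained Fourier mode, a pseudo-differential layer can evaluate a smooth learned symbol at every frequency. A rough single-channel sketch of that idea (the actual PDIO parameterization is more general):

```python
import torch
import torch.nn as nn

class SymbolConv1d(nn.Module):
    """Spectral layer whose multiplier is a small network evaluated at the
    frequencies (a learned symbol), rather than a free weight per mode.
    An illustrative sketch, not the paper's PDIO."""
    def __init__(self, hidden=32):
        super().__init__()
        self.symbol = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(), nn.Linear(hidden, 2))

    def forward(self, x):                                  # x: (batch, n), real
        X = torch.fft.rfft(x, dim=-1)
        freqs = torch.fft.rfftfreq(x.shape[-1], device=x.device).unsqueeze(-1)
        s = self.symbol(freqs)                             # (n//2 + 1, 2)
        multiplier = torch.complex(s[..., 0], s[..., 1])   # complex symbol
        return torch.fft.irfft(X * multiplier, n=x.shape[-1], dim=-1)
```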
arXiv Detail & Related papers (2022-01-28T07:22:32Z)
- Factorized Fourier Neural Operators [77.47313102926017]
The Factorized Fourier Neural Operator (F-FNO) is a learning-based method for simulating partial differential equations.
We show that our model maintains an error rate of 2% while still running an order of magnitude faster than a numerical solver.
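Factorization here can be read as learning separate spectral weights along each spatial dimension and summing the results, so parameters grow with modes_x + modes_y instead of modes_x * modes_y. A sketch under that reading (F-FNO's exact layer differs in details such as weight sharing):

```python
import torch
import torch.nn as nn

class FactorizedSpectral2d(nn.Module):
    """Per-dimension spectral mixing, summed: one reading of a factorized
    Fourier layer. Illustrative sketch, not the paper's implementation."""
    def __init__(self, channels, modes):
        super().__init__()
        scale = 1.0 / channels
        self.wx = nn.Parameter(scale * torch.randn(channels, channels, modes,
                                                   dtype=torch.cfloat))
        self.wy = nn.Parameter(scale * torch.randn(channels, channels, modes,
                                                   dtype=torch.cfloat))
        self.modes = modes

    def forward(self, u):                          # u: (batch, c, nx, ny)
        m = self.modes
        Ux = torch.fft.fft(u, dim=-2)              # transform along x only
        outx = torch.zeros_like(Ux)
        outx[:, :, :m, :] = torch.einsum("iok,bikY->bokY",
                                         self.wx, Ux[:, :, :m, :])
        Uy = torch.fft.fft(u, dim=-1)              # transform along y only
        outy = torch.zeros_like(Uy)
        outy[:, :, :, :m] = torch.einsum("iok,biXk->boXk",
                                         self.wy, Uy[:, :, :, :m])
        return torch.fft.ifft(outx, dim=-2).real + torch.fft.ifft(outy, dim=-1).real
```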
arXiv Detail & Related papers (2021-11-27T03:34:13Z)
- Incorporating NODE with Pre-trained Neural Differential Operator for Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO).
NDO is pre-trained on a class of symbolic functions, and it learns the mapping between the trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can closely approximate the ground-truth derivatives when the complexity of the function library is properly tuned.
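The pre-training recipe suggests a simple data pipeline: sample functions from a symbolic library, evaluate trajectories alongside their exact derivatives, and fit the operator to that mapping. A toy version (the library below is an assumed stand-in, not the paper's):

```python
import numpy as np

# Toy pre-training data for a neural differential operator: pairs of
# (trajectory samples, exact derivative samples) drawn from a small
# symbolic library. Library and sizes are illustrative assumptions.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 128)

def sample_pair():
    a, b = rng.uniform(0.5, 3.0, size=2)
    kind = rng.integers(3)
    if kind == 0:
        return np.sin(a * t + b), a * np.cos(a * t + b)
    if kind == 1:
        return np.exp(-a * t), -a * np.exp(-a * t)
    return a * t**2 + b * t, 2.0 * a * t + b

X, Y = map(np.stack, zip(*(sample_pair() for _ in range(1024))))
# Train an operator to map X (function values) to Y (derivatives).
```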
arXiv Detail & Related papers (2021-06-08T08:04:47Z)
- Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs.
The experiments on several benchmark datasets and neural architectures illustrate that the binary network learned using our method achieves the state-of-the-art accuracy.
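The estimator can be reconstructed from the Fourier series of a square wave: on (-1, 1), sign(x) = (4/pi) * sum over odd k of sin(k*pi*x)/k, so differentiating a truncated series yields a smooth surrogate gradient. A sketch of that idea (the number of harmonics and scaling are illustrative choices):

```python
import torch

class FourierSign(torch.autograd.Function):
    """Forward: hard binarization. Backward: gradient of a truncated
    Fourier series of the square wave as a surrogate for the (almost
    everywhere zero) derivative of sign. Illustrative sketch."""
    K = 4  # number of retained odd harmonics (illustrative)

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # d/dx [(4/pi) * sum_k sin(k*pi*x)/k] = 4 * sum_k cos(k*pi*x), odd k
        grad = sum(4.0 * torch.cos(k * torch.pi * x)
                   for k in range(1, 2 * FourierSign.K, 2))
        return grad_out * grad

# Usage: y = FourierSign.apply(x) inside a binary layer's forward pass.
```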
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.