Pseudo-differential-enhanced physics-informed neural networks
- URL: http://arxiv.org/abs/2602.14663v1
- Date: Mon, 16 Feb 2026 11:40:58 GMT
- Title: Pseudo-differential-enhanced physics-informed neural networks
- Authors: Andrew Gracyk
- Abstract summary: We present pseudo-differential enhanced physics-informed neural networks (PINNs), an extension of gradient enhancement carried out in Fourier space. Our methods often achieve lower PINN-versus-numerical error in fewer training iterations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present pseudo-differential enhanced physics-informed neural networks (PINNs), an extension of gradient enhancement carried out in Fourier space. Gradient enhancement of PINNs dictates that the PDE residual is taken to a higher differential order than prescribed by the PDE and added to the objective as an augmented term, in order to improve training and overall learning fidelity. We propose the same procedure after application of Fourier transforms, since differentiating in Fourier space is multiplication by the Fourier wavenumber under suitable decay. Our methods are fast and efficient. They often achieve lower PINN-versus-numerical error in fewer training iterations, potentially pair well with few collocation samples, and can on occasion break plateaus in low-collocation settings. Moreover, our methods are suitable for fractional derivatives. We establish that our methods improve spectral eigenvalue decay of the neural tangent kernel (NTK), and so they contribute towards the learning of high frequencies in early training, mitigating the effects of frequency bias up to the polynomial order, and possibly beyond with smooth activations. Our methods accommodate advanced techniques in PINNs, such as Fourier feature embeddings. A pitfall of discrete Fourier transforms via the Fast Fourier Transform (FFT) is mesh subjugation, and so we demonstrate compatibility of our methods with greater mesh flexibility and invariance on alternative Euclidean and non-Euclidean domains, via Monte Carlo methods and otherwise.
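The core mechanism the abstract relies on, that under suitable decay the Fourier transform turns d/dx into multiplication by ik (and by (ik)^α for fractional orders), can be sketched numerically. The following is a minimal illustrative NumPy sketch of spectral differentiation, not the paper's implementation; the grid size and test function are arbitrary choices:

```python
import numpy as np

# Periodic grid on [0, 2*pi) and a test function whose derivative we know.
N = 128
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
u = np.sin(3.0 * x)

# Angular wavenumbers matching NumPy's FFT ordering.
k = 2.0 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])

# Differentiation in Fourier space: multiply the transform by i*k.
# (A fractional derivative of order alpha would replace i*k with (i*k)**alpha.)
u_hat = np.fft.fft(u)
du = np.real(np.fft.ifft(1j * k * u_hat))

# Spectral accuracy: du matches the analytic derivative 3*cos(3x).
max_err = np.max(np.abs(du - 3.0 * np.cos(3.0 * x)))
```

In a gradient-enhancement setting, a term built from such Fourier-space derivatives of the residual would be added to the training objective alongside the standard PDE residual.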
Related papers
- Iterative Training of Physics-Informed Neural Networks with Fourier-enhanced Features [7.1865646765394215]
Spectral bias, the tendency of neural networks to learn low-frequency features first, is a well-known issue. We propose IFeF-PINN, an algorithm for iterative training of PINNs with Fourier-enhanced features.
arXiv Detail & Related papers (2025-10-22T09:17:37Z)
- Random at First, Fast at Last: NTK-Guided Fourier Pre-Processing for Tabular DL [4.6774351030379835]
We revisit and repurpose random Fourier mappings as a parameter-free, architecture-agnostic transformation. We show that this approach circumvents the need for ad hoc normalization or additional learnable embeddings. Empirically, we demonstrate that deep networks trained on Fourier-transformed inputs converge more rapidly and consistently achieve strong final performance.
arXiv Detail & Related papers (2025-06-03T03:45:13Z)
- Robustifying Fourier Features Embeddings for Implicit Neural Representations [25.725097757343367]
Implicit Neural Representations (INRs) employ neural networks to represent continuous functions by mapping coordinates to the corresponding values of the target function. INRs face a challenge known as spectral bias when dealing with scenes containing varying frequencies. We propose the use of multi-layer perceptrons (MLPs) without additive.
arXiv Detail & Related papers (2025-02-08T07:43:37Z)
- Solving High Frequency and Multi-Scale PDEs with Gaussian Processes [18.190228010565367]
PINNs often struggle to solve high-frequency and multi-scale PDEs.
We resort to the Gaussian process (GP) framework to solve this problem.
We use Kronecker product properties and multilinear algebra to promote computational efficiency and scalability.
arXiv Detail & Related papers (2023-11-08T05:26:58Z)
- FC-PINO: High Precision Physics-Informed Neural Operators via Fourier Continuation [60.706803227003995]
We introduce the FC-PINO (Fourier-Continuation-based PINO) architecture, which extends the accuracy and efficiency of PINO to non-periodic and non-smooth PDEs. We demonstrate that standard PINO struggles to solve non-periodic and non-smooth PDEs with high precision across challenging benchmarks. In contrast, the proposed FC-PINO provides accurate, robust, and scalable solutions, substantially outperforming PINO alternatives.
arXiv Detail & Related papers (2022-11-29T06:37:54Z)
- Incremental Spatial and Spectral Learning of Neural Operators for Solving Large-Scale PDEs [86.35471039808023]
We introduce the Incremental Fourier Neural Operator (iFNO), which progressively increases the number of frequency modes used by the model.
We show that iFNO reduces total training time while maintaining or improving generalization performance across various datasets.
Our method achieves a 10% lower testing error using 20% fewer frequency modes than the existing Fourier Neural Operator, while also training 30% faster.
arXiv Detail & Related papers (2022-11-28T09:57:15Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
- Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z)
- Sigma-Delta and Distributed Noise-Shaping Quantization Methods for Random Fourier Features [73.25551965751603]
We prove that our quantized RFFs allow a high accuracy approximation of the underlying kernels.
We show that the quantized RFFs can be further compressed, yielding an excellent trade-off between memory use and accuracy.
Testing our methods on several machine learning tasks, we empirically show that they compare favorably to other state-of-the-art quantization methods in this context.
arXiv Detail & Related papers (2021-06-04T17:24:47Z)
- Learning Frequency Domain Approximation for Binary Neural Networks [68.79904499480025]
We propose to estimate the gradient of the sign function in the Fourier frequency domain using a combination of sine functions for training BNNs.
Experiments on several benchmark datasets and neural architectures illustrate that the binary network learned with our method achieves state-of-the-art accuracy.
arXiv Detail & Related papers (2021-03-01T08:25:26Z)
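The sine-combination gradient estimate in the last entry rests on a classical identity: on (-π, π), sign(x) equals its Fourier sine series (4/π) Σ_k sin((2k+1)x)/(2k+1), whose smooth partial sums have exact derivatives. An illustrative sketch of that series (the function name is mine, not the paper's):

```python
import numpy as np

def sign_series(x, terms=50):
    """Partial Fourier sine series of sign(x) on (-pi, pi):
    (4/pi) * sum_k sin((2k+1)x) / (2k+1), k = 0..terms-1.
    Each term is smooth, with exact derivative (4/pi)*cos((2k+1)x)."""
    n = 2 * np.arange(terms) + 1                      # odd harmonics 1, 3, 5, ...
    return (4.0 / np.pi) * (np.sin(np.outer(x, n)) / n).sum(axis=1)

# Away from the jump at 0, the partial sum tracks sign(x) closely.
x = np.array([-2.0, -0.5, 0.5, 2.0])
approx = sign_series(x)
```

Because each partial sum is differentiable everywhere, it yields a usable surrogate gradient for the non-differentiable sign activation.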
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (or of any information it contains) and is not responsible for any consequences of its use.