Derivative-Informed Fourier Neural Operator: Universal Approximation and Applications to PDE-Constrained Optimization
- URL: http://arxiv.org/abs/2512.14086v1
- Date: Tue, 16 Dec 2025 04:54:24 GMT
- Title: Derivative-Informed Fourier Neural Operator: Universal Approximation and Applications to PDE-Constrained Optimization
- Authors: Boyuan Yao, Dingcheng Luo, Lianghao Cao, Nikola Kovachki, Thomas O'Leary-Roseberry, Omar Ghattas
- Abstract summary: We present approximation theories and efficient training methods for derivative-informed Fourier neural operators (DIFNOs). A DIFNO is trained by minimizing its prediction error jointly on output and Fréchet derivative samples of a high-fidelity operator. We show that DIFNOs are superior in sample complexity for operator learning and for solving infinite-dimensional PDE-constrained inverse problems.
- Score: 0.9236074230806578
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present approximation theories and efficient training methods for derivative-informed Fourier neural operators (DIFNOs) with applications to PDE-constrained optimization. A DIFNO is an FNO trained by minimizing its prediction error jointly on output and Fréchet derivative samples of a high-fidelity operator (e.g., a parametric PDE solution operator). As a result, a DIFNO can closely emulate not only the high-fidelity operator's response but also its sensitivities. To motivate the use of DIFNOs instead of conventional FNOs as surrogate models, we show that accurate surrogate-driven PDE-constrained optimization requires accurate surrogate Fréchet derivatives. Then, for continuously differentiable operators, we establish (i) simultaneous universal approximation of FNOs and their Fréchet derivatives on compact sets, and (ii) universal approximation of FNOs in weighted Sobolev spaces with input measures that have unbounded supports. Our theoretical results certify the capability of FNOs for accurate derivative-informed operator learning and accurate solution of PDE-constrained optimization. Furthermore, we develop efficient training schemes using dimension reduction and multi-resolution techniques that significantly reduce memory and computational costs for Fréchet derivative learning. Numerical examples on nonlinear diffusion--reaction, Helmholtz, and Navier--Stokes equations demonstrate that DIFNOs are superior in sample complexity for operator learning and solving infinite-dimensional PDE-constrained inverse problems, achieving high accuracy at low training sample sizes.
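As a finite-dimensional illustration of the joint training objective described in the abstract, the sketch below fits a linear surrogate to a toy nonlinear map by minimizing output and Fréchet-derivative (Jacobian-vector-product) misfits together. The map F, the linear surrogate, and all dimensions are illustrative stand-ins, not the paper's FNO architecture or PDE operators.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-fidelity "operator" F(m) = tanh(W m); its Frechet derivative
# acts on a direction v as J(m) v = diag(1 - tanh(W m)^2) W v.
W = rng.standard_normal((4, 3))

def F(m):
    return np.tanh(W @ m)

def jvp(m, v):
    return (1.0 - np.tanh(W @ m) ** 2) * (W @ v)

# Linear surrogate S(m) = B m, whose derivative action is simply B v.
def joint_loss(B, samples, directions):
    """Output misfit plus Frechet-derivative misfit, averaged over samples."""
    total = 0.0
    for m, v in zip(samples, directions):
        total += np.sum((B @ m - F(m)) ** 2)       # output term
        total += np.sum((B @ v - jvp(m, v)) ** 2)  # derivative term
    return total / len(samples)

samples = [rng.standard_normal(3) for _ in range(8)]
directions = [rng.standard_normal(3) for _ in range(8)]

B = np.zeros((4, 3))
loss0 = joint_loss(B, samples, directions)

# Gradient descent on B; the gradient of the quadratic loss is closed-form.
for _ in range(200):
    g = np.zeros_like(B)
    for m, v in zip(samples, directions):
        g += 2.0 * np.outer(B @ m - F(m), m) + 2.0 * np.outer(B @ v - jvp(m, v), v)
    B -= 0.01 * g / len(samples)

loss_final = joint_loss(B, samples, directions)
print(loss0, loss_final)
```

Because the derivative term is part of the objective, the trained surrogate matches the sensitivities of F as well as its outputs, which is the property the paper argues is necessary for surrogate-driven PDE-constrained optimization.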
Related papers
- Variational Entropic Optimal Transport [67.76725267984578]
We propose Variational Entropic Optimal Transport (VarEOT) for domain translation problems. VarEOT is based on an exact variational reformulation of the log-partition $\log \mathbb{E}[\exp(\cdot)]$ as a tractable variational problem over an auxiliary positive normalizer. Experiments on synthetic data and unpaired image-to-image translation demonstrate competitive or improved translation quality.
arXiv Detail & Related papers (2026-02-02T15:48:44Z) - Flow-matching Operators for Residual-Augmented Probabilistic Learning of Partial Differential Equations [0.5729426778193397]
We formulate flow matching in an infinite-dimensional function space to learn a probabilistic transport. We develop a conditional neural operator architecture based on feature-wise linear modulation for flow-matching vector fields. We show that the proposed method can accurately learn solution operators across different resolutions and fidelities.
arXiv Detail & Related papers (2025-12-14T16:06:10Z) - Physics-informed low-rank neural operators with application to parametric elliptic PDEs [0.0]
We present PILNO, a neural operator framework for approximating solution operators of partial differential equations (PDEs) on point cloud data. PILNO combines low-rank kernel approximations with an encoder--decoder architecture, enabling fast, continuous one-shot predictions while remaining independent of specific discretizations. We demonstrate its effectiveness on diverse problems, including function fitting, the Poisson equation, the screened Poisson equation with variable coefficients, and parameterized Darcy flow.
arXiv Detail & Related papers (2025-09-09T12:54:06Z) - Light-Weight Diffusion Multiplier and Uncertainty Quantification for Fourier Neural Operators [1.315766151337659]
We introduce DINOZAUR: a diffusion-based neural operator parametrization with uncertainty quantification. Our method achieves competitive or superior performance across several PDE benchmarks.
arXiv Detail & Related papers (2025-08-01T13:57:19Z) - Dimension reduction for derivative-informed operator learning: An analysis of approximation errors [3.7051887945349518]
We study the derivative-informed learning of nonlinear operators between infinite-dimensional separable Hilbert spaces by neural networks. We analyze the approximation errors of neural operators in Sobolev norms over infinite-dimensional Gaussian input measures.
arXiv Detail & Related papers (2025-04-11T17:56:52Z) - Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z) - Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
arXiv Detail & Related papers (2023-07-27T17:42:06Z) - FC-PINO: High Precision Physics-Informed Neural Operators via Fourier Continuation [60.706803227003995]
We introduce the FC-PINO (Fourier-Continuation-based PINO) architecture, which extends the accuracy and efficiency of PINO to non-periodic and non-smooth PDEs. We demonstrate that standard PINO struggles to solve non-periodic and non-smooth PDEs with high precision across challenging benchmarks. In contrast, the proposed FC-PINO provides accurate, robust, and scalable solutions, substantially outperforming PINO alternatives.
arXiv Detail & Related papers (2022-11-29T06:37:54Z) - Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z) - Incorporating NODE with Pre-trained Neural Differential Operator for Learning Dynamics [73.77459272878025]
We propose to enhance the supervised signal in learning dynamics by pre-training a neural differential operator (NDO). The NDO is pre-trained on a class of symbolic functions, learning the mapping from trajectory samples of these functions to their derivatives.
We provide a theoretical guarantee that the output of the NDO can closely approximate the ground-truth derivatives by properly tuning the complexity of the library.
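A minimal sketch of this pre-training idea, with a ridge-regression map standing in for the NDO and random cubic polynomials standing in for the symbolic library (both are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 32)

# Illustrative symbolic library: random cubic polynomials paired with
# their exact derivatives (the paper's library is more general).
def sample_pair():
    c = rng.standard_normal(4)
    y = c[0] + c[1] * t + c[2] * t**2 + c[3] * t**3
    dy = c[1] + 2 * c[2] * t + 3 * c[3] * t**2
    return y, dy

pairs = [sample_pair() for _ in range(200)]
Y = np.array([y for y, _ in pairs])     # trajectory samples, shape (200, 32)
dY = np.array([dy for _, dy in pairs])  # derivative targets, shape (200, 32)

# Ridge-regression stand-in for the pre-trained operator: a linear map D
# minimizing ||Y D^T - dY||_F^2 + lam ||D||_F^2.
lam = 1e-6
D = np.linalg.solve(Y.T @ Y + lam * np.eye(len(t)), Y.T @ dY).T

# The learned map differentiates a fresh trajectory from the same library.
y_new, dy_new = sample_pair()
err = np.max(np.abs(D @ y_new - dy_new))
print(err)
```

The point of the sketch is the supervision signal: trajectory values in, derivative values out, with accuracy on unseen trajectories controlled by how well the library covers the function class.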
arXiv Detail & Related papers (2021-06-08T08:04:47Z) - On the Estimation of Derivatives Using Plug-in Kernel Ridge Regression Estimators [4.392844455327199]
We propose a simple plug-in kernel ridge regression (KRR) estimator in nonparametric regression.
We provide a non-asymptotic analysis to study the behavior of the proposed estimator in a unified manner.
The proposed estimator achieves the optimal rate of convergence with the same choice of tuning parameter for any order of derivatives.
arXiv Detail & Related papers (2020-06-02T02:32:39Z)
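A minimal sketch of the plug-in idea above, assuming a Gaussian kernel and illustrative tuning parameters: fit KRR to noisy function values, then estimate the derivative by differentiating the fitted kernel expansion with the same coefficients and the same regularization parameter.

```python
import numpy as np

rng = np.random.default_rng(2)

# Gaussian kernel with (illustrative) bandwidth ell, and its x-derivative.
ell = 0.2
def k(x, z):
    return np.exp(-((x[:, None] - z[None, :]) ** 2) / (2.0 * ell**2))

def dk(x, z):  # d/dx k(x, z)
    return -(x[:, None] - z[None, :]) / ell**2 * k(x, z)

# Noisy samples of f(x) = sin(2 pi x); true derivative is 2 pi cos(2 pi x).
n = 200
X = rng.uniform(0.0, 1.0, n)
y = np.sin(2.0 * np.pi * X) + 0.05 * rng.standard_normal(n)

# Fit KRR: f_hat(x) = sum_i alpha_i k(x, X_i).
lam = 1e-4
alpha = np.linalg.solve(k(X, X) + n * lam * np.eye(n), y)

# Plug-in derivative estimate: differentiate the fitted function directly,
# reusing alpha and the tuning parameter chosen for the regression fit.
x_grid = np.linspace(0.1, 0.9, 50)
df_hat = dk(x_grid, X) @ alpha
df_true = 2.0 * np.pi * np.cos(2.0 * np.pi * x_grid)
err = np.max(np.abs(df_hat - df_true))
print(err)
```

Reusing the same tuning parameter for the function fit and its derivative is exactly the convenience the abstract highlights: no separate bandwidth selection is needed per derivative order.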
This list is automatically generated from the titles and abstracts of the papers in this site.