Solving PDEs in One Shot via Fourier Features with Exact Analytical Derivatives
- URL: http://arxiv.org/abs/2602.10541v1
- Date: Wed, 11 Feb 2026 05:28:58 GMT
- Title: Solving PDEs in One Shot via Fourier Features with Exact Analytical Derivatives
- Authors: Antonin Sulc
- Abstract summary: Recent random feature methods for solving partial differential equations (PDEs) reduce computational cost compared to physics-informed neural networks (PINNs). We propose FastLSQ, which combines frozen random Fourier features with analytical operator assembly to solve linear PDEs via a single least-squares call. On a benchmark of 17 PDEs spanning 1 to 6 dimensions, FastLSQ achieves relative $L^2$ errors of $10^{-7}$ in 0.07 s on linear problems, three orders of magnitude more accurate and significantly faster than state-of-the-art iterative PINN solvers.
- Score: 0.15229257192293197
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent random feature methods for solving partial differential equations (PDEs) reduce computational cost compared to physics-informed neural networks (PINNs) but still rely on iterative optimization or expensive derivative computation. We observe that sinusoidal random Fourier features possess a cyclic derivative structure: the derivative of any order of $\sin(\mathbf{W}\cdot\mathbf{x}+b)$ is a single sinusoid with a monomial prefactor, computable in $O(1)$ operations. Alternative activations such as $\tanh$, used in prior one-shot methods like PIELM, lack this property: their higher-order derivatives grow as $O(2^n)$ terms, requiring automatic differentiation for operator assembly. We propose FastLSQ, which combines frozen random Fourier features with analytical operator assembly to solve linear PDEs via a single least-squares call, and extend it to nonlinear PDEs via Newton--Raphson iteration where each linearized step is a FastLSQ solve. On a benchmark of 17 PDEs spanning 1 to 6 dimensions, FastLSQ achieves relative $L^2$ errors of $10^{-7}$ in 0.07\,s on linear problems, three orders of magnitude more accurate and significantly faster than state-of-the-art iterative PINN solvers, and $10^{-8}$ to $10^{-9}$ on nonlinear problems via Newton iteration in under 9\,s.
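The cyclic derivative property and the one-shot least-squares assembly described in the abstract can be illustrated with a minimal sketch. This is not the authors' code: the 1D Poisson test problem, the frequency scale, the boundary-row weighting, and the helper `feat_deriv` are all illustrative choices; the key identity $\frac{d^n}{dx^n}\sin(wx+b) = w^n \sin(wx + b + n\pi/2)$ is from the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen random Fourier features: phi_j(x) = sin(w_j * x + b_j).
# Cyclic derivative structure: d^n/dx^n sin(w x + b) = w^n sin(w x + b + n*pi/2),
# so any derivative is a single sinusoid with a monomial prefactor, O(1) per entry.
M = 200                               # number of random features (assumed)
w = rng.normal(0.0, 8.0, size=M)      # frozen random frequencies (assumed scale)
b = rng.uniform(0.0, 2 * np.pi, M)    # frozen random phases

def feat_deriv(x, order):
    """order-th derivative of every feature at points x, via the phase shift."""
    x = np.asarray(x, dtype=float)[:, None]
    return (w ** order) * np.sin(w * x + b + order * np.pi / 2)

# One-shot solve of a linear model problem: 1D Poisson
#   u''(x) = f(x) on (0, 1),  u(0) = u(1) = 0,
# with manufactured solution u*(x) = sin(pi x), so f(x) = -pi^2 sin(pi x).
x_int = np.linspace(0.0, 1.0, 402)[1:-1]          # interior collocation points
A_pde = feat_deriv(x_int, 2)                      # rows enforcing u'' = f
rhs_pde = -np.pi ** 2 * np.sin(np.pi * x_int)
bc_w = 100.0                                      # scale BC rows toward PDE-row magnitude
A_bc = bc_w * feat_deriv(np.array([0.0, 1.0]), 0) # rows enforcing u(0) = u(1) = 0

A = np.vstack([A_pde, A_bc])
rhs = np.concatenate([rhs_pde, [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)       # the single least-squares call

x_test = np.linspace(0.0, 1.0, 101)
u_hat = feat_deriv(x_test, 0) @ c                 # reconstructed solution
err = np.max(np.abs(u_hat - np.sin(np.pi * x_test)))
```

The same assembly pattern extends to other linear operators, since every partial derivative of a feature is again available in closed form; for nonlinear PDEs, the abstract's Newton--Raphson extension would wrap this least-squares solve inside each linearized step.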
Related papers
- INC: An Indirect Neural Corrector for Auto-Regressive Hybrid PDE Solvers [61.84396402100827]
We propose the Indirect Neural Corrector ($\mathrm{INC}$), which integrates learned corrections into the governing equations. $\mathrm{INC}$ reduces the error amplification to the order of $\Delta t^{-1} + L$, where $\Delta t$ is the timestep and $L$ the Lipschitz constant. We test $\mathrm{INC}$ in extensive benchmarks, covering numerous differentiable solvers, neural backbones, and test cases ranging from a 1D chaotic system to 3D turbulence.
arXiv Detail & Related papers (2025-11-16T20:14:28Z) - A Novel Quantum Fourier Ordinary Differential Equation Solver for Solving Linear and Nonlinear Partial Differential Equations [5.5115019901599505]
A novel quantum Fourier ordinary differential equation (ODE) solver is proposed to solve both linear and nonlinear partial differential equations (PDEs). Traditional quantum ODE solvers transform a PDE into an ODE system via spatial discretization and then integrate it. This approach not only simplifies the construction of the oracle $R$ but also removes the restriction that $f(x)$ must lie within $[0,1]$.
arXiv Detail & Related papers (2025-04-14T13:36:46Z) - Enabling Automatic Differentiation with Mollified Graph Neural Operators [73.52999622724101]
We propose the mollified graph neural operator ($m$GNO), the first method to leverage automatic differentiation and compute exact gradients on arbitrary geometries. For a PDE example on regular grids, $m$GNO paired with autograd reduced the $L^2$ relative data error by 20x compared to finite differences. It can also solve PDEs on unstructured point clouds seamlessly, using physics losses only, at resolutions vastly lower than those needed for finite differences to be accurate enough.
arXiv Detail & Related papers (2025-04-11T06:16:30Z) - A Quasilinear Algorithm for Computing Higher-Order Derivatives of Deep Feed-Forward Neural Networks [0.0]
$n$-TangentProp computes the exact derivative $d^n/dx^n f(x)$ in quasilinear, instead of exponential, time. We demonstrate that our method is particularly beneficial in the context of physics-informed neural networks.
arXiv Detail & Related papers (2024-12-12T22:57:28Z) - Stochastic Taylor Derivative Estimator: Efficient amortization for arbitrary differential operators [29.063441432499776]
We show how to efficiently perform arbitrary contraction of the derivative tensor of arbitrary order for multivariate functions. When applied to Physics-Informed Neural Networks (PINNs), our method provides a $>1000\times$ speed-up and a $30\times$ memory reduction over randomization with first-order AD.
arXiv Detail & Related papers (2024-11-27T09:37:33Z) - On the estimation rate of Bayesian PINN for inverse problems [10.100602879566782]
Solving partial differential equations (PDEs) and their inverse problems using Physics-informed neural networks (PINNs) is a rapidly growing approach in the physics and machine learning community.
We study the behavior of a Bayesian PINN estimator of the solution of a PDE from $n$ independent noisy measurement of the solution.
arXiv Detail & Related papers (2024-06-21T01:13:18Z) - Towards large-scale quantum optimization solvers with few qubits [59.63282173947468]
We introduce a variational quantum solver for optimizations over $m=\mathcal{O}(n^k)$ binary variables using only $n$ qubits, with tunable $k>1$.
We analytically prove that the specific qubit-efficient encoding brings in a super-polynomial mitigation of barren plateaus as a built-in feature.
arXiv Detail & Related papers (2024-01-17T18:59:38Z) - Average-Case Complexity of Tensor Decomposition for Low-Degree
Polynomials [93.59919600451487]
"Statistical-computational gaps" occur in many statistical inference tasks.
We consider a model for random order-3 decomposition where one component is slightly larger in norm than the rest.
We show that low-degree polynomials of the tensor entries can accurately estimate the largest component when $r \ll n^{3/2}$ but fail to do so when $r \gg n^{3/2}$.
arXiv Detail & Related papers (2022-11-10T00:40:37Z) - Shallow neural network representation of polynomials [91.3755431537592]
We show that polynomials of $d$ variables and degree $R$ can be represented on $[0,1]^d$ as shallow neural networks of width $d+1+\sum_{r=2}^{R}\binom{r+d-1}{d-1}$.
arXiv Detail & Related papers (2022-08-17T08:14:52Z) - Efficient quantum algorithm for nonlinear reaction-diffusion equations and energy estimation [5.576305273694895]
We develop an efficient quantum algorithm based on [1] for a class of nonlinear partial differential equations (PDEs).
We show how to estimate the mean square kinetic energy in the solution by postprocessing the quantum state that encodes it to extract derivative information.
As applications, we consider the Fisher-KPP and Allen-Cahn equations, which have interpretations in classical physics.
arXiv Detail & Related papers (2022-05-02T18:15:32Z) - Finding Global Minima via Kernel Approximations [90.42048080064849]
We consider the global minimization of smooth functions based solely on function evaluations.
In this paper, we consider an approach that jointly models the function to approximate and finds a global minimum.
arXiv Detail & Related papers (2020-12-22T12:59:30Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is $L_4$-$L_2$ hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.