Numerical PDE solvers outperform neural PDE solvers
- URL: http://arxiv.org/abs/2507.21269v1
- Date: Mon, 28 Jul 2025 18:50:37 GMT
- Title: Numerical PDE solvers outperform neural PDE solvers
- Authors: Patrick Chatain, Michael Rizvi-Martel, Guillaume Rabusseau, Adam Oberman
- Abstract summary: DeepFDM is a finite-difference framework for learning spatially varying coefficients in time-dependent partial differential equations. It enforces stability and first-order convergence via CFL-compliant coefficient parameterizations, and attains normalized mean-squared errors one to two orders of magnitude smaller than Fourier Neural Operators, U-Nets and ResNets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present DeepFDM, a differentiable finite-difference framework for learning spatially varying coefficients in time-dependent partial differential equations (PDEs). By embedding a classical forward-Euler discretization into a convolutional architecture, DeepFDM enforces stability and first-order convergence via CFL-compliant coefficient parameterizations. Model weights correspond directly to PDE coefficients, yielding an interpretable inverse-problem formulation. We evaluate DeepFDM on a benchmark suite of scalar PDEs (advection, diffusion, advection-diffusion, reaction-diffusion and inhomogeneous Burgers' equations) in one, two and three spatial dimensions. In both in-distribution and out-of-distribution tests (quantified by the Hellinger distance between coefficient priors), DeepFDM attains normalized mean-squared errors one to two orders of magnitude smaller than Fourier Neural Operators, U-Nets and ResNets; requires 10-20X fewer training epochs; and uses 5-50X fewer parameters. Moreover, recovered coefficient fields accurately match ground-truth parameters. These results establish DeepFDM as a robust, efficient, and transparent baseline for data-driven solution and identification of parametric PDEs.
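The abstract's central construction, a forward-Euler finite-difference update whose stencil acts like a convolution and whose coefficients are constrained by a CFL stability bound, can be illustrated with a toy 1D diffusion step. This is a minimal sketch under assumed settings (periodic boundaries, a fixed coefficient field, hypothetical function name), not the paper's actual implementation:

```python
import numpy as np

def euler_diffusion_step(u, c, dx, dt):
    """One forward-Euler step of u_t = c(x) * u_xx via a 3-point stencil.

    The stencil [1, -2, 1] / dx^2 is a convolution kernel; in a DeepFDM-style
    model the coefficient field c would be a learned, CFL-constrained weight.
    """
    # Explicit-diffusion stability (CFL-type) bound: c * dt / dx^2 <= 1/2
    assert np.all(c * dt / dx**2 <= 0.5), "CFL condition violated"
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2  # periodic BCs
    return u + dt * c * lap

# Toy example: heat spreading from a unit spike, constant coefficient.
n, dx, dt = 64, 1.0 / 64, 1e-4
c = np.full(n, 0.5)              # spatially varying in general
u = np.zeros(n)
u[n // 2] = 1.0
for _ in range(100):
    u = euler_diffusion_step(u, c, dx, dt)
```

With periodic boundaries the discrete Laplacian sums to zero, so the scheme conserves total mass while the spike smooths out, exactly the kind of structure-preserving behavior the CFL-compliant parameterization is meant to guarantee.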
Related papers
- Kernel-Adaptive PI-ELMs for Forward and Inverse Problems in PDEs with Sharp Gradients [0.0]
This paper introduces the Kernel-Adaptive Physics-Informed Extreme Learning Machine (KAPI-ELM), designed to solve both forward and inverse Partial Differential Equation (PDE) problems involving localized sharp gradients. KAPI-ELM achieves state-of-the-art accuracy in both forward and inverse settings.
arXiv Detail & Related papers (2025-07-14T13:03:53Z) - Guided Diffusion Sampling on Function Spaces with Applications to PDEs [111.87523128566781]
We propose a general framework for conditional sampling in PDE-based inverse problems, accomplished by a function-space diffusion model and plug-and-play guidance for conditioning. Our method achieves an average 32% accuracy improvement over state-of-the-art fixed-resolution diffusion baselines.
arXiv Detail & Related papers (2025-05-22T17:58:12Z) - Mechanistic PDE Networks for Discovery of Governing Equations [52.492158106791365]
We present Mechanistic PDE Networks, a model for the discovery of partial differential equations from data. The represented PDEs are then solved and decoded for specific tasks. We develop a native, GPU-capable, parallel, sparse, and differentiable multigrid solver specialized for linear partial differential equations.
arXiv Detail & Related papers (2025-02-25T17:21:44Z) - Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z) - Reduced-order modeling for parameterized PDEs via implicit neural representations [4.135710717238787]
We present a new data-driven reduced-order modeling approach to efficiently solve parametrized partial differential equations (PDEs).
The proposed framework encodes the PDE and utilizes a parametrized neural ODE (PNODE) to learn latent dynamics characterized by multiple PDE parameters.
We evaluate the proposed method at a large Reynolds number and obtain speedups of up to O(10^3) with 1% relative error against the ground-truth values.
arXiv Detail & Related papers (2023-11-28T01:35:06Z) - LatentPINNs: Generative physics-informed neural networks via a latent representation learning [0.0]
We introduce latentPINN, a framework that utilizes latent representations of the PDE parameters as additional (to the coordinates) inputs into PINNs.
We use a two-stage training scheme: in the first stage, we learn the latent representations for the distribution of PDE parameters.
In the second stage, we train a physics-informed neural network over inputs given by randomly drawn samples from the coordinate space within the solution domain.
arXiv Detail & Related papers (2023-05-11T16:54:17Z) - Multilevel CNNs for Parametric PDEs [0.0]
We combine concepts from multilevel solvers for partial differential equations with neural network based deep learning.
An in-depth theoretical analysis shows that the proposed architecture is able to approximate multigrid V-cycles to arbitrary precision.
We find substantial improvements over state-of-the-art deep learning-based solvers.
arXiv Detail & Related papers (2023-04-01T21:11:05Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z) - LordNet: An Efficient Neural Network for Learning to Solve Parametric Partial Differential Equations without Simulated Data [47.49194807524502]
We propose LordNet, a tunable and efficient neural network for modeling entanglements.
The experiments on solving Poisson's equation and (2D and 3D) Navier-Stokes equation demonstrate that the long-range entanglements can be well modeled by the LordNet.
arXiv Detail & Related papers (2022-06-19T14:41:08Z) - Stationary Density Estimation of Itô Diffusions Using Deep Learning [6.8342505943533345]
We consider the density estimation problem associated with the stationary measure of ergodic Itô diffusions from a discrete-time series.
We employ deep neural networks to approximate the drift and diffusion terms of the SDE.
We establish the convergence of the proposed scheme under appropriate mathematical assumptions.
arXiv Detail & Related papers (2021-09-09T01:57:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.