Beyond Loss Guidance: Using PDE Residuals as Spectral Attention in Diffusion Neural Operators
- URL: http://arxiv.org/abs/2512.01370v1
- Date: Mon, 01 Dec 2025 07:34:42 GMT
- Title: Beyond Loss Guidance: Using PDE Residuals as Spectral Attention in Diffusion Neural Operators
- Authors: Medha Sawhney, Abhilash Neog, Mridul Khurana, Anuj Karpatne
- Abstract summary: PRISMA is a conditional diffusion neural operator that embeds PDE residuals directly into the model's architecture. We show that PRISMA has competitive accuracy, at substantially lower inference costs, compared to previous methods across five benchmark PDEs.
- Score: 7.854118707141527
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diffusion-based solvers for partial differential equations (PDEs) are often bottlenecked by slow gradient-based test-time optimization routines that use PDE residuals for loss guidance. They additionally suffer from optimization instabilities and are unable to dynamically adapt their inference scheme in the presence of noisy PDE residuals. To address these limitations, we introduce PRISMA (PDE Residual Informed Spectral Modulation with Attention), a conditional diffusion neural operator that embeds PDE residuals directly into the model's architecture via attention mechanisms in the spectral domain, enabling gradient-descent-free inference. In contrast to previous methods that use PDE loss solely as an external optimization target, PRISMA integrates PDE residuals as integral architectural features, making it inherently fast, robust, accurate, and free from sensitive hyperparameter tuning. We show that PRISMA has competitive accuracy, at substantially lower inference costs, compared to previous methods across five benchmark PDEs, especially with noisy observations, while using 10x to 100x fewer denoising steps, leading to 15x to 250x faster inference.
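The core idea of the abstract, conditioning spectral modes on PDE residual information via attention rather than back-propagating a residual loss at test time, can be illustrated with a toy sketch. This is a hypothetical illustration under assumed names (`spectral_residual_modulation`, `n_modes`), not the PRISMA architecture itself: the field and its residual are transformed with an FFT, and softmax weights derived from the residual spectrum modulate the retained modes of the field.

```python
import numpy as np

def spectral_residual_modulation(u, residual, n_modes=8):
    """Toy sketch: modulate the low-frequency spectrum of a field u
    with attention-style weights derived from the PDE residual spectrum.
    Hypothetical illustration, not the PRISMA architecture itself."""
    u_hat = np.fft.rfft(u)            # field spectrum
    r_hat = np.fft.rfft(residual)     # residual spectrum
    k = min(n_modes, len(u_hat))
    # attention logits: modes where the residual has more energy
    # receive a larger corrective weight
    logits = np.abs(r_hat[:k])
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()          # softmax over retained modes
    u_hat[:k] *= (1.0 + weights)      # modulate retained modes
    return np.fft.irfft(u_hat, n=len(u))

x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x)
residual = 0.1 * np.sin(3 * x)        # stand-in for a PDE residual field
u_new = spectral_residual_modulation(u, residual)
print(u_new.shape)                    # (64,)
```

Because the residual enters as an architectural input rather than a loss, no test-time gradient descent is needed, which is what the abstract credits for the inference speedup.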
Related papers
- Adaptive Stochastic Coefficients for Accelerating Diffusion Sampling [11.922538785783587]
We introduce AdaSDE, a novel single-step SDE solver that aims to unify the efficiency of ODEs with the error resilience of SDEs. Specifically, we introduce a single per-step learnable coefficient, estimated via lightweight distillation, which dynamically regulates the error correction strength to accelerate diffusion sampling. At 5 NFE, AdaSDE achieves FID scores of 4.18 on CIFAR-10, 8.05 on FFHQ and 6.96 on LSUN Bedroom.
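The idea of a per-step coefficient interpolating between deterministic and stochastic sampling can be sketched as follows. This is a minimal hypothetical sketch (names `blended_step` and `gamma` are assumptions), not AdaSDE's actual update rule: a coefficient of zero yields a pure deterministic (ODE-like) step, while larger values inject correspondingly scaled noise.

```python
import numpy as np

def blended_step(x, score_fn, dt, gamma, rng):
    """Toy sketch (hypothetical, not AdaSDE's actual update rule):
    a single reverse-diffusion step where a per-step coefficient
    gamma blends a deterministic ODE update (gamma = 0) with a
    noise-injecting SDE update (gamma > 0)."""
    drift = score_fn(x) * dt                        # deterministic correction
    noise = np.sqrt(2.0 * gamma * dt) * rng.standard_normal(x.shape)
    return x + drift + noise

rng = np.random.default_rng(0)
x = np.zeros(4)
score_fn = lambda x: -x                             # score of a standard Gaussian
x_ode = blended_step(x, score_fn, dt=0.1, gamma=0.0, rng=rng)
x_sde = blended_step(x, score_fn, dt=0.1, gamma=0.5, rng=rng)
print(np.allclose(x_ode, 0.0))                      # True: pure ODE step from the zero state
```

In AdaSDE the analogous coefficient is learned per step via distillation rather than fixed by hand.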
arXiv Detail & Related papers (2025-10-27T12:53:48Z)
- Kernel-Adaptive PI-ELMs for Forward and Inverse Problems in PDEs with Sharp Gradients [0.0]
This paper introduces the Kernel Adaptive Physics-Informed Extreme Learning Machine (KAPI-ELM). It is designed to solve both forward and inverse Partial Differential Equation (PDE) problems involving localized sharp gradients. KAPI-ELM achieves state-of-the-art accuracy in both forward and inverse settings.
arXiv Detail & Related papers (2025-07-14T13:03:53Z)
- PDE-aware Optimizer for Physics-informed Neural Networks [0.0]
We propose a PDE-aware optimizer that adapts parameter updates based on the variance of per-sample PDE residuals. This method addresses gradient misalignment without incurring the heavy computational cost of second-order optimizers such as SOAP. Our results demonstrate the effectiveness of PDE residual-aware adaptivity in enhancing the stability of PINN training.
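The summary above describes scaling parameter updates by the variance of per-sample PDE residuals. A minimal hypothetical sketch of that idea (the name `pde_aware_step` and the exact scaling rule are assumptions, not the paper's formula): when per-sample residuals disagree strongly, the averaged gradient is treated as less trustworthy and the step is shrunk.

```python
import numpy as np

def pde_aware_step(params, grads, per_sample_residuals, lr=1e-2):
    """Toy sketch (hypothetical, not the paper's exact rule): shrink
    the update when per-sample PDE residuals have high variance,
    since the averaged gradient is then less reliable."""
    var = np.var(per_sample_residuals)
    scale = 1.0 / (1.0 + var)      # high variance -> smaller effective step
    return params - lr * scale * grads

params = np.array([1.0, -0.5])
grads = np.array([0.2, 0.1])
low_var = pde_aware_step(params, grads, np.array([0.1, 0.1, 0.1]))
high_var = pde_aware_step(params, grads, np.array([-1.0, 2.0, 5.0]))
# higher residual variance yields a more conservative update
print(np.abs(params - high_var) < np.abs(params - low_var))
```

The design choice here is the same one the summary attributes to the paper: adaptivity driven by residual statistics rather than by second-order gradient information.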
arXiv Detail & Related papers (2025-07-10T19:07:55Z)
- Physics-Informed Distillation of Diffusion Models for PDE-Constrained Generation [19.734778762515468]
Diffusion models have gained increasing attention in the modeling of physical systems, particularly those governed by partial differential equations (PDEs). We propose a simple yet effective post-hoc distillation approach, where PDE constraints are not injected directly into the diffusion process, but instead enforced during a post-hoc distillation stage.
arXiv Detail & Related papers (2025-05-28T14:17:58Z)
- Guided Diffusion Sampling on Function Spaces with Applications to PDEs [112.09025802445329]
We propose a general framework for conditional sampling in PDE-based inverse problems. This is accomplished by a function-space diffusion model and plug-and-play guidance for conditioning. Our method achieves an average 32% accuracy improvement over state-of-the-art fixed-resolution diffusion baselines.
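Plug-and-play guidance of the kind this summary mentions typically nudges the denoised estimate toward consistency with partial observations. A toy finite-dimensional sketch (hypothetical names `guided_denoise_step`, `obs_op`; not the paper's function-space formulation), using a linear masking operator and a gradient step on the data-fidelity term:

```python
import numpy as np

def guided_denoise_step(x, denoiser, y_obs, obs_op, guidance_scale=0.5):
    """Toy sketch of plug-and-play guidance (hypothetical, not the
    paper's function-space method): push the denoised estimate toward
    consistency with observations y_obs under a linear operator obs_op."""
    x0_hat = denoiser(x)                       # model's clean estimate
    # gradient of 0.5 * ||obs_op @ x0_hat - y_obs||^2 w.r.t. x0_hat
    grad = obs_op.T @ (obs_op @ x0_hat - y_obs)
    return x0_hat - guidance_scale * grad

rng = np.random.default_rng(0)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
A = np.eye(4)[:2]                              # observe the first two entries
y = A @ x_true
x_noisy = x_true + 0.5 * rng.standard_normal(4)
identity_denoiser = lambda x: x                # stand-in for a trained model
x_guided = guided_denoise_step(x_noisy, identity_denoiser, y, A)
print(x_guided.shape)                          # (4,)
```

The guidance term only touches the observed coordinates, pulling them toward the measurements while leaving the rest to the denoiser.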
arXiv Detail & Related papers (2025-05-22T17:58:12Z)
- Advancing Generalization in PINNs through Latent-Space Representations [71.86401914779019]
Physics-informed neural networks (PINNs) have made significant strides in modeling dynamical systems governed by partial differential equations (PDEs). We propose PIDO, a novel physics-informed neural PDE solver designed to generalize effectively across diverse PDE configurations. We validate PIDO on a range of benchmarks, including 1D combined equations and 2D Navier-Stokes equations.
arXiv Detail & Related papers (2024-11-28T13:16:20Z)
- Unisolver: PDE-Conditional Transformers Towards Universal Neural PDE Solvers [53.79279286773326]
We present Unisolver, a novel Transformer model trained on diverse data and conditioned on diverse PDEs. Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks, showing impressive performance and generalizability.
arXiv Detail & Related papers (2024-05-27T15:34:35Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- Elucidating the solution space of extended reverse-time SDE for diffusion models [23.637881476921596]
We formulate the sampling process as an Extended Reverse-Time SDE (ER SDE). We offer exact solutions and approximate solutions for VP SDE and VE SDE, respectively. We devise efficient high-quality samplers, namely ER-SDE-Solvers.
arXiv Detail & Related papers (2023-09-12T12:27:17Z)
- PDE-Refiner: Achieving Accurate Long Rollouts with Neural PDE Solvers [40.097474800631]
Time-dependent partial differential equations (PDEs) are ubiquitous in science and engineering.
Deep neural network based surrogates have gained increased interest.
arXiv Detail & Related papers (2023-08-10T17:53:05Z)
- LordNet: An Efficient Neural Network for Learning to Solve Parametric Partial Differential Equations without Simulated Data [47.49194807524502]
We propose LordNet, a tunable and efficient neural network for modeling entanglements.
The experiments on solving Poisson's equation and (2D and 3D) Navier-Stokes equation demonstrate that the long-range entanglements can be well modeled by the LordNet.
arXiv Detail & Related papers (2022-06-19T14:41:08Z)
- Learning to Accelerate Partial Differential Equations via Latent Global Evolution [64.72624347511498]
Latent Evolution of PDEs (LE-PDE) is a simple, fast and scalable method to accelerate the simulation and inverse optimization of PDEs.
We introduce new learning objectives to effectively learn such latent dynamics to ensure long-term stability.
We demonstrate up to 128x reduction in the dimensions to update, and up to 15x improvement in speed, while achieving competitive accuracy.
arXiv Detail & Related papers (2022-06-15T17:31:24Z)
- Stochastic Normalizing Flows [52.92110730286403]
We introduce stochastic normalizing flows for maximum likelihood estimation and variational inference (VI) using stochastic differential equations (SDEs).
Using the theory of rough paths, the underlying Brownian motion is treated as a latent variable and approximated, enabling efficient training of neural SDEs.
These SDEs can be used for constructing efficient chains to sample from the underlying distribution of a given dataset.
arXiv Detail & Related papers (2020-02-21T20:47:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.