Physics-Informed Distillation of Diffusion Models for PDE-Constrained Generation
- URL: http://arxiv.org/abs/2505.22391v1
- Date: Wed, 28 May 2025 14:17:58 GMT
- Title: Physics-Informed Distillation of Diffusion Models for PDE-Constrained Generation
- Authors: Yi Zhang, Difan Zou
- Abstract summary: Diffusion models have gained increasing attention in the modeling of physical systems, particularly those governed by partial differential equations (PDEs). We propose a simple yet effective post-hoc distillation approach, where PDE constraints are not injected directly into the diffusion process but are instead enforced during a post-hoc distillation stage.
- Score: 19.734778762515468
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modeling physical systems in a generative manner offers several advantages, including the ability to handle partial observations, generate diverse solutions, and address both forward and inverse problems. Recently, diffusion models have gained increasing attention in the modeling of physical systems, particularly those governed by partial differential equations (PDEs). However, diffusion models only access noisy data $\boldsymbol{x}_t$ at intermediate steps, making it infeasible to directly enforce constraints on the clean sample $\boldsymbol{x}_0$ at each noise level. As a workaround, constraints are typically applied to the expectation of clean samples $\mathbb{E}[\boldsymbol{x}_0|\boldsymbol{x}_t]$, which is estimated using the learned score network. However, imposing PDE constraints on the expectation does not strictly enforce them on the true clean data, a discrepancy known as Jensen's Gap. This gap creates a trade-off: enforcing PDE constraints may come at the cost of reduced accuracy in generative modeling. To address this, we propose a simple yet effective post-hoc distillation approach, where PDE constraints are not injected directly into the diffusion process but are instead enforced during a post-hoc distillation stage. We term our method Physics-Informed Distillation of Diffusion Models (PIDDM). This distillation not only facilitates single-step generation with improved PDE satisfaction, but also supports forward and inverse problem solving, as well as reconstruction from random partial observations. Extensive experiments across various PDE benchmarks demonstrate that PIDDM significantly improves PDE satisfaction over several recent and competitive baselines, such as PIDM, DiffusionPDE, and ECI-sampling, with lower computational overhead. Our approach can shed light on more efficient and effective strategies for incorporating physical constraints into diffusion models.
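The Jensen's Gap described in the abstract can be illustrated with a toy numerical sketch (not from the paper; the residual $R(x)=x^2$ is a hypothetical stand-in for a nonlinear PDE operator): for a nonlinear residual, the residual of the posterior mean $R(\mathbb{E}[\boldsymbol{x}_0|\boldsymbol{x}_t])$ differs from the expected residual $\mathbb{E}[R(\boldsymbol{x}_0)|\boldsymbol{x}_t]$, so constraining the former does not constrain the latter.

```python
import numpy as np

# Toy illustration of Jensen's Gap. Pretend these are posterior samples
# of the clean data x0 given some noisy xt (assumed Gaussian here).
rng = np.random.default_rng(0)
x0_samples = rng.normal(loc=1.0, scale=0.5, size=100_000)

def residual(x):
    # Hypothetical nonlinear "PDE residual"; any nonlinear R exhibits the gap.
    return x ** 2

mean_of_residual = residual(x0_samples).mean()   # E[R(x0) | xt]
residual_of_mean = residual(x0_samples.mean())   # R(E[x0 | xt])

gap = mean_of_residual - residual_of_mean
print(f"E[R(x0)]   = {mean_of_residual:.4f}")
print(f"R(E[x0])   = {residual_of_mean:.4f}")
print(f"Jensen gap = {gap:.4f}")  # for R(x)=x^2 the gap equals Var(x0) = 0.25
```

For $R(x)=x^2$ the gap is exactly the posterior variance, so it vanishes only when the posterior over $\boldsymbol{x}_0$ collapses to a point, i.e. at low noise levels; at intermediate noise levels the gap is unavoidable, which motivates enforcing constraints in a separate distillation stage rather than during sampling.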
Related papers
- Reinforcement Learning Closures for Underresolved Partial Differential Equations using Synthetic Data [3.835798175447222]
Partial differential equations describe phenomena ranging from epidemics to quantum mechanics and financial markets. Despite recent advances in computational science, solving such PDEs for real-world applications remains expensive because of the necessity of resolving a broad range of temporal scales. We present a framework for developing closure models for PDEs using synthetic data acquired through the method of manufactured solutions.
arXiv Detail & Related papers (2025-05-16T14:34:42Z) - Identifying Drift, Diffusion, and Causal Structure from Temporal Snapshots [10.018568337210876]
APPEX is an iterative algorithm designed to estimate the drift, diffusion, and causal graph of an additive noise SDE, solely from temporal marginals. We show that APPEX iteratively decreases the Kullback-Leibler divergence to the true solution, and demonstrate its effectiveness on simulated data from linear additive noise SDEs.
arXiv Detail & Related papers (2024-10-30T06:28:21Z) - Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - DiffusionPDE: Generative PDE-Solving Under Partial Observation [10.87702379899977]
We introduce a general framework for solving partial differential equations (PDEs) using generative diffusion models.
We show that the learned generative priors lead to a versatile framework for accurately solving a wide range of PDEs under partial observation.
arXiv Detail & Related papers (2024-06-25T17:48:24Z) - AdjointDEIS: Efficient Gradients for Diffusion Models [2.0795007613453445]
We show that the continuous adjoint equations for diffusion SDEs actually simplify to a simple ODE. We also demonstrate the effectiveness of AdjointDEIS for guided generation with an adversarial attack in the form of the face morphing problem.
arXiv Detail & Related papers (2024-05-23T19:51:33Z) - Distilling Diffusion Models into Conditional GANs [90.76040478677609]
We distill a complex multistep diffusion model into a single-step conditional GAN student model.
For an efficient regression loss, we propose E-LatentLPIPS, a perceptual loss operating directly in the diffusion model's latent space.
We demonstrate that our one-step generator outperforms cutting-edge one-step diffusion distillation models.
arXiv Detail & Related papers (2024-05-09T17:59:40Z) - Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z) - Closing the ODE-SDE gap in score-based diffusion models through the Fokker-Planck equation [0.562479170374811]
We rigorously describe the range of dynamics and approximations that arise when training score-based diffusion models.
We show numerically that conventional score-based diffusion models can exhibit significant differences between ODE- and SDE-induced distributions.
arXiv Detail & Related papers (2023-11-27T16:44:50Z) - Gaussian Mixture Solvers for Diffusion Models [84.83349474361204]
We introduce a novel class of SDE-based solvers called GMS for diffusion models.
Our solver outperforms numerous SDE-based solvers in terms of sample quality in image generation and stroke-based synthesis.
arXiv Detail & Related papers (2023-11-02T02:05:38Z) - Semi-Implicit Denoising Diffusion Models (SIDDMs) [50.30163684539586]
Existing models such as Denoising Diffusion Probabilistic Models (DDPM) deliver high-quality, diverse samples but are slowed by an inherently high number of iterative steps.
We introduce a novel approach that tackles the problem by matching implicit and explicit factors.
We demonstrate that our proposed method obtains comparable generative performance to diffusion-based models and vastly superior results to models with a small number of sampling steps.
arXiv Detail & Related papers (2023-06-21T18:49:22Z) - Lipschitz Singularities in Diffusion Models [64.28196620345808]
Diffusion models often exhibit an unbounded Lipschitz constant of the network with respect to the time variable near the zero point. We propose a novel approach, dubbed E-TSDM, which alleviates the Lipschitz singularities of the diffusion model near the zero point. Our work may advance the understanding of the general diffusion process, and also provide insights for the design of diffusion models.
arXiv Detail & Related papers (2023-06-20T03:05:28Z) - Reflected Diffusion Models [93.26107023470979]
We present Reflected Diffusion Models, which reverse a reflected differential equation evolving on the support of the data.
Our approach learns the score function through a generalized score matching loss and extends key components of standard diffusion models.
arXiv Detail & Related papers (2023-04-10T17:54:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.