Stabilizing Physics-Informed Consistency Models via Structure-Preserving Training
- URL: http://arxiv.org/abs/2602.09303v1
- Date: Tue, 10 Feb 2026 00:40:19 GMT
- Title: Stabilizing Physics-Informed Consistency Models via Structure-Preserving Training
- Authors: Che-Chia Chang, Chen-Yang Dai, Te-Sheng Lin, Ming-Chih Lai, Chieh-Hsin Lai,
- Abstract summary: We propose a physics-informed consistency modeling framework for solving partial differential equations (PDEs). We identify a key stability challenge in physics-constrained consistency training, where PDE residuals can drive the model toward trivial or degenerate solutions. We introduce a structure-preserving two-stage training strategy that decouples distribution learning from physics enforcement.
- Score: 7.031010831953522
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a physics-informed consistency modeling framework for solving partial differential equations (PDEs) via fast, few-step generative inference. We identify a key stability challenge in physics-constrained consistency training, where PDE residuals can drive the model toward trivial or degenerate solutions, degrading the learned data distribution. To address this, we introduce a structure-preserving two-stage training strategy that decouples distribution learning from physics enforcement by freezing the coefficient decoder during physics-informed fine-tuning. We further propose a two-step residual objective that enforces physical consistency on refined, structurally valid generative trajectories rather than noisy single-step predictions. The resulting framework enables stable, high-fidelity inference for both unconditional generation and forward problems. We demonstrate that forward solutions can be obtained via a projection-based zero-shot inpainting procedure, achieving accuracy consistent with diffusion baselines at an orders-of-magnitude reduction in computational cost.
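The two-stage recipe described above lends itself to a short sketch. The following is a minimal, hypothetical illustration (not the authors' code): stage 1 fits the data distribution with a simplified consistency-style objective; stage 2 freezes the coefficient decoder and fine-tunes only the backbone with a PDE-residual term evaluated on a refined two-step prediction. The network sizes, the toy Laplace residual, and the loss weights are assumptions.

```python
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(65, 128), nn.SiLU(), nn.Linear(128, 32))
decoder = nn.Linear(32, 64)          # "coefficient decoder": latent -> solution coefficients

def model(x_noisy, t):
    # consistency-style model f_theta(x_t, t) -> predicted clean sample
    return decoder(backbone(torch.cat([x_noisy, t[:, None]], dim=-1)))

def pde_residual(u):
    # toy stand-in residual: discrete 1D Laplacian of u should vanish
    return u[:, 2:] - 2.0 * u[:, 1:-1] + u[:, :-2]

x0 = torch.randn(16, 64)             # placeholder "data"; real data would be PDE solutions

# --- Stage 1: distribution learning (simplified reconstruction objective) ---
opt = torch.optim.Adam(list(backbone.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(100):
    t = torch.rand(16)
    x_t = x0 + t[:, None] * torch.randn_like(x0)        # noised sample
    loss = ((model(x_t, t) - x0) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# --- Stage 2: physics-informed fine-tuning with the decoder frozen ---
for p in decoder.parameters():
    p.requires_grad_(False)                              # structure-preserving freeze
opt = torch.optim.Adam(backbone.parameters(), lr=1e-4)
for _ in range(100):
    t = torch.rand(16)
    x_t = x0 + t[:, None] * torch.randn_like(x0)
    u1 = model(x_t, t)                                   # first denoising step
    t2 = 0.5 * t
    u2 = model(u1 + t2[:, None] * torch.randn_like(u1), t2)   # refined second step
    loss = ((u2 - x0) ** 2).mean() + 0.1 * (pde_residual(u2) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```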
Related papers
- Variational Grey-Box Dynamics Matching [45.595103078998385]
We present a novel grey-box method that integrates incomplete physics models directly into generative models. Our approach learns dynamics from observational trajectories alone, without ground-truth physics parameters. Our experiments on representative ODE/PDE problems show that our method performs on par with or superior to fully data-driven approaches.
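As a rough illustration of the grey-box idea (not this paper's variational formulation), the sketch below adds a learned correction to an incomplete known drift and fits it by matching observed trajectories; the toy dynamics and rollout loss are assumptions.

```python
import torch
import torch.nn as nn

known_physics = lambda x: -x                       # incomplete physics: pure decay
correction = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1))

def rollout(x0, steps, dt=0.05):
    xs, x = [x0], x0
    for _ in range(steps):
        x = x + dt * (known_physics(x) + correction(x))   # grey-box dynamics
        xs.append(x)
    return torch.stack(xs)

# synthetic "observations" from the true dynamics dx/dt = -x + sin(x)
with torch.no_grad():
    x, obs = torch.ones(8, 1), [torch.ones(8, 1)]
    for _ in range(20):
        x = x + 0.05 * (-x + torch.sin(x))
        obs.append(x)
    obs = torch.stack(obs)

opt = torch.optim.Adam(correction.parameters(), lr=1e-2)
for _ in range(200):
    loss = ((rollout(obs[0], 20) - obs) ** 2).mean()      # trajectory matching only
    opt.zero_grad(); loss.backward(); opt.step()
```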
arXiv Detail & Related papers (2026-02-19T15:43:22Z) - Conditional Denoising Model as a Physical Surrogate Model [1.0616273526777913]
We introduce a generative model designed to learn the geometry of the physical manifold itself. By training the network to restore clean states from noisy ones, the model learns a vector field that points continuously towards the valid solution subspace.
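A hedged sketch of that idea: a denoiser trained to map corrupted states back to clean ones implicitly defines a vector field, denoiser(x) - x, pointing toward the data manifold; iterating it moves arbitrary states onto the learned solution subspace. The network, noise level, and step size below are assumptions.

```python
import torch
import torch.nn as nn

denoiser = nn.Sequential(nn.Linear(64, 128), nn.SiLU(), nn.Linear(128, 64))
clean = torch.randn(256, 64)                      # placeholder for valid physical states

opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)
for _ in range(200):
    noisy = clean + 0.3 * torch.randn_like(clean)
    loss = ((denoiser(noisy) - clean) ** 2).mean()   # restore clean states from noisy ones
    opt.zero_grad(); loss.backward(); opt.step()

# inference: repeatedly follow the learned field toward the manifold
x = torch.randn(1, 64)
with torch.no_grad():
    for _ in range(10):
        x = x + 0.5 * (denoiser(x) - x)
```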
arXiv Detail & Related papers (2026-01-28T20:32:20Z) - ETC: training-free diffusion models acceleration with Error-aware Trend Consistency [46.40478218579471]
Recent training-free methods accelerate the diffusion process by reusing model outputs. These methods ignore denoising trends and lack error control for model-specific tolerance. We introduce Error-aware Trend Consistency (ETC), a framework that leverages the smooth continuity of diffusion trajectories. ETC achieves a 2.65x acceleration over FLUX with negligible degradation of consistency.
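A schematic illustration of output reuse with a trend check (not ETC's actual algorithm): extrapolate the recent output trend and skip the network call when the trend stays within a tolerance of the last trusted evaluation. The stand-in denoiser, tolerance, and update rule are assumptions.

```python
import numpy as np

def denoiser(x, t):                       # stand-in for an expensive diffusion network
    return x * (1.0 - t)

def sample(x, timesteps, tol=1e-2):
    prev_out, prev_prev, calls = None, None, 0
    for t in timesteps:
        if prev_out is not None and prev_prev is not None:
            trend = prev_out + (prev_out - prev_prev)          # linear trend extrapolation
            if np.linalg.norm(trend - prev_out) < tol:         # error-aware reuse check
                out = trend
            else:
                out, calls = denoiser(x, t), calls + 1
        else:
            out, calls = denoiser(x, t), calls + 1
        prev_prev, prev_out = prev_out, out
        x = x - 0.1 * (x - out)                                # simplified update step
    return x, calls

x_final, n_calls = sample(np.random.randn(64), np.linspace(1.0, 0.0, 50))
```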
arXiv Detail & Related papers (2025-10-28T07:08:09Z) - ResAD: Normalized Residual Trajectory Modeling for End-to-End Autonomous Driving [64.42138266293202]
ResAD is a Normalized Residual Trajectory Modeling framework. It reframes the learning task to predict the residual deviation from an inertial reference. On the NAVSIM benchmark, ResAD achieves a state-of-the-art PDMS of 88.6 using a vanilla diffusion policy.
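A hedged sketch of residual trajectory modeling: the target is the deviation of the future trajectory from an inertial (constant-velocity) reference, optionally normalized; the full trajectory is recovered by adding the reference back. The normalization scheme here is an assumption.

```python
import numpy as np

def inertial_reference(pos, vel, horizon, dt=0.1):
    # constant-velocity extrapolation from the current state
    return pos + vel * dt * np.arange(1, horizon + 1)[:, None]

future = np.cumsum(np.random.randn(8, 2) * 0.3, axis=0)     # ground-truth future waypoints
pos, vel = np.zeros(2), np.array([1.0, 0.0])

ref = inertial_reference(pos, vel, horizon=8)
residual = future - ref                                      # learning target
scale = np.abs(residual).max() + 1e-6
normalized_residual = residual / scale                       # what the network regresses

# at inference, a predicted residual is de-normalized and added back to the reference
reconstructed = ref + normalized_residual * scale
assert np.allclose(reconstructed, future)
```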
arXiv Detail & Related papers (2025-10-09T17:59:36Z) - Physics-Constrained Fine-Tuning of Flow-Matching Models for Generation and Inverse Problems [3.3811247908085855]
We present a framework for fine-tuning flow-matching generative models to enforce physical constraints and solve inverse problems in scientific systems. Our approach bridges generative modelling and scientific inference, opening new avenues for simulation-augmented discovery and data-efficient modelling of physical systems.
arXiv Detail & Related papers (2025-08-05T09:32:04Z) - PhysicsCorrect: A Training-Free Approach for Stable Neural PDE Simulations [4.7903561901859355]
We present PhysicsCorrect, a training-free correction framework that enforces PDE consistency at each prediction step. Our key innovation is an efficient caching strategy that precomputes the Jacobian and its pseudoinverse during an offline warm-up phase. Across three representative PDE systems, PhysicsCorrect reduces prediction errors by up to 100x while adding negligible inference time.
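A minimal sketch of the correction idea: precompute the residual Jacobian and its pseudoinverse once (offline), then apply a cheap Newton-like update after every neural prediction so the state is pushed back onto the PDE constraint. The linear toy residual r(u) = A u - b is an assumption standing in for a real PDE residual.

```python
import numpy as np

n = 32
A = np.eye(n) * 2 - np.eye(n, k=1) - np.eye(n, k=-1)   # discrete 1D Laplacian (toy PDE operator)
b = np.zeros(n)

J_pinv = np.linalg.pinv(A)                              # cached offline: Jacobian pseudoinverse

def correct(u_pred):
    residual = A @ u_pred - b                           # PDE residual of the network output
    return u_pred - J_pinv @ residual                   # one Newton-style correction step

u_pred = np.random.randn(n)                             # stand-in for a neural surrogate output
u_corrected = correct(u_pred)
print(np.linalg.norm(A @ u_pred - b), np.linalg.norm(A @ u_corrected - b))
```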
arXiv Detail & Related papers (2025-07-03T01:22:57Z) - Flow Matching Meets PDEs: A Unified Framework for Physics-Constrained Generation [21.321570407292263]
We propose Physics-Based Flow Matching, a generative framework that embeds physical constraints, both PDE residuals and algebraic relations, into the flow matching objective. We show that our approach yields up to $8\times$ more accurate physical residuals compared to FM, while clearly outperforming existing algorithms in terms of distributional accuracy.
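A hedged sketch of combining a flow-matching objective with a physics penalty: the standard conditional FM loss is augmented with a PDE-residual term evaluated on the predicted endpoint. The toy residual and the weighting lambda are assumptions, not the paper's exact objective.

```python
import torch
import torch.nn as nn

v_net = nn.Sequential(nn.Linear(65, 128), nn.SiLU(), nn.Linear(128, 64))

def pde_residual(u):
    return u[:, 2:] - 2 * u[:, 1:-1] + u[:, :-2]        # toy: discrete Laplace residual

x1 = torch.randn(32, 64)                                 # data samples (PDE solutions)
opt = torch.optim.Adam(v_net.parameters(), lr=1e-3)
lam = 0.1

for _ in range(200):
    x0 = torch.randn_like(x1)                            # noise samples
    t = torch.rand(32, 1)
    x_t = (1 - t) * x0 + t * x1                          # linear interpolation path
    v = v_net(torch.cat([x_t, t], dim=-1))
    x1_hat = x_t + (1 - t) * v                           # predicted endpoint
    loss = ((v - (x1 - x0)) ** 2).mean() + lam * (pde_residual(x1_hat) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```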
arXiv Detail & Related papers (2025-06-10T09:13:37Z) - Physics-informed Reduced Order Modeling of Time-dependent PDEs via Differentiable Solvers [1.224954637705144]
We propose Physics-informed ROM ($\Phi$-ROM) by incorporating differentiable PDE solvers into the training procedure. Our model outperforms state-of-the-art data-driven ROMs and other physics-informed strategies. $\Phi$-ROM learns to recover and forecast the solution fields even when trained or evaluated with sparse and irregular observations of the fields.
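An illustrative sketch (assumptions throughout, not $\Phi$-ROM itself) of putting a differentiable solver in the training loop of a reduced-order model: the decoded field is advanced by an explicit, autograd-friendly heat-equation step and compared with the next snapshot, so solver physics shapes the latent space.

```python
import torch
import torch.nn as nn

decoder = nn.Sequential(nn.Linear(8, 64), nn.Tanh(), nn.Linear(64, 32))
z = nn.Parameter(torch.randn(16, 8))                      # learnable latent trajectory

def heat_step(u, nu=0.1, dt=0.01):
    # differentiable explicit step of u_t = nu * u_xx with periodic boundaries
    return u + dt * nu * (torch.roll(u, 1, -1) - 2 * u + torch.roll(u, -1, -1))

snapshots = torch.randn(16, 32)                           # placeholder solution snapshots

opt = torch.optim.Adam(list(decoder.parameters()) + [z], lr=1e-3)
for _ in range(200):
    u = decoder(z)
    data_loss = ((u - snapshots) ** 2).mean()
    solver_loss = ((heat_step(u[:-1]) - u[1:]) ** 2).mean()   # solver consistency in time
    loss = data_loss + solver_loss
    opt.zero_grad(); loss.backward(); opt.step()
```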
arXiv Detail & Related papers (2025-05-20T16:47:04Z) - Paving the way for scientific foundation models: enhancing generalization and robustness in PDEs with constraint-aware pre-training [49.8035317670223]
A scientific foundation model (SciFM) is emerging as a promising tool for learning transferable representations across diverse domains. We propose incorporating PDE residuals into pre-training either as the sole learning signal or in combination with data loss to compensate for limited or infeasible training data. Our results show that pre-training with PDE constraints significantly enhances generalization, outperforming models trained solely on solution data.
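A minimal sketch of the two pre-training choices described: the PDE residual can serve as the sole learning signal (`use_data_loss = False`) or be combined with a supervised data loss when solution data exist. The network, the heat-equation residual, and the collocation sampling are assumptions.

```python
import torch
import torch.nn as nn

surrogate = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def residual_loss(xt):
    # residual of u_t = u_xx at collocation points (columns: x, t), via autograd
    xt = xt.clone().requires_grad_(True)
    u = surrogate(xt)
    du = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = du[:, :1], du[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, :1]
    return ((u_t - u_xx) ** 2).mean()

use_data_loss = True
xt_data, u_data = torch.rand(128, 2), torch.zeros(128, 1)   # placeholder labelled data
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for _ in range(200):
    loss = residual_loss(torch.rand(256, 2))                 # PDE residual as learning signal
    if use_data_loss:
        loss = loss + ((surrogate(xt_data) - u_data) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```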
arXiv Detail & Related papers (2025-03-24T19:12:39Z) - Gradient-Free Generation for Hard-Constrained Systems [41.558608119074755]
Existing constrained generative models rely heavily on gradient information, which is often sparse or computationally expensive in some fields. We introduce a novel framework for adapting pre-trained, unconstrained flow-matching models to satisfy constraints exactly in a zero-shot manner.
arXiv Detail & Related papers (2024-12-02T18:36:26Z) - Advancing Generalization in PINNs through Latent-Space Representations [71.86401914779019]
Physics-informed neural networks (PINNs) have made significant strides in modeling dynamical systems governed by partial differential equations (PDEs). We propose PIDO, a novel physics-informed neural PDE solver designed to generalize effectively across diverse PDE configurations. We validate PIDO on a range of benchmarks, including 1D combined equations and 2D Navier-Stokes equations.
arXiv Detail & Related papers (2024-11-28T13:16:20Z) - A Posteriori Evaluation of a Physics-Constrained Neural Ordinary Differential Equations Approach Coupled with CFD Solver for Modeling Stiff Chemical Kinetics [4.125745341349071]
We extend the NeuralODE framework for stiff chemical kinetics by incorporating mass conservation constraints directly into the loss function during training.
This ensures that the total mass and the elemental mass are conserved, a critical requirement for reliable downstream integration with CFD solvers.
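A hedged sketch of adding conservation penalties to a neural kinetics model: besides fitting observed rates, the loss penalizes drift in total mass (mass fractions summing to one) and in elemental mass (E @ Y fixed). The species count, the elemental-composition matrix, and the targets are toy assumptions.

```python
import torch
import torch.nn as nn

n_species = 5
rate_net = nn.Sequential(nn.Linear(n_species + 1, 64), nn.Tanh(), nn.Linear(64, n_species))
E = torch.rand(3, n_species)                             # elemental composition matrix (assumed)

Y = torch.softmax(torch.randn(64, n_species), dim=-1)    # mass fractions, sum to 1
T = torch.rand(64, 1)                                     # temperature
dY_true = torch.randn(64, n_species) * 1e-2               # placeholder target rates

opt = torch.optim.Adam(rate_net.parameters(), lr=1e-3)
for _ in range(200):
    dY = rate_net(torch.cat([Y, T], dim=-1))
    data_loss = ((dY - dY_true) ** 2).mean()
    mass_loss = (dY.sum(dim=-1) ** 2).mean()              # total mass: d/dt sum(Y) = 0
    elem_loss = ((dY @ E.T) ** 2).mean()                   # elemental mass: d/dt (E Y) = 0
    loss = data_loss + mass_loss + elem_loss
    opt.zero_grad(); loss.backward(); opt.step()
```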
arXiv Detail & Related papers (2023-11-22T22:40:49Z) - Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution with a Gaussian-smoothed model and show that, via Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
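A hedged numerical sketch of the Stein-identity trick: for a Gaussian-smoothed function f_sigma(x) = E[f(x + delta)], second derivatives can be estimated from forward evaluations alone, with no back-propagation. Here the Laplacian of the smoothed function is estimated by Monte Carlo and checked against the analytic value 2*d for f(x) = ||x||^2; the sample count and sigma are arbitrary choices.

```python
import numpy as np

def f(x):
    return np.sum(x ** 2, axis=-1)          # toy "model"; its Laplacian is 2*d everywhere

d, sigma, n_samples = 3, 0.1, 200_000
x = np.random.randn(d)
delta = sigma * np.random.randn(n_samples, d)

# Stein estimator of the Laplacian (trace of the Hessian) of the smoothed function:
# E[ (||delta||^2 - sigma^2 d) / sigma^4 * (f(x + delta) - f(x)) ]
weights = (np.sum(delta ** 2, axis=-1) - sigma ** 2 * d) / sigma ** 4
laplacian_est = np.mean(weights * (f(x + delta) - f(x)))
print(laplacian_est, 2 * d)                 # Monte Carlo estimate vs. analytic Laplacian
```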
arXiv Detail & Related papers (2022-02-18T18:07:54Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
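A minimal message-passing sketch (an illustration, not the paper's architecture): on a 1D grid graph, edge messages are computed by an MLP from the two endpoint states and their relative position, aggregated at each node, and used to update the node state, replacing hand-designed stencils with learned functions. Sizes and MLPs are assumptions.

```python
import torch
import torch.nn as nn

n_nodes, hidden = 64, 32
edge_index = torch.stack([torch.arange(n_nodes - 1), torch.arange(1, n_nodes)])   # chain graph
edge_index = torch.cat([edge_index, edge_index.flip(0)], dim=1)                   # both directions

msg_mlp = nn.Sequential(nn.Linear(2 * hidden + 1, hidden), nn.SiLU(), nn.Linear(hidden, hidden))
upd_mlp = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.SiLU(), nn.Linear(hidden, hidden))

def mp_step(h, pos):
    src, dst = edge_index
    rel = (pos[src] - pos[dst]).unsqueeze(-1)                        # relative coordinates
    messages = msg_mlp(torch.cat([h[src], h[dst], rel], dim=-1))
    agg = torch.zeros_like(h).index_add_(0, dst, messages)           # sum-aggregate per node
    return h + upd_mlp(torch.cat([h, agg], dim=-1))                  # residual node update

h = torch.randn(n_nodes, hidden)                                     # encoded field values
pos = torch.linspace(0, 1, n_nodes)
h = mp_step(h, pos)
```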
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.