Learning Under Laws: A Constraint-Projected Neural PDE Solver that Eliminates Hallucinations
- URL: http://arxiv.org/abs/2511.03578v1
- Date: Wed, 05 Nov 2025 16:01:19 GMT
- Title: Learning Under Laws: A Constraint-Projected Neural PDE Solver that Eliminates Hallucinations
- Authors: Mainak Singha
- Abstract summary: Neural networks can approximate solutions to partial differential equations, but they often break the very laws they are meant to model. We address this by training within the laws of physics rather than beside them. Our framework, called Constraint-Projected Learning (CPL), keeps every update physically admissible.
- Score: 4.693270291878929
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural networks can approximate solutions to partial differential equations, but they often break the very laws they are meant to model: creating mass from nowhere, drifting shocks, or violating conservation and entropy. We address this by training within the laws of physics rather than beside them. Our framework, called Constraint-Projected Learning (CPL), keeps every update physically admissible by projecting network outputs onto the intersection of constraint sets defined by conservation, Rankine-Hugoniot balance, entropy, and positivity. The projection is differentiable and adds only about 10% computational overhead, making it fully compatible with back-propagation. We further stabilize training with total-variation damping (TVD) to suppress small oscillations and a rollout curriculum that enforces consistency over long prediction horizons. Together, these mechanisms eliminate both hard and soft violations: conservation holds at machine precision, total-variation growth vanishes, and entropy and error remain bounded. On Burgers and Euler systems, CPL produces stable, physically lawful solutions without loss of accuracy. Instead of hoping neural solvers will respect physics, CPL makes that behavior an intrinsic property of the learning process.
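The projection idea at the heart of CPL can be illustrated with a small sketch. The example below is a minimal NumPy illustration, not the paper's implementation: it assumes a single linear mass-conservation constraint and a positivity box on a 1-D grid, and composes the two projections by simple alternation (the paper projects onto the intersection of several constraint sets, including Rankine-Hugoniot and entropy conditions). Because each projection is an affine or piecewise-linear map, the composition is differentiable almost everywhere and can sit between the network output and the training loss. The helper names (`project_mass`, `project_onto_laws`, `tv_penalty`) are hypothetical.

```python
import numpy as np

def project_mass(u, dx, m0):
    """Euclidean projection of a 1-D field u onto the affine set
    {v : sum(v) * dx == m0} (discrete mass conservation).
    The map is affine, hence differentiable end to end."""
    n = u.size
    residual = u.sum() * dx - m0      # current mass violation
    return u - residual / (n * dx)    # uniform correction restores the mass exactly

def project_positivity(u):
    """Projection onto the box {v : v >= 0} (e.g. density positivity)."""
    return np.maximum(u, 0.0)

def project_onto_laws(u, dx, m0, n_iters=50):
    """Alternating projections toward the intersection of the two sets.
    (Dykstra's algorithm would give the exact intersection projection;
    plain alternation is enough for a sketch.)"""
    for _ in range(n_iters):
        u = project_mass(project_positivity(u), dx, m0)
    return u

def tv_penalty(u):
    """Discrete total variation of the field; a TVD-style damping term
    penalizes its growth to suppress spurious oscillations."""
    return np.abs(np.diff(u)).sum()
```

Because the last step in the loop is the mass projection, the returned field conserves mass to machine precision while positivity holds up to the alternation tolerance; swapping the order trades those roles.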
Related papers
- Unsupervised Physics-Informed Operator Learning through Multi-Stage Curriculum Training [1.5620806570871846]
We introduce a physics-informed training strategy that achieves convergence by enforcing boundary conditions in the loss landscape. At each stage the formulation is revised, acting as a continuation mechanism that restores stability and prevents stagnation. Across canonical benchmarks, PhIS-FNO attains a level of accuracy comparable to that of supervised learning.
arXiv Detail & Related papers (2026-02-02T16:06:57Z) - Conditionally adaptive augmented Lagrangian method for physics-informed learning of forward and inverse problems using artificial neural networks [0.24578723416255746]
We present several advances to the physics and equality constrained artificial neural networks (PECANN) framework. We generalize the augmented Lagrangian method (ALM) to support multiple independent penalty parameters. We reformulate pointwise constraint enforcement and Lagrange multipliers as expectations over constraint terms.
arXiv Detail & Related papers (2025-08-21T16:22:40Z) - PhysicsCorrect: A Training-Free Approach for Stable Neural PDE Simulations [4.7903561901859355]
We present PhysicsCorrect, a training-free correction framework that enforces PDE consistency at each prediction step. Our key innovation is an efficient caching strategy that precomputes the Jacobian and its pseudoinverse during an offline warm-up phase. Across three representative PDE systems, PhysicsCorrect reduces prediction errors by up to 100x while adding negligible inference time.
arXiv Detail & Related papers (2025-07-03T01:22:57Z) - Enabling Automatic Differentiation with Mollified Graph Neural Operators [73.52999622724101]
We propose the mollified graph neural operator ($m$GNO), the first method to leverage automatic differentiation and compute exact gradients on arbitrary geometries. For a PDE example on regular grids, $m$GNO paired with autograd reduced the L2 relative data error by 20x compared to finite differences. It can also solve PDEs on unstructured point clouds seamlessly, using physics losses only, at resolutions vastly lower than those needed for finite differences to be accurate enough.
arXiv Detail & Related papers (2025-04-11T06:16:30Z) - Coupled Integral PINN for conservation law [1.9720482348156743]
The Physics-Informed Neural Network (PINN) is an innovative approach to solve a diverse array of partial differential equations.
This paper introduces a novel Coupled Integral PINN methodology that fits the integral form of the conservation equations using additional neural networks.
arXiv Detail & Related papers (2024-11-18T04:32:42Z) - A TVD neural network closure and application to turbulent combustion [1.374949083138427]
Trained neural networks (NN) have attractive features for closing governing equations.
A NN formulation is introduced to preclude spurious oscillations that violate solution boundedness or positivity.
It is embedded in the discretized equations as a machine learning closure and strictly constrained.
arXiv Detail & Related papers (2024-08-06T19:22:13Z) - Near-Optimal Solutions of Constrained Learning Problems [85.48853063302764]
In machine learning systems, the need to curtail their behavior has become increasingly apparent.
This is evidenced by recent advances toward developing models that satisfy robustness and fairness requirements. Our results show that rich parametrizations effectively mitigate the non-convexity of constrained learning problems.
arXiv Detail & Related papers (2024-03-18T14:55:45Z) - Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
arXiv Detail & Related papers (2023-06-06T09:12:49Z) - Learning Neural Constitutive Laws From Motion Observations for Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw), which utilizes a network architecture that strictly guarantees standard constitutive priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.