Spectral Analysis of Hard-Constraint PINNs: The Spatial Modulation Mechanism of Boundary Functions
- URL: http://arxiv.org/abs/2512.23295v1
- Date: Mon, 29 Dec 2025 08:31:58 GMT
- Title: Spectral Analysis of Hard-Constraint PINNs: The Spatial Modulation Mechanism of Boundary Functions
- Authors: Yuchen Xie, Honghang Chi, Haopeng Quan, Yahui Wang, Wei Wang, Yu Ma
- Abstract summary: This work reveals that the boundary function $B$ introduces a multiplicative spatial modulation that fundamentally alters the learning landscape. A rigorous Neural Tangent Kernel (NTK) framework for HC-PINNs is established, deriving the explicit kernel composition law. It is shown that widely used boundary functions can inadvertently induce spectral collapse, leading to optimization stagnation despite exact boundary satisfaction.
- Score: 4.170072254495455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physics-Informed Neural Networks with hard constraints (HC-PINNs) are increasingly favored for their ability to strictly enforce boundary conditions via a trial function ansatz $\tilde{u} = A + B \cdot N$, yet the theoretical mechanisms governing their training dynamics have remained unexplored. Unlike soft-constrained formulations, where boundary terms act as additive penalties, this work reveals that the boundary function $B$ introduces a multiplicative spatial modulation that fundamentally alters the learning landscape. A rigorous Neural Tangent Kernel (NTK) framework for HC-PINNs is established, deriving the explicit kernel composition law. This composition law shows that the boundary function $B(\vec{x})$ acts as a spectral filter, reshaping the eigenspectrum of the neural network's native kernel. Through spectral analysis, the effective rank of the residual kernel is identified as a deterministic predictor of training convergence, superior to classical condition numbers. It is shown that widely used boundary functions can inadvertently induce spectral collapse, leading to optimization stagnation despite exact boundary satisfaction. Validated across multi-dimensional benchmarks, this framework transforms the design of boundary functions from a heuristic choice into a principled spectral optimization problem, providing a solid theoretical foundation for geometric hard constraints in scientific machine learning.
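As a rough illustration of the kernel composition law described in the abstract, the snippet below is a minimal sketch rather than the paper's implementation: it uses a random-Fourier-feature Gram matrix as a stand-in for the network's native NTK of $N$, applies the multiplicative modulation $K_B(x, x') = B(x)\,K_N(x, x')\,B(x')$ with the common 1D Dirichlet boundary function $B(x) = x(1 - x)$, and compares a spectral-entropy effective rank before and after modulation. The boundary function, the surrogate kernel, and this particular effective-rank definition are illustrative assumptions, not necessarily the paper's exact setup.

```python
import numpy as np

# Collocation points on (0, 1) with Dirichlet boundaries at x = 0 and x = 1.
x = np.linspace(0.0, 1.0, 64)

def B(x):
    # A common boundary function that vanishes on the boundary (illustrative choice).
    return x * (1.0 - x)

# Stand-in for the network's native kernel K_N: a Gram matrix of random
# Fourier features. The actual NTK would be built from parameter gradients of N.
rng = np.random.default_rng(0)
W = rng.normal(scale=4.0, size=(1, 256))
b = rng.uniform(0.0, 2.0 * np.pi, size=256)
phi = np.sqrt(2.0 / 256) * np.cos(x[:, None] * W + b)   # features, shape (64, 256)
K_N = phi @ phi.T                                        # native kernel

# Multiplicative modulation induced by the hard-constraint ansatz u~ = A + B * N:
# K_B(x, x') = B(x) * K_N(x, x') * B(x').
K_B = B(x)[:, None] * K_N * B(x)[None, :]

def effective_rank(K):
    # Spectral-entropy effective rank (one common definition, assumed here).
    lam = np.clip(np.linalg.eigvalsh(K), 0.0, None)
    p = lam / lam.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

print("effective rank, native kernel   :", effective_rank(K_N))
print("effective rank, modulated kernel:", effective_rank(K_B))
```

In such toy settings the modulation typically concentrates the spectrum and lowers the effective rank, which is the qualitative spectral-filtering effect the abstract describes; the exact values depend entirely on the assumed kernel and boundary function.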
Related papers
- A Boundary Integral-based Neural Operator for Mesh Deformation [10.460831049056761]
This paper presents an efficient mesh deformation method based on boundary integration and neural operators. A key technical advantage of our framework is the mathematical decoupling of the physical integration process from the geometric representation. Numerical experiments, including large deformations of flexible beams and rigid-body motions of NACA airfoils, confirm the model's high accuracy and strict adherence to the principles of linearity and superposition.
arXiv Detail & Related papers (2026-02-27T06:09:07Z) - Unsupervised Physics-Informed Operator Learning through Multi-Stage Curriculum Training [1.5620806570871846]
We introduce a physics-informed training strategy that achieves convergence by enforcing boundary conditions in the loss landscape. At each stage the limitation is re-formed, acting as a continuation mechanism that restores stability and prevents stagnation. Across canonical benchmarks, PhIS-FNO attains a level of accuracy comparable to that of supervised learning.
arXiv Detail & Related papers (2026-02-02T16:06:57Z) - Physics-Informed Chebyshev Polynomial Neural Operator for Parametric Partial Differential Equations [17.758049557300826]
We introduce the Physics-Informed Chebyshev Polynomial Neural Operator (CPNO). CPNO replaces unstable monomial expansions with a numerically stable Chebyshev spectral basis. Experiments on benchmark parameterized PDEs show that CPNO achieves superior accuracy, faster convergence, and enhanced robustness to hyperparameters.
arXiv Detail & Related papers (2026-02-02T07:19:56Z) - NeuraLSP: An Efficient and Rigorous Neural Left Singular Subspace Preconditioner for Conjugate Gradient Methods [49.84495044725856]
NeuraLSP is a novel neural preconditioner combined with a new loss metric. Our method offers both theoretical guarantees and empirical robustness to rank inflation, achieving up to a 53% speedup.
arXiv Detail & Related papers (2026-01-28T02:15:16Z) - A neural optimization framework for free-boundary diffeomorphic mapping problems and its applications [0.42970700836450487]
We propose a neural surrogate, the Spectral Beltrami Network (SBN), that embeds LSQC energy into a multiscale mesh-spectral architecture. We then propose the SBN-guided optimization framework SBN-Opt, which optimizes free-boundary diffeomorphisms for the problem, with local geometric distortion explicitly controllable.
arXiv Detail & Related papers (2025-11-12T03:43:28Z) - Graph Neural Regularizers for PDE Inverse Problems [62.49743146797144]
We present a framework for solving a broad class of ill-posed inverse problems governed by partial differential equations (PDEs). The forward problem is numerically solved using the finite element method (FEM). We employ physics-inspired graph neural networks as learned regularizers, providing a robust, interpretable, and generalizable alternative to standard approaches.
arXiv Detail & Related papers (2025-10-23T21:43:25Z) - Generalization Bound of Gradient Flow through Training Trajectory and Data-dependent Kernel [55.82768375605861]
We establish a generalization bound for gradient flow that aligns with the classical Rademacher complexity for kernel methods. Unlike static kernels such as NTK, the LPK captures the entire training trajectory, adapting to both data and optimization dynamics.
arXiv Detail & Related papers (2025-06-12T23:17:09Z) - Reliable and efficient inverse analysis using physics-informed neural networks with normalized distance functions and adaptive weight tuning [0.0]
PINN solutions are often limited by the treatment of boundary conditions. We propose an integrated framework that combines normalized distance functions with adaptive weight tuning. This provides a reliable and efficient approach for inverse analysis using PINNs.
arXiv Detail & Related papers (2025-04-25T05:39:09Z) - The Finite Element Neural Network Method: One Dimensional Study [0.0]
This research introduces the finite element neural network method (FENNM) within the framework of the Petrov-Galerkin method. FENNM uses convolution operations to approximate the weighted residual of the differential equations. This enables the integration of forcing terms and natural boundary conditions into the loss function, similar to conventional finite element method (FEM) solvers.
arXiv Detail & Related papers (2025-01-21T21:39:56Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape, and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Neural Fields with Hard Constraints of Arbitrary Differential Order [61.49418682745144]
We develop a series of approaches for enforcing hard constraints on neural fields.
The constraints can be specified as a linear operator applied to the neural field and its derivatives.
Our approaches are demonstrated in a wide range of real-world applications.
arXiv Detail & Related papers (2023-06-15T08:33:52Z) - A Functional-Space Mean-Field Theory of Partially-Trained Three-Layer Neural Networks [49.870593940818715]
We study the infinite-width limit of a type of three-layer NN model whose first layer is random and fixed.
Our theory accommodates different scaling choices of the model, resulting in two regimes of the MF limit that demonstrate distinctive behaviors.
arXiv Detail & Related papers (2022-10-28T17:26:27Z) - Convex Analysis of the Mean Field Langevin Dynamics [49.66486092259375]
A convergence rate analysis of the mean field Langevin dynamics is presented.
The proximal Gibbs distribution $p_q$ associated with the dynamics allows us to develop a convergence theory parallel to classical results in convex optimization.
arXiv Detail & Related papers (2022-01-25T17:13:56Z)