Architecture-Optimization Co-Design for Physics-Informed Neural Networks Via Attentive Representations and Conflict-Resolved Gradients
- URL: http://arxiv.org/abs/2601.12971v1
- Date: Mon, 19 Jan 2026 11:32:25 GMT
- Title: Architecture-Optimization Co-Design for Physics-Informed Neural Networks Via Attentive Representations and Conflict-Resolved Gradients
- Authors: Pancheng Niu, Jun Guo, Qiaolin He, Yongming Chen, Yanchao Shi,
- Abstract summary: We study PINN training from a unified architecture-optimization perspective. We propose a layer-wise dynamic attention mechanism to enhance representational flexibility. We then reformulate PINN training as a multi-task learning problem and introduce a conflict-resolved gradient update strategy.
- Score: 5.447935819547941
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics-Informed Neural Networks (PINNs) provide a learning-based framework for solving partial differential equations (PDEs) by embedding governing physical laws into neural network training. In practice, however, their performance is often hindered by limited representational capacity and optimization difficulties caused by competing physical constraints and conflicting gradients. In this work, we study PINN training from a unified architecture-optimization perspective. We first propose a layer-wise dynamic attention mechanism to enhance representational flexibility, resulting in the Layer-wise Dynamic Attention PINN (LDA-PINN). We then reformulate PINN training as a multi-task learning problem and introduce a conflict-resolved gradient update strategy to alleviate gradient interference, leading to the Gradient-Conflict-Resolved PINN (GC-PINN). By integrating these two components, we develop the Architecture-Conflict-Resolved PINN (ACR-PINN), which combines attentive representations with conflict-aware optimization while preserving the standard PINN loss formulation. Extensive experiments on benchmark PDEs, including the Burgers, Helmholtz, Klein-Gordon, and lid-driven cavity flow problems, demonstrate that ACR-PINN achieves faster convergence and significantly lower relative $L_2$ and $L_\infty$ errors than standard PINNs. These results highlight the effectiveness of architecture-optimization co-design for improving the robustness and accuracy of PINN-based solvers.
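The abstract does not spell out the conflict-resolved update rule. One widely used reference point for resolving gradient conflicts in multi-task training is PCGrad-style gradient surgery, sketched below in NumPy; the function name `resolve_conflicts` and the projection rule are illustrative assumptions, not necessarily the paper's exact strategy:

```python
import numpy as np

def resolve_conflicts(grads, rng=None):
    """PCGrad-style gradient surgery: for each pair of task gradients
    (e.g. PDE residual vs. boundary/initial-condition losses), project
    g_i onto the normal plane of g_j whenever their dot product is
    negative, then sum the projected gradients into one update."""
    rng = np.random.default_rng() if rng is None else rng
    projected = []
    for i in range(len(grads)):
        g = grads[i].astype(float)
        for j in rng.permutation(len(grads)):  # random pairing order, as in PCGrad
            if j == i:
                continue
            h = grads[j]
            dot = g @ h
            if dot < 0:  # conflicting directions: remove the conflicting component
                g -= dot / (h @ h) * h
        projected.append(g)
    return np.sum(projected, axis=0)  # combined, conflict-free update direction
```

With two conflicting gradients such as [1, 0] and [-1, 1], each is projected onto the normal plane of the other before summation, so the combined step no longer undoes either task's progress; non-conflicting gradients pass through unchanged.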
Related papers
- LNN-PINN: A Unified Physics-Only Training Framework with Liquid Residual Blocks [1.6249267147413524]
LNN-PINN is a physics-informed neural network framework that incorporates a liquid residual gating architecture. Across four benchmark problems, LNN-PINN consistently reduced RMSE and MAE under identical training conditions.
arXiv Detail & Related papers (2025-08-12T13:35:46Z) - Binarizing Physics-Inspired GNNs for Combinatorial Optimization [2.5782973781085383]
We show that the performance of PI-GNNs plummets with increasing density of the problem graphs. We propose principled alternatives to the naive strategy used in PI-GNNs, building on insights from fuzzy logic and binarized neural networks. Our experiments demonstrate that the portfolio of proposed methods significantly improves the performance of PI-GNNs in increasingly dense settings.
arXiv Detail & Related papers (2025-07-18T07:11:50Z) - ProPINN: Demystifying Propagation Failures in Physics-Informed Neural Networks [71.02216400133858]
Physics-informed neural networks (PINNs) have earned high expectations in solving partial differential equations (PDEs). Previous research observed the propagation failure phenomenon of PINNs. This paper provides a formal and in-depth study of propagation failure and its root cause.
arXiv Detail & Related papers (2025-02-02T13:56:38Z) - Learn Singularly Perturbed Solutions via Homotopy Dynamics [7.890817997914349]
Training neural networks for singularly perturbed problems is challenging because certain parameters in the PDEs introduce near-singularities in the loss function. We introduce a novel method based on homotopy dynamics to effectively manipulate these parameters. Experimentally, we demonstrate that our approach significantly accelerates convergence and improves accuracy on these singularly perturbed problems.
arXiv Detail & Related papers (2025-02-01T16:26:53Z) - Optimizing the Optimizer for Physics-Informed Neural Networks and Kolmogorov-Arnold Networks [3.758814046658822]
Physics-Informed Neural Networks (PINNs) have revolutionized the computation of PDE solutions by integrating partial differential equations (PDEs) into the neural network's training process as soft constraints. Moreover, physics-informed Kolmogorov-Arnold networks (PIKANs) have also proven effective and comparable in accuracy.
arXiv Detail & Related papers (2025-01-22T21:19:42Z) - ASPINN: An asymptotic strategy for solving singularly perturbed differential equations [12.14934707131722]
We propose Asymptotic Physics-Informed Neural Networks (ASPINN), a generalization of Physics-Informed Neural Networks (PINN) and General-Kindred Physics-Informed Neural Networks (GKPINN).
ASPINN has a strong fitting ability for solving SPDEs due to the placement of exponential layers at the boundary layer.
We demonstrate the effect of ASPINN by solving diverse classes of SPDEs, which clearly shows that the ASPINN method is promising in boundary layer problems.
arXiv Detail & Related papers (2024-09-20T03:25:17Z) - General-Kindred Physics-Informed Neural Network to the Solutions of Singularly Perturbed Differential Equations [11.121415128908566]
We propose the General-Kindred Physics-Informed Neural Network (GKPINN) for solving Singular Perturbation Differential Equations (SPDEs).
This approach utilizes prior knowledge of the boundary layer from the equation and establishes a novel network to assist PINN in approximating the boundary layer.
The research findings underscore the exceptional performance of our novel approach, GKPINN, which reduces the $L_2$ error by two to four orders of magnitude compared to the established PINN methodology.
arXiv Detail & Related papers (2024-08-27T02:03:22Z) - A Gaussian Process Framework for Solving Forward and Inverse Problems Involving Nonlinear Partial Differential Equations [0.0]
We introduce kernel-weighted Corrective Residuals (CoRes) to integrate the strengths of kernel methods and deep NNs for solving nonlinear PDE systems.
CoRes consistently outperforms competing methods in solving a broad range of benchmark problems.
We believe our findings have the potential to spark a renewed interest in leveraging kernel methods for solving PDEs.
arXiv Detail & Related papers (2024-01-07T14:09:42Z) - Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural-network-like predictors. LFP decomposes a reward to individual neurons based on their respective contributions. Our method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z) - Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ an implicit stochastic gradient descent (ISGD) method to train PINNs and improve the stability of the training process.
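The implicit update solves θ_{k+1} = θ_k − η∇ℓ(θ_{k+1}) for the next iterate rather than evaluating the gradient at the current one. A minimal sketch for a per-sample squared loss, where the implicit equation has a closed form; this is a toy reduction to illustrate the stability mechanism, not the paper's PINN-specific scheme:

```python
import numpy as np

def isgd_step(theta, x, y, lr):
    """One implicit SGD step for the per-sample squared loss
    0.5 * (theta @ x - y)**2.  Solving
        theta' = theta - lr * grad(loss)(theta')
    in closed form shows the post-update residual is the current
    residual shrunk by 1 / (1 + lr * ||x||^2), so the step stays
    bounded even for large lr -- the source of ISGD's stability."""
    r = (theta @ x - y) / (1.0 + lr * (x @ x))
    return theta - lr * r * x
```

Because the residual shrinks by 1/(1 + η‖x‖²) per step, the iteration remains stable even at learning rates for which explicit SGD on the same sample would diverge.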
arXiv Detail & Related papers (2023-03-03T08:17:47Z) - Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
arXiv Detail & Related papers (2022-05-27T03:24:31Z) - Revisiting PINNs: Generative Adversarial Physics-informed Neural Networks and Point-weighting Method [70.19159220248805]
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs).
We propose the generative adversarial neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs.
Inspired from the weighting strategy of the Adaboost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs.
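The abstract gives only the Adaboost analogy for the point-weighting (PW) method. A minimal sketch of one plausible residual-based weighting of collocation points; the exponential form and the `beta` temperature are assumptions for illustration, not the paper's exact rule:

```python
import numpy as np

def point_weights(residuals, beta=1.0):
    """Adaboost-flavoured point weighting: collocation points with a
    larger PDE residual receive a larger weight in the next weighted
    loss evaluation, focusing training on hard-to-fit regions."""
    r2 = np.asarray(residuals, dtype=float) ** 2
    w = np.exp(beta * (r2 - r2.max()))  # subtract max for numerical stability
    return w / w.sum()                  # normalized weights, sum to 1
```

The normalized weights would then multiply the per-point residual terms in the PINN loss, so points the network currently fits worst dominate the next update.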
arXiv Detail & Related papers (2022-05-18T06:50:44Z) - Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
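Stein's identity turns derivatives of a Gaussian-smoothed function into expectations of plain function evaluations, which is the backpropagation-free trick this entry refers to. A 1-D NumPy sketch of the idea; the paper applies it to full network derivatives in the PDE residual, and the antithetic variance reduction here is an added illustrative detail:

```python
import numpy as np

def stein_second_derivative(f, x, sigma=0.1, n=400_000, seed=0):
    """Estimate the second derivative of the Gaussian-smoothed function
    f_sigma(x) = E[f(x + sigma*eps)], eps ~ N(0, 1), via Stein's identity:
        f_sigma''(x) = E[f(x + sigma*eps) * (eps**2 - 1)] / sigma**2.
    An antithetic / control-variate form cuts the Monte Carlo variance;
    no backpropagation (or any derivative of f) is required."""
    eps = np.random.default_rng(seed).standard_normal(n)
    fw = f(x + sigma * eps) + f(x - sigma * eps) - 2.0 * f(x)
    return np.mean(fw * (eps ** 2 - 1.0)) / (2.0 * sigma ** 2)
```

For small `sigma` the smoothed second derivative closely tracks the true one, e.g. the estimate for `np.sin` at x = 1 lands near -sin(1), up to O(sigma**2) smoothing bias and Monte Carlo noise.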
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.