LNN-PINN: A Unified Physics-Only Training Framework with Liquid Residual Blocks
- URL: http://arxiv.org/abs/2508.08935v2
- Date: Tue, 26 Aug 2025 01:32:11 GMT
- Title: LNN-PINN: A Unified Physics-Only Training Framework with Liquid Residual Blocks
- Authors: Ze Tao, Hanxuan Wang, Fujun Liu
- Abstract summary: LNN-PINN is a physics-informed neural network framework that incorporates a liquid residual gating architecture. Across four benchmark problems, LNN-PINN consistently reduced RMSE and MAE under identical training conditions.
- Score: 1.6249267147413524
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Physics-informed neural networks (PINNs) have attracted considerable attention for their ability to integrate partial differential equation priors into deep learning frameworks; however, they often exhibit limited predictive accuracy when applied to complex problems. To address this issue, we propose LNN-PINN, a physics-informed neural network framework that incorporates a liquid residual gating architecture while preserving the original physics modeling and optimization pipeline to improve predictive accuracy. The method introduces a lightweight gating mechanism solely within the hidden-layer mapping, keeping the sampling strategy, loss composition, and hyperparameter settings unchanged to ensure that improvements arise purely from architectural refinement. Across four benchmark problems, LNN-PINN consistently reduced RMSE and MAE under identical training conditions, with absolute error plots further confirming its accuracy gains. Moreover, the framework demonstrates strong adaptability and stability across varying dimensions, boundary conditions, and operator characteristics. In summary, LNN-PINN offers a concise and effective architectural enhancement for improving the predictive accuracy of physics-informed neural networks in complex scientific and engineering problems.
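The abstract specifies only that a lightweight gate is added inside the hidden-layer mapping. As a rough illustration, a liquid-style gated residual hidden block could look like the following PyTorch sketch; the class name, gate parameterization, and activations are assumptions, not the authors' definition.

```python
import torch
import torch.nn as nn

class LiquidResidualBlock(nn.Module):
    """Hypothetical liquid-style gated residual hidden block (names and the
    exact gate form are assumed; the paper describes the idea at a high level)."""

    def __init__(self, width: int):
        super().__init__()
        self.lin = nn.Linear(width, width)    # ordinary hidden mapping
        self.gate = nn.Linear(width, width)   # lightweight gate, hidden layer only

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(h))       # per-unit gate in (0, 1)
        cand = torch.tanh(self.lin(h))        # candidate hidden update
        return (1.0 - g) * h + g * cand       # gated residual mixing of old and new state
```

Because the gate touches only the hidden mapping, the sampling strategy, loss composition, and hyperparameters of a standard PINN pipeline can stay unchanged, matching the physics-only training setup the abstract describes.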
Related papers
- Scale-PINN: Learning Efficient Physics-Informed Neural Networks Through Sequential Correction [33.84065974605524]
Physics-informed neural networks (PINNs) have emerged as a promising mesh-free paradigm for solving partial differential equations.
We introduce the Sequential Correction Algorithm for Learning Efficient PINN (Scale-PINN), a learning strategy that bridges modern physics-informed learning with numerical algorithms.
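The abstract does not spell the algorithm out; one plausible reading of "sequential correction" is a stagewise loop in which each new network is fit to the PDE residual left by the frozen earlier stages. The sketch below assumes user-supplied `make_net` and `pde_residual` callables and is not the authors' implementation.

```python
import torch

def sequential_correction(make_net, pde_residual, x, stages=3, steps=2000):
    """Hypothetical stagewise correction: each stage reduces the residual
    left by the sum of the frozen earlier stages."""
    nets = []
    for _ in range(stages):
        net = make_net()
        opt = torch.optim.Adam(net.parameters(), lr=1e-3)
        for _ in range(steps):
            opt.zero_grad()
            u = sum(n(x) for n in nets) + net(x)      # earlier stages are frozen
            loss = pde_residual(u, x).pow(2).mean()   # physics-only objective
            loss.backward()
            opt.step()
        for p in net.parameters():
            p.requires_grad_(False)                    # freeze this stage
        nets.append(net)
    return nets
```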
arXiv Detail & Related papers (2026-02-23T03:38:06Z)
- Architecture-Optimization Co-Design for Physics-Informed Neural Networks Via Attentive Representations and Conflict-Resolved Gradients [5.447935819547941]
We study PINN training from a unified architecture-optimization perspective.
We propose a layer-wise dynamic attention mechanism to enhance representational flexibility.
We then reformulate PINN training as a multi-task learning problem and introduce a conflict-resolved gradient update strategy.
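The conflict-resolution rule is not detailed in the abstract; a standard instance of the idea is PCGrad-style projection, sketched below, in which each task gradient is projected off the conflicting component of the others before the update. The function name and exact rule are assumptions.

```python
import torch

def conflict_resolved_gradient(params, losses):
    """PCGrad-style conflict resolution over task losses (e.g., PDE residual,
    boundary, and initial-condition terms); the paper's rule may differ."""
    grads = []
    for loss in losses:
        g = torch.autograd.grad(loss, params, retain_graph=True)
        grads.append(torch.cat([t.reshape(-1) for t in g]))
    merged = torch.zeros_like(grads[0])
    for i, gi in enumerate(grads):
        gi = gi.clone()
        for j, gj in enumerate(grads):
            if i != j and torch.dot(gi, gj) < 0:                   # conflicting pair
                gi = gi - torch.dot(gi, gj) / gj.norm() ** 2 * gj  # project away
        merged += gi
    return merged  # caller scatters this back into the parameters' .grad fields
```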
arXiv Detail & Related papers (2026-01-19T11:32:25Z)
- Mask-PINNs: Regulating Feature Distributions in Physics-Informed Neural Networks [1.6984490081106065]
Mask-PINNs regulate internal feature distributions through a smooth, learnable mask function applied pointwise across hidden layers.
We show consistent improvements in prediction accuracy, convergence stability, and robustness, with relative L2 errors reduced by up to two orders of magnitude over baseline models.
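A minimal sketch of a pointwise learnable mask on a hidden layer, assuming a sigmoid parameterization (the paper's exact mask function may differ):

```python
import torch
import torch.nn as nn

class MaskedHiddenLayer(nn.Module):
    """Hidden layer whose features are modulated by a smooth learnable mask."""

    def __init__(self, width: int):
        super().__init__()
        self.lin = nn.Linear(width, width)
        self.mask_logits = nn.Parameter(torch.zeros(width))  # one value per feature

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.mask_logits)   # smooth mask in (0, 1)
        return mask * torch.tanh(self.lin(h))    # pointwise feature regulation
```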
arXiv Detail & Related papers (2025-05-09T15:38:52Z)
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
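The paper's conversion pipeline is not reproduced here, but its first ingredient, expressing a complex-valued convolution with real-valued layers, follows directly from complex multiplication and can be sketched as below; the subsequent rate-coded SNN conversion step is omitted.

```python
import torch
import torch.nn as nn

class ComplexAsRealConv(nn.Module):
    """Complex conv via two real convs:
    (xr + i*xi) * (wr + i*wi) = (xr*wr - xi*wi) + i*(xr*wi + xi*wr)."""

    def __init__(self, c_in: int, c_out: int, k: int):
        super().__init__()
        self.conv_r = nn.Conv2d(c_in, c_out, k, padding=k // 2)  # real weights
        self.conv_i = nn.Conv2d(c_in, c_out, k, padding=k // 2)  # imaginary weights

    def forward(self, xr: torch.Tensor, xi: torch.Tensor):
        yr = self.conv_r(xr) - self.conv_i(xi)   # real part of the output
        yi = self.conv_i(xr) + self.conv_r(xi)   # imaginary part of the output
        return yr, yi
```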
arXiv Detail & Related papers (2024-12-05T09:41:33Z)
- Improved physics-informed neural network in mitigating gradient related failures [11.356695216531328]
Physics-informed neural networks (PINNs) integrate fundamental physical principles with advanced data-driven techniques.
PINNs face persistent challenges with stiffness in gradient flow, which limits their predictive capabilities.
This paper presents an improved PINN to mitigate gradient-related failures.
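The paper's specific fix is not given in this summary; a widely used remedy for stiff gradient flow in PINNs is gradient-norm loss balancing in the style of Wang et al.'s learning-rate annealing, sketched here as one illustrative option, not necessarily this paper's method.

```python
import torch

def balance_boundary_weight(loss_pde, loss_bc, params, w_bc, alpha=0.9):
    """Rebalance the boundary loss weight from gradient statistics
    (an illustrative remedy for stiff gradient flow)."""
    g_pde = torch.autograd.grad(loss_pde, params, retain_graph=True)
    g_bc = torch.autograd.grad(loss_bc, params, retain_graph=True)
    max_pde = torch.cat([g.reshape(-1) for g in g_pde]).abs().max()
    mean_bc = torch.cat([g.reshape(-1) for g in g_bc]).abs().mean()
    w_hat = max_pde / (mean_bc + 1e-8)           # target weight for this step
    w_bc = alpha * w_bc + (1 - alpha) * w_hat    # smooth with a moving average
    return loss_pde + w_bc * loss_bc, w_bc
```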
arXiv Detail & Related papers (2024-07-28T07:58:10Z)
- Stable Weight Updating: A Key to Reliable PDE Solutions Using Deep Learning [0.0]
This paper introduces novel residual-based architectures designed to enhance stability and accuracy in physics-informed neural networks (PINNs).
The architectures augment traditional neural networks by incorporating residual connections, which facilitate smoother weight updates and improve backpropagation efficiency.
The Squared Residual Network, in particular, exhibits robust performance, achieving enhanced stability and accuracy compared to conventional neural networks.
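The exact form of the Squared Residual Network is not given in this summary; the sketch below shows a plain residual PINN block together with one guess at a squared variant, both labeled as assumptions.

```python
import torch
import torch.nn as nn

class ResidualPINNBlock(nn.Module):
    """Residual hidden block; `squared=True` is a guess at the paper's
    Squared Residual Network and may not match the authors' definition."""

    def __init__(self, width: int, squared: bool = False):
        super().__init__()
        self.lin = nn.Linear(width, width)
        self.squared = squared

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        z = torch.tanh(self.lin(h))
        return h + z * z if self.squared else h + z  # skip-connection variants
```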
arXiv Detail & Related papers (2024-07-10T05:20:43Z)
- Enhancing Reliability of Neural Networks at the Edge: Inverted Normalization with Stochastic Affine Transformations [0.22499166814992438]
We propose a method to inherently enhance the robustness and inference accuracy of BayNNs deployed in in-memory computing architectures.
Empirical results show a graceful degradation in inference accuracy, with an improvement of up to 58.11%.
arXiv Detail & Related papers (2024-01-23T00:27:31Z)
- Structure-Preserving Physics-Informed Neural Networks With Energy or Lyapunov Structure [9.571966961251347]
We propose structure-preserving PINNs to improve their performance and broaden their applications for downstream tasks.
A framework that utilizes structure-preserving PINN for robust image recognition is proposed.
Experimental results demonstrate that the proposed method improves the numerical accuracy of PINNs for partial differential equations.
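As a generic illustration of structure preservation, one can penalize drift of a known invariant (such as total energy) along the predicted trajectory; the paper's construction is more specific than this sketch, and `energy_fn` is an assumed user-supplied callable.

```python
import torch

def invariant_drift_penalty(u_t, x_t, energy_fn, e0):
    """Penalize deviation of a conserved quantity from its initial value e0."""
    e_t = energy_fn(u_t, x_t)          # invariant evaluated along the trajectory
    return (e_t - e0).pow(2).mean()    # soft structure-preservation term
```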
arXiv Detail & Related papers (2024-01-10T08:02:38Z)
- Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
Layer-wise Feedback Propagation (LFP) is a novel training principle for neural-network-like predictors.
LFP decomposes a reward to individual neurons based on their respective contributions.
The method then implements a greedy approach, reinforcing helpful parts of the network and weakening harmful ones.
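A simplified share-based decomposition in the spirit of LRP conveys the idea: each output neuron's reward is split among its inputs in proportion to their contributions. The paper's exact propagation rule differs in its details.

```python
import torch

def lfp_layer_backward(reward_out, a_in, W):
    """Split each output neuron's reward among its inputs by contribution share.
    Shapes: reward_out (B, n_out), a_in (B, n_in), W (n_out, n_in)."""
    z = a_in.unsqueeze(1) * W.unsqueeze(0)        # contributions (B, n_out, n_in)
    denom = z.sum(-1, keepdim=True) + 1e-9        # total input to each output
    share = z / denom                             # fraction of reward per input
    return (share * reward_out.unsqueeze(-1)).sum(1)  # reward on inputs (B, n_in)
```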
arXiv Detail & Related papers (2023-08-23T10:48:28Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have effectively been demonstrated in solving forward and inverse differential equation problems.
PINNs are trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
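An implicit (proximal) SGD step solves theta_{k+1} = argmin_theta L(theta) + ||theta - theta_k||^2 / (2*lr); a few inner gradient steps can approximate it, as sketched below. This is a generic ISGD sketch, not the paper's exact solver.

```python
import torch

def implicit_sgd_step(params, loss_fn, lr=1e-3, inner_steps=5):
    """Approximate one implicit SGD (proximal point) update by minimizing
    loss + proximal term with a few explicit inner steps.
    `params` is a list of tensors; `loss_fn` is a closure over them."""
    anchor = [p.detach().clone() for p in params]      # theta_k, held fixed
    inner = torch.optim.SGD(params, lr=lr)
    for _ in range(inner_steps):
        inner.zero_grad()
        prox = sum(((p - a) ** 2).sum() for p, a in zip(params, anchor)) / (2 * lr)
        (loss_fn() + prox).backward()                  # implicit objective
        inner.step()
```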
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
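The spatial half of the decomposition can be illustrated by splitting a field into staggered coarse sub-fields and interleaving the coarse predictions back, as in this sketch; the temporal staggering and the per-subtask solvers are omitted.

```python
import torch

def spatial_stagger(field, s=2):
    """Split a (B, C, H, W) field into s*s staggered coarse sub-fields."""
    return [field[..., i::s, j::s] for i in range(s) for j in range(s)]

def spatial_merge(subfields, s=2):
    """Interleave the coarse sub-fields back into the full-resolution field."""
    b, c, h, w = subfields[0].shape
    out = torch.zeros(b, c, h * s, w * s,
                      dtype=subfields[0].dtype, device=subfields[0].device)
    for idx, sub in enumerate(subfields):
        i, j = divmod(idx, s)
        out[..., i::s, j::s] = sub
    return out
```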
arXiv Detail & Related papers (2023-02-20T19:36:52Z)
- Auto-PINN: Understanding and Optimizing Physics-Informed Neural Architecture [77.59766598165551]
Physics-informed neural networks (PINNs) are revolutionizing science and engineering practice by bringing the power of deep learning to bear on scientific computation.
Here, we propose Auto-PINN, which applies Neural Architecture Search (NAS) techniques to PINN design.
A comprehensive set of pre-experiments using standard PDE benchmarks allows us to probe the structure-performance relationship in PINNs.
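Auto-PINN's search procedure is structured; as a minimal stand-in, a random search over a small PINN design space shows the shape of the problem. `train_and_eval_pinn` is an assumed callable returning a validation error for a given architecture.

```python
import random

# Hypothetical search space; Auto-PINN's actual space and procedure differ.
SPACE = {"width": [16, 32, 64, 128],
         "depth": [3, 5, 7, 9],
         "activation": ["tanh", "sin", "swish"]}

def random_search(train_and_eval_pinn, trials=20, seed=0):
    """Baseline random search over PINN architectures (illustrative only)."""
    rng = random.Random(seed)
    best_err, best_arch = float("inf"), None
    for _ in range(trials):
        arch = {k: rng.choice(v) for k, v in SPACE.items()}
        err = train_and_eval_pinn(**arch)   # lower PDE error is better
        if err < best_err:
            best_err, best_arch = err, arch
    return best_err, best_arch
```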
arXiv Detail & Related papers (2022-05-27T03:24:31Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)