A decomposition-based robust training of physics-informed neural networks for nearly incompressible linear elasticity
- URL: http://arxiv.org/abs/2505.21994v2
- Date: Thu, 23 Oct 2025 04:16:58 GMT
- Title: A decomposition-based robust training of physics-informed neural networks for nearly incompressible linear elasticity
- Authors: Josef Dick, Seungchan Ko, Quoc Thong Le Gia, Kassem Mustapha, Sanghyeon Park
- Abstract summary: We show that low-order conforming finite element methods for nearly incompressible elasticity equations deteriorate as the Lamé coefficient $\lambda\to\infty$. This phenomenon, known as locking or non-robustness, remains not fully understood despite extensive investigation. We propose a robust decomposition-based PINN framework that reformulates the elasticity equations into balanced subsystems.
- Score: 1.1744028458220428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to divergence instability, the accuracy of low-order conforming finite element methods for nearly incompressible elasticity equations deteriorates as the Lam\'e coefficient $\lambda\to\infty$, or equivalently as the Poisson ratio $\nu\to1/2$. This phenomenon, known as locking or non-robustness, remains not fully understood despite extensive investigation. In this work, we illustrate first that an analogous instability arises when applying the popular Physics-Informed Neural Networks (PINNs) to nearly incompressible elasticity problems, leading to significant loss of accuracy and convergence difficulties. Then, to overcome this challenge, we propose a robust decomposition-based PINN framework that reformulates the elasticity equations into balanced subsystems, thereby eliminating the ill-conditioning that causes locking. Our approach simultaneously solves the forward and inverse problems to recover both the decomposed field variables and the associated external conditions. We will also perform a convergence analysis to further enhance the reliability of the proposed approach. Moreover, through various numerical experiments, including constant, variable and parametric Lam\'e coefficients, we illustrate the efficiency of the proposed methodology.
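The ill-conditioning the abstract attributes to locking can be illustrated numerically. The following is a minimal sketch (not the paper's decomposition method, and the manufactured displacement field is an illustrative assumption): for a fixed 2D field, the λ-scaled volumetric term of the Navier–Lamé residual is compared against the μ-scaled shear term using finite differences, showing why a monolithic PINN residual loss becomes unbalanced as λ → ∞.

```python
import numpy as np

# Illustrative check: for a manufactured 2D displacement field
# u = (sin(pi x) sin(pi y), x y (1-x)(1-y)), compare the two residual terms
#   mu * div(grad u + grad u^T)   vs.   lam * grad(div u)
# as the Lame coefficient lam grows. A monolithic residual loss sums both,
# so the lam-scaled term dominates as lam -> infinity.

n = 64
h = 1.0 / n
x = np.linspace(0.0, 1.0, n + 1)
X, Y = np.meshgrid(x, x, indexing="ij")
u1 = np.sin(np.pi * X) * np.sin(np.pi * Y)
u2 = X * Y * (1 - X) * (1 - Y)

def term_magnitudes(mu, lam):
    # first components of the two stress-divergence contributions
    u1x = np.gradient(u1, h, axis=0)
    u1y = np.gradient(u1, h, axis=1)
    u2x = np.gradient(u2, h, axis=0)
    u2y = np.gradient(u2, h, axis=1)
    div_u = u1x + u2y
    # mu * [div(grad u + grad u^T)]_1 = mu * (d_x(2 u1x) + d_y(u1y + u2x))
    mu_term = mu * (2 * np.gradient(u1x, h, axis=0)
                    + np.gradient(u1y + u2x, h, axis=1))
    # lam * [grad(div u)]_1
    lam_term = lam * np.gradient(div_u, h, axis=0)
    return np.abs(mu_term).max(), np.abs(lam_term).max()

mu = 1.0
m1, l1 = term_magnitudes(mu, 1.0)
m2, l2 = term_magnitudes(mu, 1e6)
print(l1 / m1, l2 / m2)  # the imbalance ratio grows linearly with lam
```

Since the displacement field is not divergence-free, the λ term scales linearly with λ while the μ term is unchanged, which is the imbalance the proposed reformulation into balanced subsystems is designed to remove.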
Related papers
- Variation-Bounded Loss for Noise-Tolerant Learning [105.20373602308284]
We introduce the Variation Ratio as a novel property related to the robustness of loss functions. We propose a new family of robust loss functions, termed Variation-Bounded Loss (VBL), which is characterized by a bounded variation ratio.
arXiv Detail & Related papers (2025-11-15T10:15:29Z)
- The Vanishing Gradient Problem for Stiff Neural Differential Equations [3.941173292703699]
In stiff systems, it has been observed that sensitivities to parameters controlling fast-decaying modes become vanishingly small during training. We show that this vanishing gradient phenomenon is not an artifact of any particular method, but a universal feature of all A-stable and L-stable stiff numerical integration schemes.
arXiv Detail & Related papers (2025-08-02T23:44:14Z)
- Non-Asymptotic Stability and Consistency Guarantees for Physics-Informed Neural Networks via Coercive Operator Analysis [0.0]
We present a unified theoretical framework for analyzing the stability and consistency of Physics-Informed Neural Networks (PINNs). PINNs approximate solutions to partial differential equations (PDEs) by minimizing residual losses over sampled collocation and boundary points. We formalize both operator-level and variational notions of consistency, proving that residual minimization in Sobolev norms leads to convergence in energy and uniform norms under mild regularity.
arXiv Detail & Related papers (2025-06-16T14:41:15Z)
- Fully data-driven inverse hyperelasticity with hyper-network neural ODE fields [0.0]
We propose a new framework for identifying mechanical properties of heterogeneous materials without a closed-form equation. A physics-based data-driven method built upon neural ordinary differential equations (NODEs) is employed to discover equations. The proposed approach is robust and general in identifying the mechanical properties of heterogeneous materials with very few assumptions.
arXiv Detail & Related papers (2025-06-09T18:50:14Z)
- Stiff Transfer Learning for Physics-Informed Neural Networks [1.5361702135159845]
We propose a novel approach, stiff transfer learning for physics-informed neural networks (STL-PINNs), to tackle stiff ordinary differential equations (ODEs) and partial differential equations (PDEs). Our methodology involves training a Multi-Head-PINN in a low-stiff regime and obtaining the final solution in a high-stiff regime by transfer learning. This addresses the failure modes related to stiffness in PINNs while maintaining computational efficiency by computing "one-shot" solutions.
arXiv Detail & Related papers (2025-01-28T20:27:38Z)
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled stochastic differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
arXiv Detail & Related papers (2024-11-04T11:09:58Z)
- A Variational Bayesian Inference Theory of Elasticity and Its Mixed Probabilistic Finite Element Method for Inverse Deformation Solutions in Any Dimension [3.9900555221077396]
The elastic strain energy is used as a prior in a Bayesian inference network.
The proposed method is able to inversely predict continuum deformation mappings with strong discontinuity or fracture.
arXiv Detail & Related papers (2024-10-10T04:35:18Z)
- Preconditioned FEM-based Neural Networks for Solving Incompressible Fluid Flows and Related Inverse Problems [41.94295877935867]
The numerical simulation and optimization of technical systems described by partial differential equations is expensive. A comparatively new approach in this context is to combine the good approximation properties of neural networks with the classical finite element method. In this paper, we extend this approach to saddle-point and non-linear fluid dynamics problems.
arXiv Detail & Related papers (2024-09-06T07:17:01Z)
- Solving Forward and Inverse Problems of Contact Mechanics using Physics-Informed Neural Networks [0.0]
We deploy PINNs in a mixed-variable formulation enhanced by output transformation to enforce hard and soft constraints.
We show that PINNs can serve as a pure partial differential equation (PDE) solver, as a data-enhanced forward model, and as a fast-to-evaluate surrogate model.
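The "output transformation" used above to enforce hard constraints can be sketched in one dimension. This is an illustrative assumption, not the paper's implementation: the boundary values and the placeholder network `N` are made up, and the point is only that the ansatz satisfies the Dirichlet data exactly for any network output.

```python
import math

# Hard-constraint enforcement via output transformation: instead of
# penalizing u(0)=a, u(1)=b in the loss, wrap the network output N(x)
# so the ansatz satisfies the boundary conditions by construction.

a, b = 2.0, -1.0  # assumed Dirichlet values at x=0 and x=1

def N(x):
    # placeholder for a trained network output (illustrative only)
    return math.sin(3 * x)

def u(x):
    # g(x) interpolates the boundary data; x*(1-x) vanishes at x=0 and x=1,
    # so u(0)=a and u(1)=b hold for ANY network N.
    g = a * (1 - x) + b * x
    return g + x * (1 - x) * N(x)

print(u(0.0), u(1.0))  # prints 2.0 -1.0
```

The residual loss is then minimized over `u`, with no boundary penalty term competing against the PDE term.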
arXiv Detail & Related papers (2023-08-24T11:31:24Z)
- Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence [65.63201894457404]
We propose a novel non-parametric learning paradigm for the identification of drift and diffusion coefficients of non-linear stochastic differential equations. The key idea essentially consists of fitting an RKHS-based approximation of the corresponding Fokker-Planck equation to such observations.
arXiv Detail & Related papers (2023-05-24T20:43:47Z)
- A Robustness Analysis of Blind Source Separation [91.3755431537592]
Blind source separation (BSS) aims to recover an unobserved signal from its mixture $X=f(S)$ under the condition that the transformation $f$ is invertible but unknown.
We present a general framework for analysing such violations and quantifying their impact on the blind recovery of $S$ from $X$.
We show that a generic BSS-solution in response to general deviations from its defining structural assumptions can be profitably analysed in the form of explicit continuity guarantees.
arXiv Detail & Related papers (2023-03-17T16:30:51Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities [137.6408511310322]
We consider the problem of finite-sum cocoercive variational inequalities.
For strongly monotone problems it is possible to achieve linear convergence to a solution using this method.
arXiv Detail & Related papers (2022-10-12T08:04:48Z)
- Decimation technique for open quantum systems: a case study with driven-dissipative bosonic chains [62.997667081978825]
Unavoidable coupling of quantum systems to external degrees of freedom leads to dissipative (non-unitary) dynamics.
We introduce a method to deal with these systems based on the calculation of (dissipative) lattice Green's functions.
We illustrate the power of this method with several examples of driven-dissipative bosonic chains of increasing complexity.
arXiv Detail & Related papers (2022-02-15T19:00:09Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- On Convergence of Training Loss Without Reaching Stationary Points [62.41370821014218]
We show that neural network weight variables do not converge to stationary points where the gradient of the loss function vanishes.
We propose a new perspective based on the ergodic theory of dynamical systems.
arXiv Detail & Related papers (2021-10-12T18:12:23Z)
- Towards Understanding Generalization via Decomposing Excess Risk Dynamics [13.4379473119565]
We analyze the generalization dynamics to derive algorithm-dependent bounds, e.g., stability.
Inspired by the observation that neural networks show a slow convergence rate when fitting noise, we propose decomposing the excess risk dynamics.
Under the decomposition framework, the new bound accords better with the theoretical and empirical evidence compared to the stability-based bound and uniform convergence bound.
arXiv Detail & Related papers (2021-06-11T03:42:45Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.