Locking-Free Training of Physics-Informed Neural Network for Solving Nearly Incompressible Elasticity Equations
- URL: http://arxiv.org/abs/2505.21994v1
- Date: Wed, 28 May 2025 05:52:03 GMT
- Title: Locking-Free Training of Physics-Informed Neural Network for Solving Nearly Incompressible Elasticity Equations
- Authors: Josef Dick, Seungchan Ko, Kassem Mustapha, Sanghyeon Park
- Abstract summary: Due to divergence instability, the accuracy of low-order conforming finite element methods for nearly incompressible homogeneous elasticity equations deteriorates. We propose a robust method based on a fundamentally different, machine-learning-driven approach.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to divergence instability, the accuracy of low-order conforming finite element methods for nearly incompressible homogeneous elasticity equations deteriorates as the Lamé coefficient $\lambda\to\infty$, or equivalently as the Poisson ratio $\nu\to1/2$. This phenomenon, known as locking or non-robustness, remains not fully understood despite extensive investigation. In this paper, we propose a robust method based on a fundamentally different, machine-learning-driven approach. Leveraging recently developed Physics-Informed Neural Networks (PINNs), we address the numerical solution of linear elasticity equations governing nearly incompressible materials. The core idea of our method is to appropriately decompose the given equations to alleviate the extreme imbalance in the coefficients, while simultaneously solving both the forward and inverse problems to recover the solutions of the decomposed systems as well as the associated external conditions. Through various numerical experiments, including constant, variable and parametric Lamé coefficients, we illustrate the efficiency of the proposed methodology.
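The abstract does not spell out the decomposition here, so the sketch below should be read only as a plausible stand-in: a common way to tame the imbalance as $\lambda\to\infty$ is to introduce a pressure-like variable $p=\lambda\,\nabla\cdot u$ and solve the balanced pair $-\mu\Delta u-\mu\nabla(\nabla\cdot u)-\nabla p=f$ and $\nabla\cdot u-p/\lambda=0$ with a two-output PINN. The PyTorch sketch below illustrates that idea; the network size, collocation sampling, and loss weighting are assumptions, and boundary terms as well as the inverse-problem component mentioned in the abstract are omitted.

```python
# Hedged sketch (not the authors' exact scheme): a two-field PINN for nearly
# incompressible linear elasticity using the displacement-pressure split
#   -mu*lap(u) - mu*grad(div u) - grad(p) = f,   div(u) - p/lambda = 0,
# where p = lambda * div(u). Boundary and inverse-problem terms are omitted.
import torch
import torch.nn as nn

mu, lam = 1.0, 1.0e4  # Lame coefficients; large lam = nearly incompressible

class TwoFieldPINN(nn.Module):
    """Maps collocation points (x, y) to (u1, u2, p)."""
    def __init__(self, width=64, depth=4):
        super().__init__()
        layers, d_in = [], 2
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.Tanh()]
            d_in = width
        layers.append(nn.Linear(d_in, 3))
        self.net = nn.Sequential(*layers)

    def forward(self, xy):
        return self.net(xy)

def grad(scalar, xy):
    """Gradient of a per-point scalar field with respect to the inputs."""
    return torch.autograd.grad(scalar, xy, torch.ones_like(scalar), create_graph=True)[0]

def pde_loss(model, xy, f):
    xy = xy.requires_grad_(True)
    u1, u2, p = model(xy).unbind(dim=1)
    g1, g2, gp = grad(u1, xy), grad(u2, xy), grad(p, xy)
    div_u = g1[:, 0] + g2[:, 1]
    u1_xx, u1_yy = grad(g1[:, 0], xy)[:, 0], grad(g1[:, 1], xy)[:, 1]
    u2_xx, u2_yy = grad(g2[:, 0], xy)[:, 0], grad(g2[:, 1], xy)[:, 1]
    div_x, div_y = grad(div_u, xy)[:, 0], grad(div_u, xy)[:, 1]
    r1 = -mu * (u1_xx + u1_yy) - mu * div_x - gp[:, 0] - f[:, 0]  # momentum, x
    r2 = -mu * (u2_xx + u2_yy) - mu * div_y - gp[:, 1] - f[:, 1]  # momentum, y
    r3 = div_u - p / lam            # constraint stays O(1) as lam -> infinity
    return (r1**2 + r2**2).mean() + (r3**2).mean()

model = TwoFieldPINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xy = torch.rand(1024, 2)   # interior collocation points on the unit square
f = torch.zeros(1024, 2)   # placeholder body force
for step in range(2000):
    opt.zero_grad()
    loss = pde_loss(model, xy, f)   # + boundary-condition loss in practice
    loss.backward()
    opt.step()
```

In this reformulation the constraint residual remains $O(1)$ even for very large $\lambda$, which is the kind of coefficient balance the abstract's decomposition aims for.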
Related papers
- The Vanishing Gradient Problem for Stiff Neural Differential Equations [3.941173292703699]
In stiff systems, it has been observed that sensitivities to parameters controlling fast-decaying modes become vanishingly small during training. We show that this vanishing gradient phenomenon is not an artifact of any particular method, but a universal feature of all A-stable and L-stable stiff numerical integration schemes.
arXiv Detail & Related papers (2025-08-02T23:44:14Z) - Fully data-driven inverse hyperelasticity with hyper-network neural ODE fields [0.0]
We propose a new framework for identifying mechanical properties of heterogeneous materials without a closed-form equation. A physics-based, data-driven method built upon neural ordinary differential equations (NODEs) is employed to discover equations. The proposed approach is robust and general in identifying the mechanical properties of heterogeneous materials with very few assumptions.
arXiv Detail & Related papers (2025-06-09T18:50:14Z) - Stiff Transfer Learning for Physics-Informed Neural Networks [1.5361702135159845]
We propose a novel approach, stiff transfer learning for physics-informed neural networks (STL-PINNs), to tackle stiff ordinary differential equations (ODEs) and partial differential equations (PDEs). Our methodology involves training a Multi-Head-PINN in a low-stiff regime, and obtaining the final solution in a high-stiff regime by transfer learning. This addresses the failure modes related to stiffness in PINNs while maintaining computational efficiency by computing "one-shot" solutions.
arXiv Detail & Related papers (2025-01-28T20:27:38Z) - Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for $L^2$, $L^\infty$, and risk metrics, with learning rates adaptive to coefficients' regularity.
Our method is available as an open-source Python library.
arXiv Detail & Related papers (2024-11-04T11:09:58Z) - A Variational Bayesian Inference Theory of Elasticity and Its Mixed Probabilistic Finite Element Method for Inverse Deformation Solutions in Any Dimension [3.9900555221077396]
The elastic strain energy is used as a prior in a Bayesian inference network.
The proposed method is able to inversely predict continuum deformation mappings with strong discontinuity or fracture.
arXiv Detail & Related papers (2024-10-10T04:35:18Z) - Preconditioned FEM-based Neural Networks for Solving Incompressible Fluid Flows and Related Inverse Problems [41.94295877935867]
Numerical simulation and optimization of technical systems described by partial differential equations are expensive. A comparatively new approach in this context is to combine the good approximation properties of neural networks with the classical finite element method. In this paper, we extend this approach to saddle-point problems and non-linear fluid dynamics problems.
arXiv Detail & Related papers (2024-09-06T07:17:01Z) - Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence [65.63201894457404]
We propose a novel non-parametric learning paradigm for the identification of drift and diffusion coefficients of non-linear stochastic differential equations. The key idea essentially consists of fitting an RKHS-based approximation of the corresponding Fokker-Planck equation to the observed data.
arXiv Detail & Related papers (2023-05-24T20:43:47Z) - Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z) - SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities [137.6408511310322]
We consider the problem of finite-sum cocoercive variational inequalities.
For strongly monotone problems it is possible to achieve linear convergence to a solution using this method.
arXiv Detail & Related papers (2022-10-12T08:04:48Z) - Decimation technique for open quantum systems: a case study with driven-dissipative bosonic chains [62.997667081978825]
Unavoidable coupling of quantum systems to external degrees of freedom leads to dissipative (non-unitary) dynamics.
We introduce a method to deal with these systems based on the calculation of the (dissipative) lattice Green's function.
We illustrate the power of this method with several examples of driven-dissipative bosonic chains of increasing complexity.
arXiv Detail & Related papers (2022-02-15T19:00:09Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
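To make the message-passing idea in the last entry concrete, here is a minimal sketch of one learned update on a 1D periodic grid, trained to imitate a single explicit finite-difference heat step (echoing the claim that such solvers representationally contain finite differences). The message/update networks, features, and one-step target below are illustrative assumptions, not the cited paper's architecture.

```python
# Hedged sketch: one learned message-passing update on a 1D periodic grid,
# trained to imitate an explicit finite-difference step of the heat equation.
# Feature choices and the one-step target are illustrative assumptions.
import math
import torch
import torch.nn as nn

class MessagePassingStep(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        # message MLP: (u_i, u_j, relative position) -> message vector
        self.msg = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, hidden))
        # update MLP: (u_i, aggregated messages) -> correction to u_i
        self.upd = nn.Sequential(nn.Linear(1 + hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, u, dx):
        left, right = torch.roll(u, 1), torch.roll(u, -1)    # periodic neighbours
        def edge(u_i, u_j, offset):
            feats = torch.stack([u_i, u_j, torch.full_like(u_i, offset)], dim=-1)
            return self.msg(feats)
        m = edge(u, left, -dx) + edge(u, right, dx)           # sum aggregation
        du = self.upd(torch.cat([u.unsqueeze(-1), m], dim=-1)).squeeze(-1)
        return u + du                                         # residual update

# Train the step to reproduce u + nu*dt/dx^2 * (u_left - 2u + u_right),
# i.e. the classical second-order finite-difference stencil for heat flow.
torch.manual_seed(0)
N, dx, dt, nu = 64, 1.0 / 64, 1e-4, 0.1
model = MessagePassingStep()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.arange(N) * dx
for step in range(500):
    k = torch.randint(1, 4, (1,)).item()                     # random wavenumber
    u = torch.sin(2 * math.pi * k * x)
    target = u + nu * dt / dx**2 * (torch.roll(u, 1) - 2 * u + torch.roll(u, -1))
    loss = ((model(u, dx) - target) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```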