NN-EUCLID: deep-learning hyperelasticity without stress data
- URL: http://arxiv.org/abs/2205.06664v1
- Date: Wed, 4 May 2022 13:54:54 GMT
- Title: NN-EUCLID: deep-learning hyperelasticity without stress data
- Authors: Prakash Thakolkaran, Akshay Joshi, Yiwen Zheng, Moritz Flaschel, Laura
De Lorenzis and Siddhant Kumar
- Abstract summary: We propose a new approach for unsupervised learning of hyperelastic laws with physics-consistent deep neural networks.
In contrast to supervised learning, which assumes the availability of stress-strain pairs, the approach only uses realistically measurable full-field displacement and global reaction force data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new approach for unsupervised learning of hyperelastic
constitutive laws with physics-consistent deep neural networks. In contrast to
supervised learning, which assumes the availability of stress-strain pairs, the
approach only uses realistically measurable full-field displacement and global
reaction force data, thus it lies within the scope of our recent framework for
Efficient Unsupervised Constitutive Law Identification and Discovery (EUCLID)
and we denote it as NN-EUCLID. The absence of stress labels is compensated for
by leveraging a physics-motivated loss function based on the conservation of
linear momentum to guide the learning process. The constitutive model is based
on input-convex neural networks, which are capable of learning a function that
is convex with respect to its inputs. By employing a specially designed neural
network architecture, multiple physical and thermodynamic constraints for
hyperelastic constitutive laws, such as material frame indifference,
(poly-)convexity, and stress-free reference configuration are automatically
satisfied. We demonstrate the ability of the approach to accurately learn
several hidden isotropic and anisotropic hyperelastic constitutive laws -
including e.g., Mooney-Rivlin, Arruda-Boyce, Ogden, and Holzapfel models -
without using stress data. For anisotropic hyperelasticity, the unknown
anisotropic fiber directions are automatically discovered jointly with the
constitutive model. The neural network-based constitutive models show good
generalization capability beyond the strain states observed during training and
are readily deployable in a general finite element framework for simulating
complex mechanical boundary value problems with good accuracy.
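The abstract's key architectural ingredient is the input-convex neural network (ICNN): convexity of the output in the inputs is guaranteed by constraining hidden-to-hidden weights to be non-negative and using convex, non-decreasing activations. The following is a minimal illustrative sketch of that construction, not the authors' actual NN-EUCLID architecture; all class and parameter names here are hypothetical.

```python
import numpy as np

def softplus(x):
    # Convex, non-decreasing activation: composing it with a convex
    # function (via non-negative weights) preserves convexity.
    return np.logaddexp(0.0, x)

class ICNN:
    """Minimal input-convex network sketch (hypothetical, for illustration).

    Convexity of the scalar output in the input x follows because:
    - hidden-to-hidden weights Wz are constrained non-negative,
    - activations are convex and non-decreasing,
    - direct "passthrough" weights Wx from the input are unconstrained.
    """
    def __init__(self, dim_in, widths, seed=0):
        rng = np.random.default_rng(seed)
        self.Wz, self.Wx, self.b = [], [], []
        prev = 0
        for w in widths + [1]:  # final layer outputs a scalar
            # No Wz for the first layer; afterwards force non-negativity.
            self.Wz.append(np.abs(rng.normal(size=(w, prev))) if prev else None)
            self.Wx.append(rng.normal(size=(w, dim_in)))
            self.b.append(rng.normal(size=w))
            prev = w

    def __call__(self, x):
        z = None
        for Wz, Wx, b in zip(self.Wz, self.Wx, self.b):
            pre = Wx @ x + b + (Wz @ z if z is not None else 0.0)
            z = softplus(pre)
        return z[0]
```

A quick numerical check of convexity is the midpoint inequality: for any inputs `a`, `b`, an ICNN satisfies `f((a+b)/2) <= (f(a)+f(b))/2`.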
Related papers
- Physics-Informed Neural Networks with Hard Linear Equality Constraints [9.101849365688905]
This work proposes a novel physics-informed neural network, KKT-hPINN, which rigorously guarantees hard linear equality constraints.
Experiments on Aspen models of a stirred-tank reactor unit, an extractive distillation subsystem, and a chemical plant demonstrate that this model can further enhance the prediction accuracy.
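A standard way to guarantee hard linear equality constraints on network predictions, which conveys the idea behind architectures like KKT-hPINN (whose exact construction differs), is to orthogonally project the raw output onto the affine set {y : Ay = b}. A sketch, with all names illustrative:

```python
import numpy as np

def project_to_constraints(y_raw, A, b):
    """Orthogonally project a raw network output onto {y : A y = b}.

    Illustrative sketch of hard linear equality enforcement; the cited
    paper's architecture embeds this differently. A is assumed to have
    full row rank so that A A^T is invertible.
    """
    # Correction term: A^T (A A^T)^{-1} (A y_raw - b)
    residual = A @ y_raw - b
    correction = A.T @ np.linalg.solve(A @ A.T, residual)
    return y_raw - correction
```

By construction the projected output satisfies the constraint exactly, regardless of how inaccurate the raw prediction is.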
arXiv Detail & Related papers (2024-02-11T17:40:26Z) - Extreme sparsification of physics-augmented neural networks for
interpretable model discovery in mechanics [0.0]
We propose to train regularized physics-augmented neural network-based models utilizing a smoothed version of $L_0$-regularization.
We show that the method can reliably obtain interpretable and trustworthy models for compressible and incompressible hyperelasticity, yield functions, and hardening models for elastoplasticity.
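The $L_0$ "norm" (the count of nonzero parameters) is non-differentiable, so sparsification methods replace it with a smooth surrogate. One common choice, shown below purely for illustration (the cited paper's exact smoothing may differ), is a rational function that approaches an indicator of nonzero entries as the smoothing parameter shrinks:

```python
import numpy as np

def smoothed_l0(theta, beta=1e-2):
    """Differentiable surrogate for the L0 'norm':
    sum_i theta_i^2 / (theta_i^2 + beta).

    As beta -> 0 each term approaches 1 for nonzero theta_i and 0
    otherwise, so the sum approximates the number of nonzeros.
    (Illustrative choice; not necessarily the paper's smoothing.)
    """
    t2 = theta ** 2
    return float(np.sum(t2 / (t2 + beta)))
```

Adding such a penalty to the training loss drives most parameters toward zero, yielding the sparse, interpretable models the summary describes.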
arXiv Detail & Related papers (2023-10-05T16:28:58Z) - Learning Neural Constitutive Laws From Motion Observations for
Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z) - NeuralStagger: Accelerating Physics-constrained Neural PDE Solver with
Spatial-temporal Decomposition [67.46012350241969]
This paper proposes a general acceleration methodology called NeuralStagger.
It decomposes the original learning task into several coarser-resolution subtasks.
We demonstrate the successful application of NeuralStagger on 2D and 3D fluid dynamics simulations.
arXiv Detail & Related papers (2023-02-20T19:36:52Z) - ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - A new family of Constitutive Artificial Neural Networks towards
automated model discovery [0.0]
Neural networks are powerful function approximators that can learn functional relations from large data sets without any knowledge of the underlying physics.
We show that Constitutive Artificial Neural Networks enable a potential paradigm shift from user-defined model selection to automated model discovery.
arXiv Detail & Related papers (2022-09-15T18:33:37Z) - Bayesian Physics-Informed Neural Networks for real-world nonlinear
dynamical systems [0.0]
We integrate data, physics, and uncertainties by combining neural networks, physics-informed modeling, and Bayesian inference.
Our study reveals the inherent advantages and disadvantages of Neural Networks, Bayesian Inference, and a combination of both.
We anticipate that the underlying concepts and trends generalize to more complex disease conditions.
arXiv Detail & Related papers (2022-05-12T19:04:31Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressivity afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Physics informed neural networks for continuum micromechanics [68.8204255655161]
Recently, physics informed neural networks have successfully been applied to a broad variety of problems in applied mathematics and engineering.
Due to their global approximation, physics-informed neural networks have difficulty resolving localized effects and strongly nonlinear solutions via optimization.
It is shown that the domain decomposition approach is able to accurately resolve nonlinear stress, displacement, and energy fields in heterogeneous microstructures obtained from real-world $\mu$CT-scans.
arXiv Detail & Related papers (2021-10-14T14:05:19Z) - Gradient Starvation: A Learning Proclivity in Neural Networks [97.02382916372594]
Gradient Starvation arises when cross-entropy loss is minimized by capturing only a subset of features relevant for the task.
This work provides a theoretical explanation for the emergence of such feature imbalance in neural networks.
arXiv Detail & Related papers (2020-11-18T18:52:08Z) - Tensor network approaches for learning non-linear dynamical laws [0.0]
We show that various physical constraints can be captured via tensor network based parameterizations for the governing equation.
We provide a physics-informed approach to recovering structured dynamical laws from data, which adaptively balances the need for expressivity and scalability.
arXiv Detail & Related papers (2020-02-27T19:02:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.