Thermodynamics-based Artificial Neural Networks for constitutive
modeling
- URL: http://arxiv.org/abs/2005.12183v1
- Date: Mon, 25 May 2020 15:56:34 GMT
- Title: Thermodynamics-based Artificial Neural Networks for constitutive
modeling
- Authors: Filippo Masi, Ioannis Stefanou, Paolo Vannucci, Victor Maffi-Berthier
- Abstract summary: We propose a new class of data-driven, physics-based, neural networks for modeling of strain rate independent processes at the material point level.
The two basic principles of thermodynamics are encoded in the network's architecture by taking advantage of automatic differentiation.
We demonstrate the wide applicability of TANNs for modeling elasto-plastic materials, with strain hardening and strain softening.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine Learning methods and, in particular, Artificial Neural Networks
(ANNs) have demonstrated promising capabilities in material constitutive
modeling. One of the main drawbacks of such approaches is the lack of a
rigorous framework based on the laws of physics. This may render the
predictions of a trained network physically inconsistent, which can even be
dangerous in real applications.
Here we propose a new class of data-driven, physics-based, neural networks
for constitutive modeling of strain rate independent processes at the material
point level, which we define as Thermodynamics-based Artificial Neural Networks
(TANNs). The two basic principles of thermodynamics are encoded in the
network's architecture by taking advantage of automatic differentiation to
compute the numerical derivatives of a network with respect to its inputs. In
this way, derivatives of the free-energy, the dissipation rate and their
relation with the stress and internal state variables are hardwired in the
network. Consequently, our network does not have to identify the underlying
pattern of thermodynamic laws during training, reducing the need for large
data-sets. Moreover, the training is more efficient and robust, and the
predictions more accurate. Finally, and most importantly, the predictions
remain thermodynamically consistent, even for unseen data. Based on these features,
TANNs are a starting point for data-driven, physics-based constitutive modeling
with neural networks.
We demonstrate the wide applicability of TANNs for modeling elasto-plastic
materials, with strain hardening and strain softening. Detailed comparisons
show that the predictions of TANNs outperform those of standard ANNs. TANNs'
architecture is general, enabling applications to materials with different or
more complex behavior, without any modification.
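The mechanism described in the abstract, obtaining the stress as the automatic derivative of a learned free energy, can be sketched as follows. This is an illustrative toy example, not the authors' implementation: the network shape, parameter names, and strain dimension are all assumptions.

```python
# Sketch of the TANN core idea: a network predicts a scalar free energy
# psi(strain), and the stress is computed by automatic differentiation,
# sigma = d psi / d strain, so the stress-energy relation is hardwired
# in the architecture rather than learned from data.
import jax
import jax.numpy as jnp

def free_energy(params, strain):
    """Tiny MLP mapping a strain vector to a scalar free energy (hypothetical)."""
    W1, b1, W2, b2 = params
    h = jnp.tanh(W1 @ strain + b1)
    return (W2 @ h + b2)[0]

# Stress as the exact derivative of the free energy w.r.t. the strain input:
stress_fn = jax.grad(free_energy, argnums=1)

key = jax.random.PRNGKey(0)
k1, k2 = jax.random.split(key)
params = (jax.random.normal(k1, (8, 3)), jnp.zeros(8),
          jax.random.normal(k2, (1, 8)), jnp.zeros(1))
strain = jnp.array([0.01, 0.0, -0.005])
sigma = stress_fn(params, strain)  # stress vector, consistent with psi by construction
```

Because the stress is the exact gradient of the energy, the network cannot produce a stress-energy pair that violates this relation, whatever the training data.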
Related papers
- Knowledge-Based Convolutional Neural Network for the Simulation and Prediction of Two-Phase Darcy Flows [3.5707423185282656]
Physics-informed neural networks (PINNs) have gained significant prominence as a powerful tool in the field of scientific computing and simulations.
We propose to combine the power of neural networks with the dynamics imposed by the discretized differential equations.
By discretizing the governing equations, the PINN learns to account for the discontinuities and accurately capture the underlying relationships between inputs and outputs.
arXiv Detail & Related papers (2024-04-04T06:56:32Z) - Mechanistic Neural Networks for Scientific Machine Learning [58.99592521721158]
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
arXiv Detail & Related papers (2024-02-20T15:23:24Z) - Physics-Informed Neural Networks with Hard Linear Equality Constraints [9.101849365688905]
This work proposes a novel physics-informed neural network, KKT-hPINN, which rigorously guarantees hard linear equality constraints.
Experiments on Aspen models of a stirred-tank reactor unit, an extractive distillation subsystem, and a chemical plant demonstrate that this model can further enhance the prediction accuracy.
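The idea of a hard linear equality constraint can be illustrated with an orthogonal projection appended to a network's raw output. This is a minimal sketch of that general technique, not the paper's KKT-hPINN code; the constraint matrix and output values are made up.

```python
# Enforce A @ x = b exactly by projecting a network's raw output onto
# the constraint set: x = x_raw - A^T (A A^T)^{-1} (A x_raw - b).
import numpy as np

def project_onto_constraint(x_raw, A, b):
    """Orthogonal projection of x_raw onto {x : A x = b}."""
    correction = A.T @ np.linalg.solve(A @ A.T, A @ x_raw - b)
    return x_raw - correction

A = np.array([[1.0, 1.0, 1.0]])    # e.g. a mass-balance constraint (assumed)
b = np.array([1.0])
x_raw = np.array([0.2, 0.5, 0.6])  # hypothetical raw network output
x = project_onto_constraint(x_raw, A, b)
# A @ x equals b to machine precision, regardless of x_raw
```

Since the projection is an affine map, it can be attached as a final layer and differentiated through during training.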
arXiv Detail & Related papers (2024-02-11T17:40:26Z) - Learning Neural Constitutive Laws From Motion Observations for
Generalizable PDE Dynamics [97.38308257547186]
Many NN approaches learn an end-to-end model that implicitly models both the governing PDE and material models.
We argue that the governing PDEs are often well-known and should be explicitly enforced rather than learned.
We introduce a new framework termed "Neural Constitutive Laws" (NCLaw) which utilizes a network architecture that strictly guarantees standard priors.
arXiv Detail & Related papers (2023-04-27T17:42:24Z) - ConCerNet: A Contrastive Learning Based Framework for Automated
Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z) - A new family of Constitutive Artificial Neural Networks towards
automated model discovery [0.0]
Neural Networks are powerful approximators that can learn function relations from large data without any knowledge of the underlying physics.
We show that Constitutive Artificial Neural Networks have the potential to shift the paradigm from user-defined model selection to automated model discovery.
arXiv Detail & Related papers (2022-09-15T18:33:37Z) - A physics-informed deep neural network for surrogate modeling in
classical elasto-plasticity [0.0]
We present a deep neural network architecture that can efficiently approximate classical elasto-plastic relations.
The network is enriched with crucial physics aspects of classical elasto-plasticity, including additive decomposition of strains into elastic and plastic parts.
We show that embedding these physics into the architecture of the neural network facilitates a more efficient training of the network with less training data.
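The additive decomposition mentioned in the blurb can be made concrete with a standard 1D return-mapping step. This is a textbook sketch for illustration (perfect plasticity, assumed material constants), not the paper's network architecture.

```python
# Additive strain decomposition eps_total = eps_elastic + eps_plastic,
# with an elastic-predictor / plastic-corrector (return mapping) update.
E, sigma_y = 200e3, 250.0  # Young's modulus and yield stress (assumed, MPa)

def stress_update(eps_total, eps_plastic):
    sigma_trial = E * (eps_total - eps_plastic)  # elastic predictor
    f = abs(sigma_trial) - sigma_y               # yield function
    if f <= 0.0:
        return sigma_trial, eps_plastic          # elastic step
    # Plastic corrector: return the stress to the yield surface.
    d_gamma = f / E
    eps_plastic += d_gamma * (1.0 if sigma_trial > 0 else -1.0)
    return E * (eps_total - eps_plastic), eps_plastic

sigma, eps_p = stress_update(0.002, 0.0)  # strain beyond yield (0.00125)
```

Embedding this split in a network means only the plastic evolution must be learned, while the elastic stress-strain relation is enforced exactly.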
arXiv Detail & Related papers (2022-04-26T05:58:13Z) - EINNs: Epidemiologically-Informed Neural Networks [75.34199997857341]
We introduce a new class of physics-informed neural networks, EINNs, crafted for epidemic forecasting.
We investigate how to leverage both the theoretical flexibility provided by mechanistic models and the data-driven expressibility afforded by AI models.
arXiv Detail & Related papers (2022-02-21T18:59:03Z) - Constructing Neural Network-Based Models for Simulating Dynamical
Systems [59.0861954179401]
Data-driven modeling is an alternative paradigm that seeks to learn an approximation of the dynamics of a system using observations of the true system.
This paper provides a survey of the different ways to construct models of dynamical systems using neural networks.
In addition to the basic overview, we review the related literature and outline the most significant challenges from numerical simulations that this modeling paradigm must overcome.
arXiv Detail & Related papers (2021-11-02T10:51:42Z) - Thermodynamic Consistent Neural Networks for Learning Material
Interfacial Mechanics [6.087530833458481]
The traction-separation relations (TSR) quantitatively describe the mechanical behavior of a material interface undergoing opening.
A neural network can fit the loading paths well but often fails to obey the laws of physics.
We propose a thermodynamic consistent neural network (TCNN) approach to build a data-driven model of the TSR with sparse experimental data.
arXiv Detail & Related papers (2020-11-28T17:25:10Z) - Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.