Efficient Implementation of Non-linear Flow Law Using Neural Network
into the Abaqus Explicit FEM code
- URL: http://arxiv.org/abs/2209.03190v1
- Date: Wed, 7 Sep 2022 14:37:09 GMT
- Title: Efficient Implementation of Non-linear Flow Law Using Neural Network
into the Abaqus Explicit FEM code
- Authors: Olivier Pantalé and Pierre Tize Mha and Amèvi Tongne
- Abstract summary: An Artificial Neural Network (ANN) model is used in a finite element formulation to define the flow law of a metallic material.
The results obtained show a very high capability of the ANN to replace the analytical formulation of a Johnson-Cook behavior law in a finite element code.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Machine learning techniques are increasingly used to predict material
behavior in scientific applications and offer a significant advantage over
conventional numerical methods. In this work, an Artificial Neural Network
(ANN) model is used in a finite element formulation to define the flow law of a
metallic material as a function of plastic strain, plastic strain rate and
temperature. First, we present the general structure of the neural network and
its operation, and we focus on the network's ability to deduce, without prior
learning, the derivatives of the flow law with respect to the model inputs. In
order to validate the robustness and accuracy of the proposed model, we compare
and analyze the performance of several network architectures with respect to
the analytical formulation of a Johnson-Cook behavior law for a 42CrMo4 steel.
In the second part, after selecting an Artificial Neural Network architecture
with two hidden layers, we present the implementation of this
model in the Abaqus Explicit computational code in the form of a VUHARD
subroutine. The predictive capability of the proposed model is then
demonstrated during the numerical simulation of two test cases: the necking of
a circular bar and a Taylor impact test. The results obtained show a very high
capability of the ANN to replace the analytical formulation of a Johnson-Cook
behavior law in a finite element code, while remaining competitive in terms of
numerical simulation time compared to a classical approach.
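As a rough illustration of the two ingredients involved, the sketch below implements the standard Johnson-Cook flow law and a small two-hidden-layer tanh network whose output and input derivatives are obtained in a single forward/reverse pass, which is what a VUHARD-style hardening routine must return. The parameter values, layer widths, and random weights are illustrative placeholders only, not the paper's fitted 42CrMo4 coefficients or trained model.

```python
import numpy as np

# Johnson-Cook flow law:
#   sigma = (A + B*eps_p^n) * (1 + C*ln(edot/edot0)) * (1 - T*^m),
#   with T* = (T - T0) / (Tm - T0).
# All parameter values below are illustrative placeholders,
# NOT the paper's fitted 42CrMo4 coefficients.
A, B, n, C, m = 800.0, 600.0, 0.2, 0.01, 1.0
T0, Tm, edot0 = 293.0, 1793.0, 1.0

def johnson_cook(eps_p, edot, T):
    """Analytical flow stress sigma(eps_p, edot, T)."""
    t_star = (T - T0) / (Tm - T0)
    return (A + B * eps_p**n) * (1.0 + C * np.log(edot / edot0)) * (1.0 - t_star**m)

# A tiny 2-hidden-layer tanh network mapping (eps_p, edot, T) -> sigma.
# Random weights stand in for a trained model; in practice the inputs
# would also be normalized before being fed to the network.
rng = np.random.default_rng(0)
sizes = [3, 15, 7, 1]
Ws = [rng.normal(size=(b, a)) for a, b in zip(sizes[:-1], sizes[1:])]
bs = [np.zeros(b) for b in sizes[1:]]

def ann_and_grad(x):
    """Network output and its gradient w.r.t. the 3 inputs.

    The reverse pass below yields d(sigma)/d(eps_p, edot, T) analytically,
    which is exactly the information a hardening subroutine needs,
    with no extra training required for the derivatives."""
    hs, h = [], np.asarray(x, dtype=float)
    for W, b in zip(Ws[:-1], bs[:-1]):
        h = np.tanh(W @ h + b)
        hs.append(h)
    y = (Ws[-1] @ h + bs[-1])[0]
    g = Ws[-1][0].copy()                  # dy/dh for the last hidden layer
    for W, h in zip(reversed(Ws[:-1]), reversed(hs)):
        g = W.T @ (g * (1.0 - h * h))     # tanh'(z) = 1 - tanh(z)^2
    return y, g
```

At the reference conditions (`edot = edot0`, `T = T0`, `eps_p = 0`) the Johnson-Cook expression collapses to the static yield stress `A`, which is a quick sanity check; the network's analytic gradient can likewise be checked against finite differences.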
Related papers
- HANNA: Hard-constraint Neural Network for Consistent Activity Coefficient Prediction [16.024570580558954]
We present the first hard-constraint neural network for predicting activity coefficients (HANNA)
The activity coefficient is a thermodynamic mixture property that is the basis for many applications in science and engineering.
The model was trained and evaluated on 317,421 data points for activity coefficients in binary mixtures from the Dortmund Data Bank.
arXiv Detail & Related papers (2024-07-25T13:05:00Z)
- Physics-Informed Neural Networks with Hard Linear Equality Constraints [9.101849365688905]
This work proposes a novel physics-informed neural network, KKT-hPINN, which rigorously guarantees hard linear equality constraints.
Experiments on Aspen models of a stirred-tank reactor unit, an extractive distillation subsystem, and a chemical plant demonstrate that this model can further enhance the prediction accuracy.
arXiv Detail & Related papers (2024-02-11T17:40:26Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Gibbs-Duhem-Informed Neural Networks for Binary Activity Coefficient Prediction [45.84205238554709]
We propose Gibbs-Duhem-informed neural networks for the prediction of binary activity coefficients at varying compositions.
We include the Gibbs-Duhem equation explicitly in the loss function for training neural networks.
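The idea of folding the Gibbs-Duhem relation into the training loss can be sketched as follows. For a binary mixture at constant temperature and pressure, the constraint reads x1·d(ln γ1)/dx1 + x2·d(ln γ2)/dx1 = 0. The two-suffix Margules model below is only a stand-in for a neural network's predictions (chosen because it satisfies the constraint exactly), and the penalty weight `lam_gd` is a hypothetical hyperparameter, not a value from the paper.

```python
import numpy as np

def ln_gammas(x1, A=0.5):
    """Two-suffix Margules model: ln g1 = A*x2^2, ln g2 = A*x1^2.
    Stand-in for a neural network's predicted activity coefficients."""
    x2 = 1.0 - x1
    return A * x2**2, A * x1**2

def gibbs_duhem_residual(x1, h=1e-5):
    """Binary Gibbs-Duhem residual x1*dlng1/dx1 + x2*dlng2/dx1,
    estimated with central finite differences (a trained network
    would instead use automatic differentiation)."""
    g1p, g2p = ln_gammas(x1 + h)
    g1m, g2m = ln_gammas(x1 - h)
    d1 = (g1p - g1m) / (2.0 * h)
    d2 = (g2p - g2m) / (2.0 * h)
    return x1 * d1 + (1.0 - x1) * d2

def total_loss(x1_data, lng1_target, lng2_target, lam_gd=1.0):
    """Data MSE plus a Gibbs-Duhem penalty term; lam_gd is a
    hypothetical weighting hyperparameter."""
    g1, g2 = ln_gammas(x1_data)
    data = np.mean((g1 - lng1_target)**2 + (g2 - lng2_target)**2)
    gd = np.mean(gibbs_duhem_residual(x1_data)**2)
    return data + lam_gd * gd
```

Because the Margules expressions are thermodynamically consistent, the residual vanishes at every composition, so the penalty only activates when the model's predictions violate Gibbs-Duhem.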
arXiv Detail & Related papers (2023-05-31T07:36:45Z)
- Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Neural net modeling of equilibria in NSTX-U [0.0]
We develop two neural networks relevant to equilibrium and shape control modeling.
Networks include Eqnet, a free-boundary equilibrium solver trained on the EFIT01 reconstruction algorithm, and Pertnet, which is trained on the Gspert code.
We report strong performance for both networks indicating that these models could reliably be used within closed-loop simulations.
arXiv Detail & Related papers (2022-02-28T16:09:58Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice requires expensive computational costs in model training for performance prediction.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Liquid Time-constant Networks [117.57116214802504]
We introduce a new class of time-continuous recurrent neural network models.
Instead of declaring a learning system's dynamics by implicit nonlinearities, we construct networks of linear first-order dynamical systems.
These neural networks exhibit stable and bounded behavior and yield superior expressivity within the family of neural ordinary differential equations.
arXiv Detail & Related papers (2020-06-08T09:53:35Z)
- Learning Queuing Networks by Recurrent Neural Networks [0.0]
We propose a machine-learning approach to derive performance models from data.
We exploit a deterministic approximation of their average dynamics in terms of a compact system of ordinary differential equations.
This allows for an interpretable structure of the neural network, which can be trained from system measurements to yield a white-box parameterized model.
arXiv Detail & Related papers (2020-02-25T10:56:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.