A machine learning based plasticity model using proper orthogonal
decomposition
- URL: http://arxiv.org/abs/2001.03438v1
- Date: Tue, 7 Jan 2020 15:46:16 GMT
- Title: A machine learning based plasticity model using proper orthogonal
decomposition
- Authors: Dengpeng Huang, Jan Niklas Fuhg, Christian Weißenfels, Peter
Wriggers
- Abstract summary: Data-driven material models have many advantages over classical numerical approaches.
One approach to develop a data-driven material model is to use machine learning tools.
A machine learning based material modelling framework is proposed for both elasticity and plasticity.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven material models have many advantages over classical numerical
approaches, such as the direct utilization of experimental data and the
possibility of improving prediction performance when additional data become
available. One approach to developing a data-driven material model is to use
machine learning tools. These can be trained offline to fit an observed
material behaviour and then be applied in online applications. However,
learning and predicting history-dependent material models, such as plasticity,
is still challenging. In this work, a machine learning based material modelling
framework is proposed for both elasticity and plasticity. The machine learning
based hyperelasticity model is developed directly with a feed-forward neural
network (FNN), whereas the machine learning based plasticity model is developed
using a novel method called the Proper Orthogonal Decomposition Feed-forward
Neural Network (PODFNN). To account for the loading history, the accumulated
absolute strain is proposed as the history variable of the plasticity model.
Additionally, the strain-stress sequence data for plasticity are collected from
different loading-unloading paths, following the idea of treating the plastic
response as a sequence. By means of POD, the multi-dimensional stress sequence
is decoupled into independent one-dimensional coefficient sequences. As a
result, the neural network with multiple outputs is replaced by multiple
independent neural networks, each with a one-dimensional output, which leads to
shorter training times and better training performance. To apply the machine
learning based material model in finite element analysis, the tangent matrix is
derived with the automatic symbolic differentiation tool AceGen. The
effectiveness and generalization of the presented models are investigated by a
series of numerical examples using both 2D and 3D finite element analysis.
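The abstract compresses the core PODFNN recipe: collect strain-stress sequences from loading-unloading paths, use the accumulated absolute strain as the history input, decouple the stress components into POD coefficient sequences, and train one scalar-output network per coefficient. The sketch below illustrates that recipe on synthetic data; NumPy, scikit-learn's MLPRegressor, the synthetic loading data, the truncation to three modes, and all network sizes are assumptions for illustration, not the authors' implementation (which is coupled to finite elements via AceGen).

```python
# Hedged sketch of the PODFNN idea: POD decouples the multi-dimensional stress
# sequence into independent 1D coefficient sequences, each learned by its own
# small feed-forward network.  Data and model choices here are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic training data: n_samples points along a loading path (placeholder).
n_samples, n_strain, n_stress = 2000, 6, 6
strain = rng.uniform(-0.02, 0.02, size=(n_samples, n_strain))
# History variable: accumulated absolute strain along the (synthetic) path.
acc_abs_strain = np.cumsum(np.abs(np.diff(strain, axis=0, prepend=0.0)),
                           axis=0).sum(axis=1, keepdims=True)
X = np.hstack([strain, acc_abs_strain])                    # inputs: strain + history
stress = strain @ rng.normal(size=(n_strain, n_stress))    # placeholder stress response

# --- POD of the stress snapshot matrix via SVD ---
stress_mean = stress.mean(axis=0)
U, s, Vt = np.linalg.svd(stress - stress_mean, full_matrices=False)
n_modes = 3                                   # truncation: keep dominant modes
basis = Vt[:n_modes]                          # (n_modes, n_stress) POD modes
coeffs = (stress - stress_mean) @ basis.T     # independent 1D coefficient sequences

# --- One independent FNN per POD coefficient (scalar output each) ---
nets = []
for k in range(n_modes):
    net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=3000, random_state=k)
    net.fit(X, coeffs[:, k])
    nets.append(net)

# --- Reconstruction: predicted coefficients mapped back to stress space ---
coeffs_pred = np.column_stack([net.predict(X) for net in nets])
stress_pred = coeffs_pred @ basis + stress_mean
print("reconstruction RMS error:", np.sqrt(np.mean((stress_pred - stress) ** 2)))
```

Training several scalar-output networks instead of one multi-output network mirrors the decoupling argued for in the abstract; in a real application the number of retained modes would be chosen from the decay of the singular values rather than fixed in advance.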
Related papers
- Discovering Interpretable Physical Models using Symbolic Regression and
Discrete Exterior Calculus [55.2480439325792]
We propose a framework that combines Symbolic Regression (SR) and Discrete Exterior Calculus (DEC) for the automated discovery of physical models.
DEC provides building blocks for the discrete analogue of field theories, which are beyond the state-of-the-art applications of SR to physical problems.
We prove the effectiveness of our methodology by re-discovering three models of Continuum Physics from synthetic experimental data.
arXiv Detail & Related papers (2023-10-10T13:23:05Z)
- Learning Latent Dynamics via Invariant Decomposition and
(Spatio-)Temporal Transformers [0.6767885381740952]
We propose a method for learning dynamical systems from high-dimensional empirical data.
We focus on the setting in which data are available from multiple different instances of a system.
We study behaviour through simple theoretical analyses and extensive experiments on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-06-21T07:52:07Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Fitting a Directional Microstructure Model to Diffusion-Relaxation MRI
Data with Self-Supervised Machine Learning [2.8167227950959206]
Self-supervised machine learning is emerging as an attractive alternative to supervised learning.
In this paper, we demonstrate self-supervised machine learning model fitting for a directional microstructural model.
Our approach shows clear improvements in parameter estimation and computational time, compared to standard non-linear least squares fitting.
arXiv Detail & Related papers (2022-10-05T15:51:39Z)
- Thermodynamically Consistent Machine-Learned Internal State Variable
Approach for Data-Driven Modeling of Path-Dependent Materials [0.76146285961466]
Data-driven machine learning models, such as deep neural networks and recurrent neural networks (RNNs), have become viable alternatives.
This study proposes a machine-learned, data-driven modeling approach for path-dependent materials based on measurable material states.
arXiv Detail & Related papers (2022-05-01T23:25:08Z)
- Capturing Actionable Dynamics with Structured Latent Ordinary
Differential Equations [68.62843292346813]
We propose a structured latent ODE model that captures system input variations within its latent representation.
Building on a static variable specification, our model learns factors of variation for each input to the system, thus separating the effects of the system inputs in the latent space.
arXiv Detail & Related papers (2022-02-25T20:00:56Z)
- Learning continuous models for continuous physics [94.42705784823997]
We develop a test based on numerical analysis theory to validate machine learning models for science and engineering applications.
Our results illustrate how principled numerical analysis methods can be coupled with existing ML training/testing methodologies to validate models for science and engineering applications.
arXiv Detail & Related papers (2022-02-17T07:56:46Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Using Data Assimilation to Train a Hybrid Forecast System that Combines
Machine-Learning and Knowledge-Based Components [52.77024349608834]
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data is noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z)
- Physics-based polynomial neural networks for one-shot learning of
dynamical systems from one or a few samples [0.0]
The paper describes practical results on both a simple pendulum and one of the largest X-ray sources worldwide.
It is demonstrated in practice that the proposed approach allows recovering complex physics from noisy, limited, and partial observations.
arXiv Detail & Related papers (2020-05-24T09:27:10Z)
- Learning Queuing Networks by Recurrent Neural Networks [0.0]
We propose a machine-learning approach to derive performance models from data.
We exploit a deterministic approximation of their average dynamics in terms of a compact system of ordinary differential equations.
This allows for an interpretable structure of the neural network, which can be trained from system measurements to yield a white-box parameterized model.
arXiv Detail & Related papers (2020-02-25T10:56:47Z)