A peridynamic-informed deep learning model for brittle damage prediction
- URL: http://arxiv.org/abs/2310.01350v1
- Date: Mon, 2 Oct 2023 17:12:20 GMT
- Title: A peridynamic-informed deep learning model for brittle damage prediction
- Authors: Roozbeh Eghbalpoor, Azadeh Sheidaei
- Abstract summary: A novel approach that combines peridynamic (PD) theory with PINN is presented to predict quasi-static damage and crack propagation in brittle materials.
The proposed PD-INN is able to learn and capture intricate displacement patterns associated with different geometrical parameters.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, a novel approach that combines the principles of peridynamic
(PD) theory with physics-informed neural networks (PINNs) is presented to predict quasi-static damage and crack
propagation in brittle materials. To achieve high prediction accuracy and
convergence rate, the linearized PD governing equation is enforced in the
PINN's residual-based loss function. The proposed PD-INN is able to learn and
capture intricate displacement patterns associated with different geometrical
parameters, such as pre-crack position and length. Several enhancements like
cyclical annealing schedule and deformation gradient aware optimization
technique are proposed to ensure the model would not get stuck in its trivial
solution. The model's performance assessment is conducted by monitoring the
behavior of loss function throughout the training process. The PD-INN
predictions are also validated through several benchmark cases with the results
obtained from high-fidelity techniques such as PD direct numerical method and
Extended-Finite Element Method. Our results show the ability of the nonlocal
PD-INN to predict damage and crack propagation accurately and efficiently.
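The abstract describes two concrete ingredients: the linearized PD governing equation enforced in the PINN's residual-based loss, and a cyclical annealing schedule to keep the model out of trivial solutions. A minimal NumPy sketch of both, using a toy 1D bond-based peridynamic residual; the paper's actual linearized 2D operator, schedule parameters, and loss names are not given here, so everything below is an illustrative assumption:

```python
import numpy as np

def cyclical_weight(step, cycle_len=1000, ramp_frac=0.5):
    """Cyclical annealing schedule (hypothetical form): within each
    cycle the PDE-loss weight ramps linearly from 0 to 1 over the
    first ramp_frac of the cycle, then holds at 1."""
    pos = (step % cycle_len) / cycle_len
    return min(pos / ramp_frac, 1.0)

def pd_residual_1d(u, x, horizon, c):
    """Toy 1D bond-based peridynamic internal-force residual: each
    node sums pairwise bond forces c*(u_j - u_i)/|x_j - x_i| over
    neighbours within the horizon. Illustrative only; the paper
    enforces a linearized 2D PD operator."""
    n = len(x)
    f = np.zeros(n)
    for i in range(n):
        for j in range(n):
            if i != j and abs(x[j] - x[i]) <= horizon:
                f[i] += c * (u[j] - u[i]) / abs(x[j] - x[i])
    return f

def pd_inn_loss(u_pred, u_data, residual, step):
    """Residual-based loss: data mismatch plus the cyclically
    annealed PD residual term."""
    data_loss = np.mean((u_pred - u_data) ** 2)
    pde_loss = np.mean(residual ** 2)
    return data_loss + cyclical_weight(step) * pde_loss
```

For a linear displacement field the toy residual vanishes at interior nodes, which is the sanity check one would run before training against it.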
Related papers
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Model-Based Reparameterization Policy Gradient Methods: Theory and Practical Algorithms [88.74308282658133]
Reparameterization (RP) Policy Gradient Methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding variance issue caused by long model unrolls.
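Spectral normalization bounds a weight matrix's largest singular value at 1, which limits how much each layer can amplify signals and gradients through long model unrolls. A generic sketch via power iteration; the paper's specific variant and where it is applied inside the model are not detailed here:

```python
import numpy as np

def spectral_normalize(W, n_iter=50):
    """Divide W by its largest singular value, estimated with
    power iteration, so the result has spectral norm ~1. Generic
    technique; the paper's exact formulation may differ."""
    rng = np.random.default_rng(0)
    u = rng.normal(size=W.shape[0])
    v = None
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v  # Rayleigh-quotient estimate of the top singular value
    return W / sigma
```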
arXiv Detail & Related papers (2023-10-30T18:43:21Z)
- Debias the Training of Diffusion Models [53.49637348771626]
We provide theoretical evidence that the prevailing practice of using a constant loss weight strategy in diffusion models leads to biased estimation during the training phase.
We propose an elegant and effective weighting strategy grounded in the theoretically unbiased principle.
These analyses are expected to advance our understanding and demystify the inner workings of diffusion models.
arXiv Detail & Related papers (2023-10-12T16:04:41Z)
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important in forecasting nonstationary processes or processes with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the resulting tessellation and approximate the multiple-hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
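Implicit (proximal) gradient descent evaluates the gradient at the new iterate, x_new = x - lr * grad_f(x_new), which damps rather than amplifies large gradients on stiff objectives. A minimal fixed-point sketch of one such step; this is generic ISGD, not the paper's exact PINN training scheme:

```python
def implicit_sgd_step(x, grad_f, lr, n_inner=50):
    """Solve x_new = x - lr * grad_f(x_new) by fixed-point
    iteration. For grad_f(z) = z this converges to x / (1 + lr),
    a damped update, whereas explicit SGD would take x - lr * x."""
    x_new = x
    for _ in range(n_inner):
        x_new = x - lr * grad_f(x_new)
    return x_new
```

The fixed-point loop converges here because lr times the gradient's Lipschitz constant is below 1; practical ISGD solvers handle the inner problem more robustly.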
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- PINN Training using Biobjective Optimization: The Trade-off between Data Loss and Residual Loss [0.0]
Physics-informed neural networks (PINNs) have proven to be an efficient tool for solving problems for which measured data are available.
In this paper, we suggest a multiobjective perspective on the training of PINNs by treating the data loss and the residual loss as two individual objective functions.
arXiv Detail & Related papers (2023-02-03T15:27:50Z)
- Prediction intervals for neural network models using weighted asymmetric loss functions [0.3093890460224435]
We propose a simple and efficient approach to generate prediction intervals (PIs) for approximated and forecasted trends.
Our method leverages a weighted asymmetric loss function to estimate the lower and upper bounds of the PI.
We show how it can be extended to derive PIs for parametrised functions and discuss its effectiveness when training deep neural networks.
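A standard way to realize a weighted asymmetric loss for interval bounds is the pinball (quantile) loss: errors on one side cost q per unit and on the other 1 - q, so one head trained at q = 0.05 and another at q = 0.95 bound a 90% PI. A sketch of this generic construction; the paper's exact weighting scheme may differ:

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Asymmetric quantile loss: under-prediction costs q per
    unit, over-prediction costs (1 - q) per unit, so minimizing
    it pushes y_pred toward the q-th conditional quantile."""
    e = y_true - y_pred
    return float(np.mean(np.maximum(q * e, (q - 1.0) * e)))
```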
arXiv Detail & Related papers (2022-10-09T18:58:24Z)
- Digital-twin-enhanced metal tube bending forming real-time prediction method based on Multi-source-input MTL [0.0]
Forming accuracy is seriously affected by springback and other potential forming defects.
The existing methods are mainly conducted in offline space, ignoring the real-time information in the physical world.
This paper proposes a digital-twin-enhanced (DT-enhanced) metal tube bending forming real-time prediction method.
arXiv Detail & Related papers (2022-07-03T05:49:04Z)
- Robust discovery of partial differential equations in complex situations [3.7314701799132686]
A robust deep learning-genetic algorithm (R-DLGA) that incorporates the physics-informed neural network (PINN) is proposed in this work.
The stability and accuracy of the proposed R-DLGA in several complex situations are examined as a proof of concept.
Results prove that the proposed framework is able to calculate derivatives accurately with the optimization of PINN.
arXiv Detail & Related papers (2021-05-31T02:11:59Z)
- Adaptive Degradation Process with Deep Learning-Driven Trajectory [5.060233857860902]
Remaining useful life (RUL) estimation is a crucial component in the implementation of intelligent predictive maintenance and health management.
This paper develops a hybrid DNN-based prognostic approach, where a Wiener-process-based degradation model is enhanced with adaptive drift to characterize the system degradation.
An LSTM-CNN encoder-decoder is developed to predict future degradation trajectories by jointly learning noise coefficients as well as drift coefficients, and adaptive drift is updated via Bayesian inference.
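The degradation backbone here is a Wiener process with drift, x(t+dt) = x(t) + drift*dt + sigma*sqrt(dt)*N(0,1); in the hybrid approach the drift would be re-estimated online via Bayesian inference. A sketch of the base process with a fixed drift, with all parameter names illustrative:

```python
import numpy as np

def wiener_degradation(x0, drift, sigma, n_steps, dt=1.0, seed=0):
    """Simulate one Wiener-process degradation path. RUL methods
    threshold such paths at a failure level; here we only
    generate the trajectory, holding the drift constant."""
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1)
    x[0] = x0
    for t in range(n_steps):
        x[t + 1] = x[t] + drift * dt + sigma * np.sqrt(dt) * rng.normal()
    return x
```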
arXiv Detail & Related papers (2021-03-22T06:00:42Z)
- BERT Loses Patience: Fast and Robust Inference with Early Exit [91.26199404912019]
We propose Patience-based Early Exit as a plug-and-play technique to improve the efficiency and robustness of a pretrained language model.
Our approach improves inference efficiency as it allows the model to make a prediction with fewer layers.
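Patience-based early exit attaches an internal classifier to every layer and stops as soon as consecutive classifiers agree for a fixed number of layers. A sketch of that stopping rule, with the classifiers abstracted away as a list of per-layer predictions:

```python
def patience_early_exit(layer_predictions, patience=2):
    """Return (prediction, exit_depth): exit at the first layer
    whose prediction has stayed unchanged for `patience`
    consecutive layers; otherwise fall through to the last layer."""
    streak = 0
    prev = None
    for depth, pred in enumerate(layer_predictions, start=1):
        streak = streak + 1 if pred == prev else 1
        prev = pred
        if streak >= patience:
            return pred, depth
    return prev, len(layer_predictions)
```

Stable inputs thus exit early while inputs whose prediction keeps flipping use the full depth, which is where both the efficiency and the robustness claims come from.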
arXiv Detail & Related papers (2020-06-07T13:38:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.