A peridynamic-informed deep learning model for brittle damage prediction
- URL: http://arxiv.org/abs/2310.01350v1
- Date: Mon, 2 Oct 2023 17:12:20 GMT
- Title: A peridynamic-informed deep learning model for brittle damage prediction
- Authors: Roozbeh Eghbalpoor, Azadeh Sheidaei
- Abstract summary: A novel approach that combines peridynamic (PD) theory with a physics-informed neural network (PINN) is presented to predict quasi-static damage and crack propagation in brittle materials.
The proposed PD-INN is able to learn and capture intricate displacement patterns associated with different geometrical parameters.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this study, a novel approach that combines the principles of peridynamic (PD) theory with a physics-informed neural network (PINN) is presented to predict quasi-static damage and crack propagation in brittle materials. To achieve high prediction accuracy and a fast convergence rate, the linearized PD governing equation is enforced in the PINN's residual-based loss function. The proposed PD-INN is able to learn and capture intricate displacement patterns associated with different geometrical parameters, such as pre-crack position and length. Several enhancements, such as a cyclical annealing schedule and a deformation-gradient-aware optimization technique, are proposed to keep the model from getting stuck in a trivial solution. The model's performance is assessed by monitoring the behavior of the loss function throughout the training process. The PD-INN predictions are also validated on several benchmark cases against results obtained from high-fidelity techniques such as the PD direct numerical method and the Extended Finite Element Method (XFEM). Our results show the ability of the nonlocal PD-INN to predict damage and crack propagation accurately and efficiently.
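The abstract names three concrete ingredients: a linearized PD residual enforced in the loss, a cyclical annealing schedule, and a deformation-gradient-aware optimizer. As a rough illustration only, the following PyTorch sketch shows how a linearized bond-based PD equilibrium residual could be combined with a cyclically annealed weight; the network architecture, bond stiffness `c`, and neighbor bookkeeping are assumptions made for the sketch, not the authors' implementation.

```python
# Hypothetical sketch of a PD-informed PINN loss; not the authors' code.
import torch
import torch.nn as nn

class DisplacementNet(nn.Module):
    """MLP mapping a 2D material point x to its displacement u(x)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
            nn.Linear(hidden, 2),
        )

    def forward(self, x):
        return self.net(x)

def pd_residual(u, neighbors, volumes, c, body_force):
    """Linearized bond-based PD equilibrium at each point i (quasi-static):
    sum_j c * (u_j - u_i) * V_j + b_i ~ 0."""
    u_i = u[:, None, :]                    # (N, 1, 2)
    u_j = u[neighbors]                     # (N, K, 2) neighbors within horizon
    v_j = volumes[neighbors][..., None]    # (N, K, 1)
    return (c * (u_j - u_i) * v_j).sum(dim=1) + body_force

def annealing_weight(step, period=1000):
    """Cyclical annealing: ramp the residual weight from 0 to 1 within each
    cycle so early training is dominated by the boundary/data terms."""
    return min(1.0, 2.0 * (step % period) / period)

# One training step (schematic):
# u = model(points)                                  # (N, 2)
# loss = bc_loss(u) + annealing_weight(step) * pd_residual(
#     u, neighbors, volumes, c, body_force).pow(2).mean()
```

Ramping the residual weight up from zero each cycle lets the boundary and data terms shape the solution first, which is one common way to steer the optimizer away from the trivial minimizer of the residual term.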
Related papers
- Outlier-aware Tensor Robust Principal Component Analysis with Self-guided Data Augmentation [21.981038455329013]
We propose a self-guided data augmentation approach that employs adaptive weighting to suppress outlier influence.
We show improvements in both accuracy and computational efficiency over state-of-the-art methods.
arXiv Detail & Related papers (2025-04-25T13:03:35Z)
- Interpretable Deep Regression Models with Interval-Censored Failure Time Data [1.2993568435938014]
Deep learning methods for interval-censored data remain underexplored and limited to specific data types or models.
This work proposes a general regression framework for interval-censored data with a broad class of partially linear transformation models.
Applying our method to the Alzheimer's Disease Neuroimaging Initiative dataset yields novel insights and improved predictive performance compared to traditional approaches.
arXiv Detail & Related papers (2025-03-25T15:27:32Z)
- Physics-Informed Neural Network Surrogate Models for River Stage Prediction [0.0]
PINNs can successfully approximate HEC-RAS numerical solutions when trained on a single river.
We evaluate the model's performance in terms of accuracy and computational speed.
arXiv Detail & Related papers (2025-03-21T04:48:22Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Model-Based Reparameterization Policy Gradient Methods: Theory and Practical Algorithms [88.74308282658133]
Reparameterization (RP) policy gradient methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding-variance issue caused by long model unrolls; a generic sketch of spectral normalization follows this entry.
arXiv Detail & Related papers (2023-10-30T18:43:21Z)
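Spectral normalization itself is a standard, self-contained operation; the power-iteration sketch below is offered only as background for the summary above, and says nothing about how the paper wires it into model-based RP policy gradients.

```python
# Generic spectral normalization via power iteration; illustrative background,
# not the paper's implementation.
import torch
import torch.nn.functional as F

def spectral_normalize(weight, u, n_iters=1, eps=1e-12):
    """Rescale `weight` (out_dim x in_dim) so its largest singular value is
    approximately 1. `u` is a persistent estimate of the leading left singular
    vector, shape (out_dim,)."""
    w = weight.detach()
    for _ in range(n_iters):
        v = F.normalize(w.t() @ u, dim=0, eps=eps)   # right singular vector
        u = F.normalize(w @ v, dim=0, eps=eps)       # left singular vector
    sigma = torch.dot(u, w @ v)                      # leading singular value
    return weight / sigma, u

# usage: w_sn, u = spectral_normalize(layer.weight, u)
```

PyTorch also ships a built-in version of this idea as a module parametrization, `torch.nn.utils.parametrizations.spectral_norm`.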
- Structured Radial Basis Function Network: Modelling Diversity for Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important when forecasting nonstationary processes or complex mixtures of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate the underlying tessellation and approximate the multi-hypothesis target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been shown to be effective in solving forward and inverse differential equation problems.
PINNs can suffer training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process; a proximal-point sketch of the implicit update follows this entry.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
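For readers unfamiliar with the implicit update, it can be read as a proximal-point step, theta_{k+1} = argmin_theta L(theta) + ||theta - theta_k||^2 / (2*eta), which the sketch below approximates with a few explicit inner iterations; the paper's actual solver and PINN-specific details are not reproduced.

```python
# Illustrative implicit (stochastic) gradient step via its proximal-point form;
# a hypothetical sketch, not the paper's algorithm.
import torch

def implicit_sgd_step(params, loss_fn, eta=1e-2, inner_steps=5, inner_lr=1e-3):
    """Approximately solve argmin_theta loss + ||theta - anchor||^2 / (2*eta)
    with a few explicit gradient iterations starting from the anchor."""
    anchor = [p.detach().clone() for p in params]
    for _ in range(inner_steps):
        prox = sum(((p - a) ** 2).sum() for p, a in zip(params, anchor))
        total = loss_fn() + prox / (2.0 * eta)
        grads = torch.autograd.grad(total, params)
        with torch.no_grad():
            for p, g in zip(params, grads):
                p -= inner_lr * g
```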
- PINN Training using Biobjective Optimization: The Trade-off between Data Loss and Residual Loss [0.0]
Physics-informed neural networks (PINNs) have proven to be an efficient tool for representing problems for which measured data are available.
In this paper, we suggest a multiobjective perspective on the training of PINNs by treating the data loss and the residual loss as two individual objective functions; a weighted-sum scalarization sketch follows this entry.
arXiv Detail & Related papers (2023-02-03T15:27:50Z)
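One elementary way to expose that trade-off is weighted-sum scalarization: train to convergence for several fixed weights and record both losses. This is only a sketch of the multiobjective viewpoint, not the paper's method; `data_loss_fn` and `residual_loss_fn` are assumed callables returning scalar tensors.

```python
# Hypothetical sketch: trace the data-loss/residual-loss trade-off by sweeping
# a scalarization weight `lam` between the two PINN objectives.
import torch

def tradeoff_point(model, data_loss_fn, residual_loss_fn, lam, steps=2000):
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = lam * data_loss_fn(model) + (1.0 - lam) * residual_loss_fn(model)
        loss.backward()
        opt.step()
    return data_loss_fn(model).item(), residual_loss_fn(model).item()

# front = [tradeoff_point(make_model(), f_data, f_res, lam)
#          for lam in (0.1, 0.3, 0.5, 0.7, 0.9)]
```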
- Prediction intervals for neural network models using weighted asymmetric loss functions [0.3093890460224435]
We propose a simple and efficient approach to generate prediction intervals (PIs) for approximated and forecasted trends.
Our method leverages a weighted asymmetric loss function to estimate the lower and upper bounds of the PI.
We show how it can be extended to derive PIs for parametrised functions and discuss its effectiveness when training deep neural networks; a quantile-loss sketch follows this entry.
arXiv Detail & Related papers (2022-10-09T18:58:24Z)
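The best-known weighted asymmetric loss is the pinball (quantile) loss, sketched below; whether the paper uses exactly this form is an assumption here, but it shows the mechanism: asymmetric penalties push one output head toward a lower quantile and another toward an upper quantile.

```python
# Pinball (quantile) loss: under-prediction is penalized with weight tau,
# over-prediction with weight (1 - tau). Training heads at tau = 0.05 and
# tau = 0.95 yields an approximate 90% prediction interval.
import torch

def pinball_loss(y_pred, y_true, tau):
    diff = y_true - y_pred
    return torch.maximum(tau * diff, (tau - 1.0) * diff).mean()

# loss = pinball_loss(lower_head(x), y, tau=0.05) \
#        + pinball_loss(upper_head(x), y, tau=0.95)
```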
- Digital-twin-enhanced metal tube bending forming real-time prediction method based on Multi-source-input MTL [0.0]
Forming accuracy is seriously affected by springback and other potential forming defects.
Existing methods mainly operate offline, ignoring real-time information from the physical world.
This paper proposes a digital-twin-enhanced (DT-enhanced) metal tube bending forming real-time prediction method.
arXiv Detail & Related papers (2022-07-03T05:49:04Z)
- Robust discovery of partial differential equations in complex situations [3.7314701799132686]
A robust deep learning-genetic algorithm (R-DLGA) that incorporates the physics-informed neural network (PINN) is proposed in this work.
The stability and accuracy of the proposed R-DLGA are examined in several complex situations as a proof of concept.
Results show that the proposed framework is able to calculate derivatives accurately via the optimization of the PINN; an autograd-based derivative sketch follows this entry.
arXiv Detail & Related papers (2021-05-31T02:11:59Z)
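The derivative calculation that such PINN-based discovery rests on is plain automatic differentiation of the network output with respect to its inputs; a generic sketch (not the R-DLGA code) is given below.

```python
# Illustrative only: obtain u_t, u_x, u_xx from a network u(x, t) by autograd;
# candidate PDE library terms are then assembled from these derivatives.
import torch

def pde_derivatives(u_net, x, t):
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = u_net(torch.stack([x, t], dim=-1))
    ones = torch.ones_like(u)
    u_x = torch.autograd.grad(u, x, grad_outputs=ones, create_graph=True)[0]
    u_t = torch.autograd.grad(u, t, grad_outputs=ones, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, grad_outputs=torch.ones_like(u_x),
                               create_graph=True)[0]
    return u, u_t, u_x, u_xx
```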
- Adaptive Degradation Process with Deep Learning-Driven Trajectory [5.060233857860902]
Remaining useful life (RUL) estimation is a crucial component in the implementation of intelligent predictive maintenance and health management.
This paper develops a hybrid DNN-based prognostic approach, in which a Wiener-process-based degradation model is enhanced with adaptive drift to characterize the system degradation.
An LSTM-CNN encoder-decoder is developed to predict future degradation trajectories by jointly learning noise coefficients as well as drift coefficients, and adaptive drift is updated via Bayesian inference.
arXiv Detail & Related papers (2021-03-22T06:00:42Z)
- BERT Loses Patience: Fast and Robust Inference with Early Exit [91.26199404912019]
We propose Patience-based Early Exit as a plug-and-play technique to improve the efficiency and robustness of a pretrained language model.
Our approach improves inference efficiency by allowing the model to make a prediction with fewer layers; a minimal patience-counter sketch follows this entry.
arXiv Detail & Related papers (2020-06-07T13:38:32Z)
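The patience mechanism itself is simple to state: run the layers one by one and stop as soon as several consecutive internal classifiers agree on the label. The sketch below is a minimal rendering of that idea; the module names are hypothetical, and it assumes a batch of one, since per-example exits differ.

```python
# Minimal patience-based early exit (hypothetical module names; batch of one).
import torch

def early_exit_forward(layers, heads, hidden, patience=3):
    """`layers` and `heads` are equal-length lists; `heads[i]` maps the hidden
    state after `layers[i]` to class logits. Stop once `patience` consecutive
    heads predict the same label."""
    prev_label, streak = None, 0
    for layer, head in zip(layers, heads):
        hidden = layer(hidden)
        label = head(hidden).argmax(dim=-1)
        if prev_label is not None and torch.equal(label, prev_label):
            streak += 1
        else:
            streak = 0
        if streak >= patience:
            break                      # exit early: remaining layers skipped
        prev_label = label
    return label
```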
- Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques for improving model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against other, unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models; a reparameterized-perturbation sketch follows this entry.
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
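As a rough, ADT-style illustration (not the paper's exact formulation): instead of crafting a single perturbation, one can learn a per-batch perturbation distribution by the reparameterization trick and maximize the expected loss plus an entropy bonus, so the defense sees a spread of attacks rather than one.

```python
# ADT-style inner maximization, schematic only: a Gaussian perturbation
# distribution (mean mu, scale exp(log_sigma)) is optimized by gradient ascent;
# tanh squashing keeps perturbations inside the L-infinity ball of radius eps.
import torch
import torch.nn.functional as F

def adt_perturbation(model, x, y, eps=8/255, inner_steps=7, lr=0.1, beta=0.01):
    mu = torch.zeros_like(x, requires_grad=True)
    log_sigma = torch.full_like(x, -3.0).requires_grad_(True)
    for _ in range(inner_steps):
        delta = torch.tanh(mu + log_sigma.exp() * torch.randn_like(x)) * eps
        # expected adversarial loss plus an entropy-like bonus on the scale
        loss = F.cross_entropy(model(x + delta), y) + beta * log_sigma.mean()
        g_mu, g_ls = torch.autograd.grad(loss, (mu, log_sigma))
        with torch.no_grad():
            mu += lr * g_mu            # ascent: the inner problem maximizes
            log_sigma += lr * g_ls
    return (torch.tanh(mu + log_sigma.exp() * torch.randn_like(x)) * eps).detach()

# outer step: minimize F.cross_entropy(model(x + adt_perturbation(...)), y)
```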