Peri-Net-Pro: The neural processes with quantified uncertainty for crack patterns
- URL: http://arxiv.org/abs/2005.13461v1
- Date: Sat, 23 May 2020 06:33:37 GMT
- Title: Peri-Net-Pro: The neural processes with quantified uncertainty for crack patterns
- Authors: Moonseop Kim, Guang Lin
- Abstract summary: This paper uses the peridynamic theory to predict the crack patterns in a moving disk and classify them according to the modes.
Image classification and regression studies are conducted with Convolutional Neural Networks (CNNs) and neural processes.
- Score: 8.591839265985417
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper uses the peridynamic theory, which is well-suited to crack studies, to predict the crack patterns in a moving disk, classify them according to the modes, and finally perform regression analysis. The crack patterns for each mode are obtained by molecular dynamics (MD) simulation using peridynamics. Image classification and regression studies are conducted with Convolutional Neural Networks (CNNs) and neural processes. First, we increased the amount and quality of the data using peridynamics, which can theoretically compensate for the problems of the finite element method (FEM) in generating crack pattern images. Second, we performed case studies for the PMB, LPS, and VES models obtained with the peridynamic theory: the images were classified with CNNs to determine the suitability of the PMB, LPS, and VES models. Finally, we performed regression analysis on the crack pattern images with neural processes to predict the crack patterns. In the regression problem, plotting the predictive variance against the training epochs confirms that the variance decreases as the number of epochs increases. The most critical point of this study is that the neural processes make accurate predictions even when the training data are missing or insufficient.
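To make the uncertainty-quantification claim concrete, below is a minimal conditional-neural-process-style regressor that predicts a mean and a variance and is trained with a Gaussian negative log-likelihood, which is what drives the predictive variance down as the epochs accumulate. The architecture, layer sizes, and toy sine data are illustrative assumptions, not the authors' Peri-Net-Pro model.

```python
# Minimal conditional-neural-process-style regressor; an illustrative sketch,
# NOT the authors' Peri-Net-Pro architecture (sizes and data are assumptions).
import torch
import torch.nn as nn

class CNP(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=64):
        super().__init__()
        # Encoder: each context pair (x_i, y_i) -> representation r_i.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, 64), nn.ReLU(), nn.Linear(64, r_dim))
        # Decoder: (aggregated r, target x) -> predictive mean and log-variance.
        self.decoder = nn.Sequential(
            nn.Linear(r_dim + x_dim, 64), nn.ReLU(), nn.Linear(64, 2 * y_dim))

    def forward(self, xc, yc, xt):
        r = self.encoder(torch.cat([xc, yc], dim=-1)).mean(dim=0, keepdim=True)
        h = self.decoder(torch.cat([r.expand(xt.size(0), -1), xt], dim=-1))
        mu, log_var = h.chunk(2, dim=-1)
        return mu, log_var.exp()

model = CNP()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xc = torch.linspace(0, 1, 10).unsqueeze(-1)  # sparse context = "missing data"
yc = torch.sin(6 * xc)                       # toy stand-in for a crack response
xt = torch.linspace(0, 1, 50).unsqueeze(-1)
yt = torch.sin(6 * xt)

for epoch in range(2000):
    mu, var = model(xc, yc, xt)
    # Gaussian negative log-likelihood: fitting it shrinks the predictive
    # variance over epochs, matching the behavior reported in the abstract.
    loss = 0.5 * ((yt - mu) ** 2 / var + var.log()).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```

Because the decoder conditions on an aggregated context representation, the model still produces (wider-variance) predictions where context points are missing, which is the property the abstract highlights.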
Related papers
- How Inverse Conditional Flows Can Serve as a Substitute for Distributional Regression [2.9873759776815527]
We propose a framework for distributional regression using inverse flow transformations (DRIFT).
DRIFT covers both interpretable statistical models and flexible neural networks, opening up new avenues in both statistical modeling and deep learning.
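As a rough sketch of the inverse-transformation idea (not the paper's DRIFT implementation; the affine map, network sizes, and data are all assumptions), a map h(y | x), monotone in y, pushes the response onto a standard-normal base distribution and is trained by maximum likelihood via the change-of-variables formula:

```python
# Toy conditional transformation model in the spirit of DRIFT; the actual
# DRIFT parameterization is far more flexible. Everything here is assumed.
import torch
import torch.nn as nn

class AffineFlowRegression(nn.Module):
    def __init__(self, x_dim=1):
        super().__init__()
        self.shift = nn.Sequential(nn.Linear(x_dim, 32), nn.ReLU(), nn.Linear(32, 1))
        self.log_scale = nn.Parameter(torch.zeros(1))  # exp() keeps h monotone in y

    def nll(self, x, y):
        # z = h(y | x) maps the response onto a standard-normal base distribution.
        z = self.log_scale.exp() * y + self.shift(x)
        # Change of variables: log p(y|x) = log N(z; 0, 1) + log |dh/dy|.
        return (0.5 * z ** 2 - self.log_scale).mean()

model = AffineFlowRegression()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x = torch.randn(256, 1)
y = 2.0 * x + 0.5 * torch.randn(256, 1)  # synthetic regression data
for _ in range(500):
    loss = model.nll(x, y)
    opt.zero_grad(); loss.backward(); opt.step()
```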
arXiv Detail & Related papers (2024-05-08T21:19:18Z)
- A Spectral Theory of Neural Prediction and Alignment [8.65717258105897]
We use a recent theoretical framework that relates the generalization error from regression to the spectral properties of the model and the target.
We test a large number of deep neural networks that predict visual cortical activity and show that there are multiple types of geometries that result in low neural prediction error as measured via regression.
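The spectral ingredients such a framework relates to regression error can be computed directly from a feature matrix: the eigenvalues of the feature covariance and the target's alignment with the eigenvectors. The snippet below uses synthetic stand-ins for model activations and neural responses (an assumption for illustration):

```python
# Sketch of the spectral quantities such analyses use; purely illustrative,
# not the paper's estimator or data.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 128))  # stand-in for model activations
target = features @ rng.normal(size=128) + 0.1 * rng.normal(size=1000)

cov = features.T @ features / len(features)
eigvals, eigvecs = np.linalg.eigh(cov)   # spectrum of the representation
# Alignment: how much target covariance falls along each eigendirection.
alignment = (eigvecs.T @ (features.T @ target / len(features))) ** 2
# Spectral theories express ridge-regression generalization error in terms of
# (eigvals, alignment); fast-decaying, well-aligned spectra predict well.
```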
arXiv Detail & Related papers (2023-09-22T12:24:06Z)
- Analysis of Numerical Integration in RNN-Based Residuals for Fault Diagnosis of Dynamic Systems [0.6999740786886536]
Data-driven modeling and machine learning are widely used to model the behavior of dynamic systems.
The paper includes a case study of a heavy-duty truck's after-treatment system to highlight the potential of these techniques for improving fault diagnosis performance.
arXiv Detail & Related papers (2023-05-08T12:48:18Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
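A minimal sketch of that surrogate-plus-autodiff loop: a network standing in for the simulator is differentiated with respect to its inputs (the physical parameters) to fit measured data. The untrained surrogate and random "measurements" below are assumptions for illustration:

```python
# Surrogate-plus-autodiff parameter recovery, sketched with stand-ins.
import torch
import torch.nn as nn

surrogate = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 100))
# ...assume the surrogate was trained on (parameters -> simulated spectrum) pairs...

measured = torch.randn(100)                  # stand-in for experimental data
params = torch.zeros(2, requires_grad=True)  # unknown Hamiltonian parameters
opt = torch.optim.Adam([params], lr=1e-2)
for _ in range(300):
    loss = ((surrogate(params) - measured) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()  # gradients flow to params
```

Because the differentiable model is built and trained once, this inner fitting loop is cheap enough to run in real time on incoming data, which is the workflow the summary describes.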
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- Dynamical Hyperspectral Unmixing with Variational Recurrent Neural Networks [25.051918587650636]
Multitemporal hyperspectral unmixing (MTHU) is a fundamental tool in the analysis of hyperspectral image sequences.
We propose an unsupervised MTHU algorithm based on variational recurrent neural networks.
arXiv Detail & Related papers (2023-03-19T04:51:34Z)
- A predictive physics-aware hybrid reduced order model for reacting flows [65.73506571113623]
A new hybrid predictive Reduced Order Model (ROM) is proposed to solve reacting flow problems.
The number of degrees of freedom is reduced from thousands of temporal points to a few POD modes with their corresponding temporal coefficients.
Two different deep learning architectures have been tested to predict the temporal coefficients.
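The POD step can be sketched as a truncated SVD of a snapshot matrix, which is the standard way to extract spatial modes and temporal coefficients; the matrix sizes and random snapshots below are assumptions, not the paper's flow data:

```python
# Minimal POD sketch via SVD: reduce many temporal points to a few modes.
import numpy as np

rng = np.random.default_rng(1)
snapshots = rng.normal(size=(2000, 500))  # (spatial dof, temporal points)
U, S, Vt = np.linalg.svd(snapshots, full_matrices=False)

k = 10                            # keep only the dominant POD modes
modes = U[:, :k]                  # spatial POD modes
coeffs = np.diag(S[:k]) @ Vt[:k]  # temporal coefficients, the ROM's unknowns
reconstruction = modes @ coeffs   # low-rank approximation of the snapshots
```

The deep learning architectures the summary mentions would then predict the k temporal coefficient trajectories instead of the full-order state.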
arXiv Detail & Related papers (2023-01-24T08:39:20Z)
- An advanced spatio-temporal convolutional recurrent neural network for storm surge predictions [73.4962254843935]
We study the capability of artificial neural network models to emulate storm surge based on the storm track/size/intensity history.
This study presents a neural network model that can predict storm surge, informed by a database of synthetic storm simulations.
arXiv Detail & Related papers (2022-04-18T23:42:18Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE to incorporate (fixed + random) mixed effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive evidence-based lower bounds for ME-NODE and develop efficient training algorithms.
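A bare-bones sketch of the mixed-effects idea in an ODE setting, with shared dynamics as the fixed effect and a per-subject shift as the random effect. Euler integration and point-estimated random effects are simplifying assumptions here; the paper infers the random effects variationally via its lower bound:

```python
# Mixed-effects neural ODE sketch: shared dynamics + per-subject random effect.
# All sizes, the Euler scheme, and point estimation are assumptions.
import torch
import torch.nn as nn

f = nn.Sequential(nn.Linear(2, 32), nn.Tanh(), nn.Linear(32, 2))  # fixed effects
n_subjects = 5
b = nn.Parameter(torch.zeros(n_subjects, 2))  # random effects, one per subject

def trajectory(h0, subject, steps=50, dt=0.1):
    h, out = h0, [h0]
    for _ in range(steps):
        # dh/dt = shared dynamics + subject-specific shift (the mixed effect)
        h = h + dt * (f(h) + b[subject])
        out.append(h)
    return torch.stack(out)

traj = trajectory(torch.zeros(2), subject=3)  # one panel member's latent path
```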
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Efficient hierarchical Bayesian inference for spatio-temporal regression models in neuroimaging [6.512092052306553]
Examples include M/EEG inverse problems, encoding neural models for task-based fMRI analyses, and temperature monitoring schemes.
We devise a novel flexible hierarchical Bayesian framework within which the spatio-temporal dynamics of model parameters and noise are modeled.
arXiv Detail & Related papers (2021-11-02T15:50:01Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but the precise role of the noise in its success is still unclear.
We show that multiplicative noise commonly arises in the parameter updates due to the variance of the stochastic gradients.
A detailed analysis describes the key factors, including step size and data, and shows that state-of-the-art neural network models exhibit similar heavy-tailed behavior.
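The multiplicative-noise mechanism can be illustrated with a Kesten-type recursion, in which random multiplicative factors alone generate power-law tails; the distributions and thresholds below are arbitrary assumptions:

```python
# Kesten-type recursion x_{t+1} = a_t * x_t + b_t: multiplicative noise yields
# heavy-tailed iterates even when a_t < 1 on average. Parameters are assumed.
import numpy as np

rng = np.random.default_rng(2)
n, steps = 10_000, 2_000
x = np.zeros(n)
for _ in range(steps):
    a = rng.normal(loc=0.7, scale=0.5, size=n)  # multiplicative (step-size) noise
    b = rng.normal(scale=0.1, size=n)           # additive (gradient) noise
    x = a * x + b

# Heavy tails show up as mass far beyond what a Gaussian would allow:
tail = np.mean(np.abs(x) > 10 * np.abs(x).std())
print(f"fraction beyond 10 sigma: {tail:.4f}")  # would be ~0 for Gaussian data
```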
arXiv Detail & Related papers (2020-06-11T09:58:01Z)
- Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvements obtained by the data augmentation completely eliminate the empirical regularization gains, making the difference in performance between neural ODE and neural SDE negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.