A heteroencoder architecture for prediction of failure locations in
porous metals using variational inference
- URL: http://arxiv.org/abs/2202.00078v1
- Date: Mon, 31 Jan 2022 20:26:53 GMT
- Title: A heteroencoder architecture for prediction of failure locations in
porous metals using variational inference
- Authors: Wyatt Bridgman, Xiaoxuan Zhang, Greg Teichert, Mohammad Khalil,
Krishna Garikipati, Reese Jones
- Abstract summary: We employ an encoder-decoder convolutional neural network to predict the failure locations of porous metal tension specimens.
The objective of predicting failure locations presents an extreme case of class imbalance since most of the material in the specimens does not fail.
We demonstrate that the resulting predicted variances are effective in ranking the locations that are most likely to fail in any given specimen.
- Score: 1.2722697496405462
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we employ an encoder-decoder convolutional neural network to
predict the failure locations of porous metal tension specimens based only on
their initial porosities. The process we model is complex, with a progression
from initial void nucleation, to saturation, and ultimately failure. The
objective of predicting failure locations presents an extreme case of class
imbalance since most of the material in the specimens does not fail. In response
to this challenge, we develop and demonstrate the effectiveness of data- and
loss-based regularization methods. Since there is considerable sensitivity of
the failure location to the particular configuration of voids, we also use
variational inference to provide uncertainties for the neural network
predictions. We connect the deterministic and Bayesian convolutional neural
networks at a theoretical level to explain how variational inference
regularizes the training and predictions. We demonstrate that the resulting
predicted variances are effective in ranking the locations that are most likely
to fail in any given specimen.
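As a rough illustration of the kind of pipeline the abstract describes (not the authors' implementation): an encoder-decoder CNN maps the initial porosity field to per-location failure logits, a heavily up-weighted loss on the rare failing class acts as the loss-based regularization, and repeated stochastic forward passes provide per-location variances for ranking likely failure sites. In the sketch below, MC dropout stands in for the paper's variational inference scheme; the layer sizes, class names, and pos_weight value are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PorosityToFailureNet(nn.Module):
    """Minimal encoder-decoder CNN: initial porosity field -> per-pixel failure logit.
    Dropout layers double as a crude variational approximation (MC dropout)."""
    def __init__(self, p_drop=0.1):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(), nn.Dropout2d(p_drop),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(), nn.Dropout2d(p_drop),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(), nn.Dropout2d(p_drop),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),  # failure logits
        )

    def forward(self, porosity):
        return self.decoder(self.encoder(porosity))

def imbalance_aware_loss(logits, failure_mask, pos_weight=50.0):
    """Weighted BCE: up-weights the rare failing class to counter extreme class imbalance."""
    w = torch.tensor([pos_weight], device=logits.device)
    return F.binary_cross_entropy_with_logits(logits, failure_mask, pos_weight=w)

def predict_with_uncertainty(model, porosity, n_samples=32):
    """Monte Carlo forward passes; the per-pixel variance can rank likely failure sites."""
    model.train()  # keep dropout active so each pass samples the approximate posterior
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(model(porosity)) for _ in range(n_samples)])
    return probs.mean(0), probs.var(0)
```

Ranking locations by the returned variance (or by the mean failure probability) mirrors the kind of per-specimen ranking the abstract refers to.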
Related papers
- Structured Radial Basis Function Network: Modelling Diversity for
Multiple Hypotheses Prediction [51.82628081279621]
Multi-modal regression is important for forecasting nonstationary processes or processes with a complex mixture of distributions.
A Structured Radial Basis Function Network is presented as an ensemble of multiple hypotheses predictors for regression problems.
It is proved that this structured model can efficiently interpolate this tessellation and approximate the multiple hypotheses target distribution.
arXiv Detail & Related papers (2023-09-02T01:27:53Z)
GIT: Detecting Uncertainty, Out-Of-Distribution and Adversarial Samples using Gradients and Invariance Transformations [77.34726150561087]
We propose a holistic approach for the detection of generalization errors in deep neural networks.
GIT combines gradient information with invariance transformations.
Our experiments demonstrate the superior performance of GIT compared to the state-of-the-art on a variety of network architectures.
arXiv Detail & Related papers (2023-07-05T22:04:38Z)
Single Model Uncertainty Estimation via Stochastic Data Centering [39.71621297447397]
We are interested in estimating the uncertainties of deep neural networks.
We present a striking new finding: an ensemble of neural networks with the same weight initialization, trained on datasets that are shifted by a constant bias, gives rise to slightly inconsistent trained models.
We show that $\Delta$-UQ's uncertainty estimates are superior to many of the current methods on a variety of benchmarks.
arXiv Detail & Related papers (2022-07-14T23:54:54Z)
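A minimal sketch of the anchoring idea this summary points at, under assumptions not in the abstract: a single network is conditioned on a randomly drawn anchor (the constant shift), and marginalizing over anchors at test time plays the role of an ensemble trained on shifted datasets. The class name, layer widths, and anchor-sampling scheme are hypothetical.

```python
import torch
import torch.nn as nn

class AnchoredRegressor(nn.Module):
    """One network that sees (anchor, input - anchor); different anchors emulate
    ensemble members trained on constantly shifted copies of the dataset."""
    def __init__(self, dim_in, dim_hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim_in, dim_hidden), nn.ReLU(),
            nn.Linear(dim_hidden, 1),
        )

    def forward(self, x, anchors):
        # During training, a fresh random anchor is drawn for every example.
        return self.net(torch.cat([anchors, x - anchors], dim=-1))

def predict_with_anchors(model, x, train_x, n_anchors=20):
    """Mean over anchors is the prediction; the spread over anchors is the uncertainty."""
    preds = []
    with torch.no_grad():
        for _ in range(n_anchors):
            idx = torch.randint(0, train_x.shape[0], (x.shape[0],))
            preds.append(model(x, train_x[idx]))
    preds = torch.stack(preds)
    return preds.mean(0), preds.std(0)
```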
NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We present a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
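The Nadaraya-Watson ingredient named in this summary can be sketched as a kernel-weighted vote over training labels; NUQ's actual uncertainty measures built on top of this estimate are more elaborate than the simple predictive entropy shown here. The Gaussian kernel, bandwidth, and function names are assumptions.

```python
import numpy as np

def nadaraya_watson_label_dist(x, train_x, train_y, n_classes, bandwidth=1.0):
    """Kernel estimate of p(y | x): each training label votes with a weight given by
    a Gaussian kernel on its distance to the query point x."""
    d2 = np.sum((train_x - x) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    counts = np.array([w[train_y == c].sum() for c in range(n_classes)])
    total = counts.sum()
    if total == 0.0:  # query far from all training points: fall back to a uniform guess
        return np.full(n_classes, 1.0 / n_classes)
    return counts / total

def predictive_entropy(probs, eps=1e-12):
    """One simple scalar uncertainty derived from the estimated label distribution."""
    return -np.sum(probs * np.log(probs + eps))
```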
Bayesian Neural Networks for Reversible Steganography [0.7614628596146599]
We propose to consider uncertainty in predictive models based upon a theoretical framework of Bayesian deep learning.
We approximate the posterior predictive distribution through Monte Carlo sampling with stochastic forward passes.
We show that predictive uncertainty can be disentangled into aleatoric and epistemic uncertainties, and that these quantities can be learnt in an unsupervised manner.
arXiv Detail & Related papers (2022-01-07T14:56:33Z)
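Assuming class-probability outputs collected from repeated stochastic forward passes, the standard entropy-based decomposition below is one way to realize the aleatoric/epistemic disentanglement mentioned above; the paper's own formulation for reversible steganography may differ.

```python
import numpy as np

def decompose_uncertainty(mc_probs, eps=1e-12):
    """mc_probs: shape (T, N, C), class probabilities from T stochastic forward passes
    (e.g. MC dropout). Returns per-sample total, aleatoric and epistemic uncertainty."""
    mean_p = mc_probs.mean(axis=0)                                   # posterior predictive
    total = -np.sum(mean_p * np.log(mean_p + eps), axis=-1)          # predictive entropy
    aleatoric = -np.sum(mc_probs * np.log(mc_probs + eps), axis=-1).mean(axis=0)
    epistemic = total - aleatoric                                    # mutual information
    return total, aleatoric, epistemic
```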
Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
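A hedged sketch of the entropy-raising idea: on inputs believed to lie where the model is unjustifiably overconfident (how such inputs are generated is not specified in this summary), an extra loss term pulls the predictive distribution toward the prior distribution of the labels. The function name, the KL form, and the weighting are assumptions.

```python
import torch
import torch.nn.functional as F

def prior_entropy_regularizer(model, suspect_x, label_prior):
    """KL between the label prior and the model's predictions on 'suspect' inputs,
    which raises the entropy of those predictions toward the prior."""
    log_probs = F.log_softmax(model(suspect_x), dim=-1)
    prior = label_prior.expand_as(log_probs)
    return F.kl_div(log_probs, prior, reduction="batchmean")

# Sketch of a total objective: ordinary cross-entropy on real data plus
# lambda * prior_entropy_regularizer(...) on the suspect (e.g. augmented) inputs.
```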
Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
Regularizing Class-wise Predictions via Self-knowledge Distillation [80.76254453115766]
We propose a new regularization method that penalizes the predictive distribution between similar samples.
This results in regularizing the dark knowledge (i.e., the knowledge on wrong predictions) of a single network.
Our experimental results on various image classification tasks demonstrate that this simple yet powerful method can significantly improve generalization.
arXiv Detail & Related papers (2020-03-31T06:03:51Z)
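A minimal sketch of penalizing the predictive distributions of similar (same-class) samples against each other with a single network; the pairing strategy, temperature, and loss weight are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def class_wise_self_distillation(model, x_a, x_b, temperature=4.0):
    """x_a and x_b hold samples drawn from the same classes. One view acts as a
    detached teacher, so the network distills its own 'dark knowledge'."""
    with torch.no_grad():
        teacher = F.softmax(model(x_b) / temperature, dim=-1)
    student_log = F.log_softmax(model(x_a) / temperature, dim=-1)
    return F.kl_div(student_log, teacher, reduction="batchmean") * temperature ** 2
```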