A Bayesian regularization-backpropagation neural network model for
peeling computations
- URL: http://arxiv.org/abs/2006.16409v3
- Date: Mon, 24 Jan 2022 12:54:00 GMT
- Title: A Bayesian regularization-backpropagation neural network model for
peeling computations
- Authors: Saipraneeth Gouravaraju, Jyotindra Narayan, Roger A. Sauer, Sachin
Singh Gautam
- Abstract summary: The input data is taken from finite element (FE) peeling results.
The neural network is trained with 75% of the FE dataset.
It is shown that the BR-BPNN model in conjunction with the k-fold technique has significant potential to estimate the peeling behavior.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A Bayesian regularization-backpropagation neural network (BR-BPNN) model
is employed to predict some aspects of gecko spatula peeling, viz. the
variation of the maximum normal and tangential pull-off forces and the
resultant force angle at detachment with the peeling angle. K-fold cross
validation is used to improve the effectiveness of the model. The input data
are taken from finite element (FE) peeling results. The neural network is
trained with 75% of the FE dataset; the remaining 25% is used to predict the
peeling behavior. The training performance is evaluated for every change in
the number of hidden-layer neurons to determine the optimal network structure.
The relative error between the predicted and FE results is calculated to allow
a clear comparison. It is shown that the BR-BPNN model in conjunction with the
k-fold technique has significant potential to estimate the peeling behavior.
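The workflow described in the abstract maps onto a short script. Below is a minimal sketch, not the authors' code: the FE peeling data are replaced by a synthetic placeholder, and scikit-learn's MLPRegressor with a fixed L2 penalty stands in for true Bayesian regularization (which adapts the penalty weights via the evidence framework, as in MATLAB's trainbr). The 75/25 split, k-fold cross validation, hidden-neuron sweep, and relative-error comparison follow the steps stated above; all variable names and data are hypothetical.
```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split
from sklearn.neural_network import MLPRegressor

# Hypothetical stand-in for the FE dataset: peeling angle (deg) as input,
# a placeholder pull-off force curve as target.
peel_angle = np.linspace(10.0, 170.0, 100).reshape(-1, 1)
pull_off_force = np.sin(np.radians(peel_angle)).ravel()

# Train with 75% of the (stand-in) FE dataset, hold out 25% for prediction.
X_train, X_test, y_train, y_test = train_test_split(
    peel_angle, pull_off_force, train_size=0.75, random_state=0)

# Sweep the number of hidden-layer neurons; score each candidate network
# with k-fold cross validation on the training set.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
best_n, best_cv_err = None, np.inf
for n_hidden in range(2, 21, 2):
    fold_errs = []
    for tr, va in kf.split(X_train):
        net = MLPRegressor(hidden_layer_sizes=(n_hidden,), alpha=1e-2,
                           solver="lbfgs", max_iter=5000, random_state=0)
        net.fit(X_train[tr], y_train[tr])
        fold_errs.append(np.mean((net.predict(X_train[va]) - y_train[va]) ** 2))
    cv_err = float(np.mean(fold_errs))
    if cv_err < best_cv_err:
        best_n, best_cv_err = n_hidden, cv_err

# Retrain the selected architecture and compare against the held-out
# FE results via the relative error.
net = MLPRegressor(hidden_layer_sizes=(best_n,), alpha=1e-2,
                   solver="lbfgs", max_iter=5000, random_state=0).fit(X_train, y_train)
rel_err = np.abs(net.predict(X_test) - y_test) / np.abs(y_test)
print(f"hidden neurons: {best_n}, mean relative error: {rel_err.mean():.3f}")
```
In the paper the targets are the maximum normal and tangential pull-off forces and the resultant force angle at detachment; here a single placeholder target keeps the sketch compact.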
Related papers
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Convolutional neural networks for valid and efficient causal inference [1.5469452301122177]
Convolutional neural networks (CNN) have been successful in machine learning applications.
We consider the use of CNN to fit nuisance models in semiparametric estimation of the average causal effect of a treatment.
We give results on a study of the effect of early retirement on hospitalization using data covering the whole Swedish population.
arXiv Detail & Related papers (2023-01-27T14:16:55Z) - Bayesian Layer Graph Convolutional Network for Hyperspectral Image
Classification [24.91896527342631]
Graph convolutional network (GCN) based models have shown impressive performance.
Deep learning frameworks based on point estimation suffer from low generalization and an inability to quantify the uncertainty of the classification results.
In this paper, we propose a Bayesian layer with Bayesian idea as an insertion layer into point estimation based neural networks.
A Generative Adversarial Network (GAN) is built to solve the sample imbalance problem of the HSI dataset.
arXiv Detail & Related papers (2022-11-14T12:56:56Z) - Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z) - Asymptotic Properties for Bayesian Neural Network in Besov Space [1.90365714903665]
We show that a Bayesian neural network with a spike-and-slab prior attains posterior consistency at a nearly minimax convergence rate when the true regression function is in a Besov space.
We propose a practical neural network with guaranteed properties.
arXiv Detail & Related papers (2022-06-01T05:47:06Z) - Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity
on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z) - Kalman Bayesian Neural Networks for Closed-form Online Learning [5.220940151628734]
We propose a novel approach for BNN learning via closed-form Bayesian inference.
The calculation of the predictive distribution of the output and the update of the weight distribution are treated as Bayesian filtering and smoothing problems.
This allows closed-form expressions for training the network's parameters in a sequential/online fashion without gradient descent.
arXiv Detail & Related papers (2021-10-03T07:29:57Z) - Towards an Understanding of Benign Overfitting in Neural Networks [104.2956323934544]
Modern machine learning models often employ a huge number of parameters and are typically optimized to have zero training loss.
We examine how these benign overfitting phenomena occur in a two-layer neural network setting.
We show that it is possible for the two-layer ReLU network interpolator to achieve a near minimax-optimal learning rate.
arXiv Detail & Related papers (2021-06-06T19:08:53Z) - Sampling-free Variational Inference for Neural Networks with
Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z) - A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
arXiv Detail & Related papers (2020-10-27T17:56:14Z) - Bayesian Neural Network via Stochastic Gradient Descent [0.0]
We show how gradient estimation techniques can be applied to Bayesian neural networks.
Our approach considerably outperforms the previous state-of-the-art approaches for regression using Bayesian neural networks.
arXiv Detail & Related papers (2020-06-04T18:33:59Z)