Bayesian Neural Networks at Scale: A Performance Analysis and Pruning Study
- URL: http://arxiv.org/abs/2005.11619v2
- Date: Thu, 28 May 2020 23:18:54 GMT
- Title: Bayesian Neural Networks at Scale: A Performance Analysis and Pruning Study
- Authors: Himanshu Sharma and Elise Jennings
- Abstract summary: This work explores the use of high performance computing with distributed training to address the challenges of training BNNs at scale.
We present a performance and scalability comparison of training the VGG-16 and ResNet-18 models on a Cray-XC40 cluster.
- Score: 2.3605348648054463
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Bayesian neural networks (BNNs) are a promising method for obtaining
statistical uncertainties on neural network predictions, but they incur a higher
computational overhead that can limit their practical usage. This work
explores the use of high performance computing with distributed training to
address the challenges of training BNNs at scale. We present a performance and
scalability comparison of training the VGG-16 and ResNet-18 models on a
Cray-XC40 cluster. We demonstrate that network pruning can speed up inference
without accuracy loss and provide an open source software package,
BPrune, to automate this pruning. For certain models we find that pruning
up to 80% of the network results in only a 7.0% loss in accuracy. With the
development of new hardware accelerators for Deep Learning, BNNs are of
considerable interest for benchmarking performance. This analysis of training a
BNN at scale outlines the limitations and benefits compared to a conventional
neural network.
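The abstract does not spell out the pruning criterion, but a common choice for mean-field BNNs is to rank each weight by the signal-to-noise ratio of its posterior, |mu|/sigma, and mask out the lowest-ranked fraction at inference time. The NumPy sketch below illustrates that idea under stated assumptions; it is not the BPrune interface, and the function name snr_prune_mask, the quantile threshold rule, and the 80% sparsity setting are all hypothetical.

```python
# Illustrative sketch (not the BPrune API): signal-to-noise-ratio pruning of a
# mean-field Bayesian layer. Each weight has a posterior mean `mu` and standard
# deviation `sigma`; weights whose |mu|/sigma falls below a threshold chosen to
# hit a target sparsity are masked out at inference time.
import numpy as np

def snr_prune_mask(mu: np.ndarray, sigma: np.ndarray, sparsity: float) -> np.ndarray:
    """Return a 0/1 mask that zeroes the `sparsity` fraction of weights with the lowest SNR."""
    snr = np.abs(mu) / (sigma + 1e-12)      # signal-to-noise ratio per weight
    threshold = np.quantile(snr, sparsity)  # cut-off so ~`sparsity` of the weights fall below it
    return (snr > threshold).astype(mu.dtype)

# Toy usage: prune 80% of a layer's weights, the most aggressive setting quoted in the abstract.
rng = np.random.default_rng(0)
mu = rng.normal(size=(256, 128))
sigma = rng.uniform(0.01, 0.5, size=(256, 128))
mask = snr_prune_mask(mu, sigma, sparsity=0.8)
pruned_mu = mu * mask                       # masked posterior means used at inference
print(f"kept {mask.mean():.1%} of weights")
```

Because only a mask is applied, the surviving weights keep their learned posterior means and variances in this sketch, so uncertainty estimates remain available after pruning.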
Related papers
- Auto-Train-Once: Controller Network Guided Automatic Network Pruning from Scratch [72.26822499434446]
Auto-Train-Once (ATO) is an innovative network pruning algorithm designed to automatically reduce the computational and storage costs of DNNs.
We provide a comprehensive convergence analysis as well as extensive experiments, and the results show that our approach achieves state-of-the-art performance across various model architectures.
arXiv Detail & Related papers (2024-03-21T02:33:37Z)
- Bayesian Inference Accelerator for Spiking Neural Networks [3.145754107337963]
Spiking neural networks (SNNs) have the potential to reduce computational area and power.
In this work, we demonstrate an optimization framework for developing and implementing efficient Bayesian SNNs in hardware.
We demonstrate accuracies comparable to Bayesian binary networks with full-precision Bernoulli parameters, while requiring up to 25x fewer spikes.
arXiv Detail & Related papers (2024-01-27T16:27:19Z)
- CorrectNet: Robustness Enhancement of Analog In-Memory Computing for Neural Networks by Error Suppression and Compensation [4.570841222958966]
We propose a framework to enhance the robustness of neural networks under variations and noise.
We show that the inference accuracy of neural networks, which can fall to as low as 1.69% under variations and noise, can be recovered.
arXiv Detail & Related papers (2022-11-27T19:13:33Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy-efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational costs in model training for performance prediction.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks by calibrating the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.5-15% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
- Encoding the latent posterior of Bayesian Neural Networks for uncertainty quantification [10.727102755903616]
We aim for efficient deep BNNs amenable to complex computer vision architectures.
We achieve this by leveraging variational autoencoders (VAEs) to learn the interaction and the latent distribution of the parameters at each network layer.
Our approach, Latent-Posterior BNN (LP-BNN), is compatible with the recent BatchEnsemble method, leading to highly efficient (in terms of computation and memory during both training and testing) ensembles.
arXiv Detail & Related papers (2020-12-04T19:50:09Z)
- Growing Artificial Neural Networks [0.9475982252982436]
Pruning is a legitimate method for reducing the size of a neural network to fit within low size, weight, and power (SWaP) hardware.
We propose an algorithm, Artificial Neurogenesis (ANG), that grows rather than prunes the network.
ANG accomplishes this by using the training data to determine critical connections between layers before the actual training takes place.
arXiv Detail & Related papers (2020-06-11T17:25:51Z)