TensorBNN: Bayesian Inference for Neural Networks using Tensorflow
- URL: http://arxiv.org/abs/2009.14393v3
- Date: Mon, 11 Jul 2022 03:35:05 GMT
- Title: TensorBNN: Bayesian Inference for Neural Networks using Tensorflow
- Authors: Braden Kronheim, Michelle Kuchera, and Harrison Prosper
- Abstract summary: TensorBNN is a new package based on TensorFlow that implements Bayesian inference for neural network models.
The posterior density of neural network model parameters is represented as a point cloud sampled using Hamiltonian Monte Carlo.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: TensorBNN is a new package based on TensorFlow that implements Bayesian
inference for modern neural network models. The posterior density of neural
network model parameters is represented as a point cloud sampled using
Hamiltonian Monte Carlo. The TensorBNN package leverages TensorFlow's
architecture and training features as well as its ability to use modern
graphics processing units (GPU) in both the training and prediction stages.
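To make the idea concrete, here is a minimal sketch of HMC sampling for a small Bayesian neural network using TensorFlow Probability. This illustrates the technique, not TensorBNN's actual API; the toy data, network size, priors, and step sizes are all assumptions.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions

# Toy regression data (assumption, just to make the sketch runnable).
x = tf.random.normal([64, 1])
y = tf.sin(3.0 * x) + 0.1 * tf.random.normal([64, 1])

def forward(w1, b1, w2, b2):
    h = tf.tanh(x @ w1 + b1)            # one hidden layer, 16 units
    return h @ w2 + b2

def target_log_prob(w1, b1, w2, b2):
    # Standard-normal prior on every parameter plus Gaussian likelihood.
    prior = sum(tf.reduce_sum(tfd.Normal(0.0, 1.0).log_prob(p))
                for p in (w1, b1, w2, b2))
    lik = tf.reduce_sum(tfd.Normal(forward(w1, b1, w2, b2), 0.1).log_prob(y))
    return prior + lik

kernel = tfp.mcmc.HamiltonianMonteCarlo(
    target_log_prob_fn=target_log_prob,
    step_size=0.01,
    num_leapfrog_steps=10)

# Each draw below is one point of the posterior "point cloud" over weights.
samples = tfp.mcmc.sample_chain(
    num_results=500,
    num_burnin_steps=500,
    current_state=[tf.random.normal([1, 16]), tf.zeros([16]),
                   tf.random.normal([16, 1]), tf.zeros([1])],
    kernel=kernel,
    trace_fn=None)
```

Predictions then average the network's output over the sampled weight configurations; the spread of the point cloud is what provides the uncertainty estimates.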
Related papers
- On Feynman--Kac training of partial Bayesian neural networks [1.6474447977095783]
Partial Bayesian neural networks (pBNNs) were shown to perform competitively with full Bayesian neural networks.
We propose an efficient sampling-based training strategy, wherein the training of a pBNN is formulated as simulating a Feynman--Kac model.
We show that our proposed training scheme outperforms the state of the art in terms of predictive performance.
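As an illustration of the Feynman--Kac view, here is a generic sequential Monte Carlo sketch for a single stochastic weight. This is not the paper's algorithm; in a real pBNN the remaining deterministic weights would be trained by gradient descent alongside, and all sizes and noise levels below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
P = 256                                    # number of particles (illustrative)
theta_true = 1.5                           # "true" stochastic weight
particles = rng.normal(size=P)             # prior draws for that weight
logw = np.zeros(P)                         # log importance weights

for t in range(20):                        # stream of mini-batches
    batch = theta_true + 0.5 * rng.normal(size=32)
    # Feynman--Kac potential: reweight each particle by the batch likelihood.
    logw += np.array([-np.sum((batch - th) ** 2) / (2 * 0.25)
                      for th in particles])
    w = np.exp(logw - logw.max())
    w /= w.sum()
    if 1.0 / np.sum(w ** 2) < P / 2:       # resample when ESS degenerates
        idx = rng.choice(P, size=P, p=w)
        particles = particles[idx] + 0.05 * rng.normal(size=P)  # jitter move
        logw = np.zeros(P)

w = np.exp(logw - logw.max()); w /= w.sum()
print("posterior mean of the weight:", float(np.sum(w * particles)))
```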
arXiv Detail & Related papers (2023-10-30T15:03:15Z)
- BayesFlow: Amortized Bayesian Workflows With Neural Networks [0.0]
This manuscript introduces the Python library BayesFlow for simulation-based training of established neural network architectures for amortized data compression and inference.
Amortized Bayesian inference, as implemented in BayesFlow, enables users to train custom neural networks on model simulations and re-use these networks for any subsequent application of the models.
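A hedged sketch of the amortization idea in plain Keras (this is not the BayesFlow API, and it trains only a point summary rather than the full posterior networks BayesFlow provides): simulate pairs (theta, x) from the prior and simulator, fit a network once, then reuse it for instant inference.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)

# Simulation phase: draw parameters from the prior, run the simulator.
theta = rng.normal(size=(10000, 1))              # prior draws
x = theta + 0.3 * rng.normal(size=(10000, 5))    # 5 noisy observations each

# Amortization phase: fit a network once on the simulations.
net = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                    # point summary of theta
])
net.compile(optimizer="adam", loss="mse")
net.fit(x, theta, epochs=5, verbose=0)

# Inference on any new dataset is now a single forward pass.
x_new = 0.8 + 0.3 * rng.normal(size=(1, 5))
print("estimated theta:", float(net(x_new)[0, 0]))
```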
arXiv Detail & Related papers (2023-06-28T08:41:49Z)
- Set-based Neural Network Encoding Without Weight Tying [91.37161634310819]
We propose a neural network weight encoding method for network property prediction.
Our approach is capable of encoding neural networks in a model zoo of mixed architectures.
We introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture.
arXiv Detail & Related papers (2023-05-26T04:34:28Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
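For orientation, a minimal snnTorch usage sketch of a leaky integrate-and-fire neuron unrolled over time; the IPU-optimized release is assumed to expose the same layer API, and all shapes here are illustrative.

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)            # leaky integrate-and-fire neuron
mem = lif.init_leaky()               # initialise the membrane potential

spikes = []
for step in range(10):               # unroll the network over 10 time steps
    cur = torch.rand(1, 4)           # illustrative input current
    spk, mem = lif(cur, mem)         # spike output and updated membrane state
    spikes.append(spk)

print("total spikes:", int(torch.stack(spikes).sum()))
```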
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Neural Capacitance: A New Perspective of Neural Network Selection via Edge Dynamics [85.31710759801705]
Current practice incurs expensive computational costs, since performance prediction requires model training.
We propose a novel framework for neural network selection by analyzing the governing dynamics over synaptic connections (edges) during training.
Our framework is built on the fact that back-propagation during neural network training is equivalent to the dynamical evolution of synaptic connections.
arXiv Detail & Related papers (2022-01-11T20:53:15Z)
- Kalman Bayesian Neural Networks for Closed-form Online Learning [5.220940151628734]
We propose a novel approach for BNN learning via closed-form Bayesian inference.
The calculation of the predictive distribution of the output and the update of the weight distribution are treated as Bayesian filtering and smoothing problems.
This allows closed-form expressions for training the network's parameters in a sequential/online fashion without gradient descent.
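The filtering view is easiest to see for a single linear layer with Gaussian weight beliefs, where the closed-form update is exactly the Kalman filter step; the following sketch is that special case, not the paper's full algorithm, and all sizes and noise levels are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
mu = np.zeros(d)                     # belief mean over the layer's weights
P = np.eye(d)                        # belief covariance
r = 0.1                              # observation-noise variance (assumed)
w_true = np.array([1.0, -2.0, 0.5])  # weights we are trying to recover

for _ in range(100):                 # one closed-form update per sample
    h = rng.normal(size=d)           # features feeding the linear layer
    obs = h @ w_true + np.sqrt(r) * rng.normal()
    s = h @ P @ h + r                # predictive variance of the output
    k = P @ h / s                    # Kalman gain
    mu = mu + k * (obs - h @ mu)     # posterior mean, no gradient descent
    P = P - np.outer(k, h @ P)       # posterior covariance

print(np.round(mu, 2))               # close to w_true
```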
arXiv Detail & Related papers (2021-10-03T07:29:57Z)
- Implementing graph neural networks with TensorFlow-Keras [1.6114012813668934]
Graph neural networks are a versatile machine learning architecture that received a lot of attention recently.
In this technical report, we present an implementation of convolution and pooling layers for TensorFlow-Keras models.
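As a rough illustration of the kind of layer such a package provides (a generic Keras graph convolution written for this summary, not the package's actual API):

```python
import tensorflow as tf

class GraphConv(tf.keras.layers.Layer):
    """Sum-aggregation graph convolution; real packages typically
    normalise the adjacency matrix first."""
    def __init__(self, units):
        super().__init__()
        self.dense = tf.keras.layers.Dense(units, activation="relu")

    def call(self, inputs):
        features, adj = inputs          # [N, F] node features, [N, N] adjacency
        agg = tf.matmul(adj, features)  # aggregate neighbour features
        return self.dense(agg)          # transform the aggregated features

# Toy usage: 4 nodes in a ring, 8 input features (illustrative only).
feats = tf.random.normal([4, 8])
adj = tf.constant([[0, 1, 0, 1], [1, 0, 1, 0],
                   [0, 1, 0, 1], [1, 0, 1, 0]], tf.float32)
print(GraphConv(16)((feats, adj)).shape)  # (4, 16)
```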
arXiv Detail & Related papers (2021-03-07T10:46:02Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that networks trained by the proposed method can be used for structural optimization.
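A hedged sketch of the decoupling idea (the names and wiring are assumptions, not the paper's code): a small critic predicts the downstream loss from a layer group's output, so the group can update from the critic's gradient without waiting for backpropagation through the rest of the network. Training of the upper network itself is omitted here.

```python
import tensorflow as tf

group = tf.keras.layers.Dense(16, activation="relu")  # lower layer group
head = tf.keras.layers.Dense(1)                       # rest of the network
critic = tf.keras.layers.Dense(1)                     # local critic for the group
opt_g = tf.keras.optimizers.Adam(1e-2)
opt_c = tf.keras.optimizers.Adam(1e-2)

x = tf.random.normal([32, 8])
y = tf.random.normal([32, 1])

with tf.GradientTape(persistent=True) as tape:
    h = group(x)
    est = tf.reduce_mean(critic(h))                   # critic's loss estimate
    true_loss = tf.reduce_mean((head(tf.stop_gradient(h)) - y) ** 2)
    critic_loss = (est - tf.stop_gradient(true_loss)) ** 2

# The layer group updates from the critic alone: no backprop from `head`.
opt_g.apply_gradients(zip(tape.gradient(est, group.trainable_variables),
                          group.trainable_variables))
# The critic learns to track the true downstream loss.
opt_c.apply_gradients(zip(tape.gradient(critic_loss, critic.trainable_variables),
                          critic.trainable_variables))
del tape
```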
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
- Dynamic Bayesian Neural Networks [2.28438857884398]
We define a neural network that evolves in time, called a Hidden Markov neural network.
Weights of a feed-forward neural network are modelled with the hidden states of a Hidden Markov model.
A filtering algorithm is used to learn a variational approximation to the time-evolving posterior over the weights.
arXiv Detail & Related papers (2020-04-15T09:18:18Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show that a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates for the aforementioned type of CNNs for the Barron and Hölder classes.
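For context, the classical benchmark such rates are measured against (a standard nonparametric-statistics fact, stated here for orientation rather than taken from the paper): the minimax squared-error rate for estimating a β-Hölder function on [0,1]^d from n samples.

```latex
% Minimax squared-error benchmark for beta-Holder functions on [0,1]^d:
% no estimator can beat this rate uniformly over the class.
\[
  \inf_{\hat f}\ \sup_{f \in \mathcal{H}^{\beta}([0,1]^d)}
  \mathbb{E}\,\lVert \hat f - f \rVert_{2}^{2}
  \;\asymp\; n^{-\frac{2\beta}{2\beta + d}}
\]
```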
arXiv Detail & Related papers (2019-03-24T19:42:39Z)