Global inducing point variational posteriors for Bayesian neural
networks and deep Gaussian processes
- URL: http://arxiv.org/abs/2005.08140v5
- Date: Tue, 22 Jun 2021 13:39:01 GMT
- Title: Global inducing point variational posteriors for Bayesian neural
networks and deep Gaussian processes
- Authors: Sebastian W. Ober, Laurence Aitchison
- Abstract summary: We develop a correlated approximate posterior over the weights at all layers in a Bayesian neural network.
We extend this approach to deep Gaussian processes, unifying inference in the two model classes.
- Score: 38.79834570417554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the optimal approximate posterior over the top-layer weights in a
Bayesian neural network for regression, and show that it exhibits strong
dependencies on the lower-layer weights. We adapt this result to develop a
correlated approximate posterior over the weights at all layers in a Bayesian
neural network. We extend this approach to deep Gaussian processes, unifying
inference in the two model classes. Our approximate posterior uses learned
"global" inducing points, which are defined only at the input layer and
propagated through the network to obtain inducing inputs at subsequent layers.
By contrast, standard, "local", inducing point methods from the deep Gaussian
process literature optimise a separate set of inducing inputs at every layer,
and thus do not model correlations across layers. Our method gives
state-of-the-art performance for a variational Bayesian method, without data
augmentation or tempering, on CIFAR-10 of 86.7%, which is comparable to SGMCMC
without tempering but with data augmentation (88% in Wenzel et al. 2020).
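To make the propagation scheme concrete, here is a minimal NumPy sketch of the idea described in the abstract: learned inducing inputs live only at the input layer, and each layer's approximate posterior is conditioned on the inducing inputs propagated up to that layer. The conjugate linear-regression form of the per-layer posterior, the function names, and the toy shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(U, V, prior_var=1.0, noise_var=1e-2):
    """Sample a weight matrix from the Gaussian posterior of a Bayesian linear
    regression mapping propagated inducing inputs U (M x D_in) to learned
    pseudo-outputs V (M x D_out).  Illustrative, not the paper's parameterisation."""
    D_in = U.shape[1]
    precision = U.T @ U / noise_var + np.eye(D_in) / prior_var
    cov = np.linalg.inv(precision)
    mean = cov @ U.T @ V / noise_var                 # (D_in, D_out)
    chol = np.linalg.cholesky(cov)
    return mean + chol @ rng.standard_normal(mean.shape)

def forward(X, Z, pseudo_outputs):
    """Propagate data X and the *global* inducing inputs Z jointly through the
    network.  Z is defined only at the input layer; the inducing inputs of
    deeper layers are the previous layer's outputs at the propagated points."""
    H, U = X, Z
    for layer, V in enumerate(pseudo_outputs):
        W = sample_weights(U, V)             # posterior depends on propagated U
        H, U = H @ W, U @ W                  # data and inducing points share W
        if layer < len(pseudo_outputs) - 1:  # nonlinearity on hidden layers only
            H, U = np.tanh(H), np.tanh(U)
    return H

# Toy shapes: 5 global inducing points, 2-d inputs, hidden width 16, scalar output.
X = rng.standard_normal((32, 2))
Z = rng.standard_normal((5, 2))              # learned variational parameters in practice
pseudo_outputs = [rng.standard_normal((5, 16)), rng.standard_normal((5, 1))]
print(forward(X, Z, pseudo_outputs).shape)   # (32, 1)
```

Because each layer's weight sample is conditioned on inducing inputs produced by the earlier sampled layers, the approximate posterior is correlated across layers, which is exactly what the "local" schemes mentioned in the abstract do not capture.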
Related papers
- BALI: Learning Neural Networks via Bayesian Layerwise Inference [6.7819070167076045]
We introduce a new method for learning Bayesian neural networks, treating them as a stack of multivariate Bayesian linear regression models.
The main idea is to infer the layerwise posterior exactly if we know the target outputs of each layer.
We define these pseudo-targets as the layer outputs from the forward pass, updated by the backpropagated gradients of the objective function.
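As a rough illustration of the layerwise scheme (hypothetical function names and a literal reading of the summary; the paper's exact pseudo-target update may differ), each layer's posterior is the conjugate posterior of a multivariate Bayesian linear regression onto gradient-corrected pseudo-targets:

```python
import numpy as np

def pseudo_targets(H, grad_H, step=0.1):
    """Pseudo-targets for one layer: the forward-pass outputs H nudged along the
    negative backpropagated gradient of the objective at that layer."""
    return H - step * grad_H

def layerwise_posterior(A, T, prior_var=1.0, noise_var=1e-2):
    """Exact Gaussian posterior (mean, covariance) of a multivariate Bayesian
    linear regression from layer inputs A (N x D_in) to pseudo-targets T (N x D_out)."""
    D_in = A.shape[1]
    cov = np.linalg.inv(A.T @ A / noise_var + np.eye(D_in) / prior_var)
    mean = cov @ A.T @ T / noise_var
    return mean, cov

# Toy usage: layer inputs, its forward outputs, and the gradient w.r.t. those outputs.
rng = np.random.default_rng(0)
A, H, grad_H = rng.standard_normal((16, 3)), rng.standard_normal((16, 2)), rng.standard_normal((16, 2))
mean, cov = layerwise_posterior(A, pseudo_targets(H, grad_H))
print(mean.shape, cov.shape)    # (3, 2) (3, 3)
```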
arXiv Detail & Related papers (2024-11-18T22:18:34Z)
- Posterior and variational inference for deep neural networks with heavy-tailed weights [0.0]
We consider deep neural networks in a Bayesian framework with a prior distribution sampling the network weights at random.
We show that the corresponding posterior distribution achieves near-optimal minimax contraction rates.
We also provide variational Bayes counterparts of the results, that show that mean-field variational approximations still benefit from near-optimal theoretical support.
arXiv Detail & Related papers (2024-06-05T15:24:20Z)
- Scalable Bayesian Inference in the Era of Deep Learning: From Gaussian Processes to Deep Neural Networks [0.5827521884806072]
Large neural networks trained on large datasets have become the dominant paradigm in machine learning.
This thesis develops scalable methods to equip neural networks with model uncertainty.
arXiv Detail & Related papers (2024-04-29T23:38:58Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- WLD-Reg: A Data-dependent Within-layer Diversity Regularizer [98.78384185493624]
Neural networks are composed of multiple layers arranged in a hierarchical structure and jointly trained with gradient-based optimization.
We propose to complement this traditional 'between-layer' feedback with additional 'within-layer' feedback to encourage the diversity of the activations within the same layer.
We present an extensive empirical study confirming that the proposed approach enhances the performance of several state-of-the-art neural network models in multiple tasks.
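The sketch below shows one generic form a within-layer diversity penalty can take: it penalises pairwise correlation between units' activations over a batch. It is an illustrative stand-in, not the data-dependent regulariser actually proposed in WLD-Reg.

```python
import numpy as np

def within_layer_diversity_penalty(H, eps=1e-8):
    """Penalise correlated activations between units of one layer.
    H has shape (batch, units)."""
    Hc = H - H.mean(axis=0, keepdims=True)                        # centre each unit
    Hn = Hc / (np.linalg.norm(Hc, axis=0, keepdims=True) + eps)   # unit-normalise columns
    gram = Hn.T @ Hn                                              # pairwise unit correlations
    off_diag = gram - np.diag(np.diag(gram))
    return (off_diag ** 2).sum() / H.shape[1] ** 2

# Added to the task loss, e.g. loss = task_loss + lam * within_layer_diversity_penalty(hidden)
print(within_layer_diversity_penalty(np.random.default_rng(0).standard_normal((64, 32))))
```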
arXiv Detail & Related papers (2023-01-03T20:57:22Z)
- BiTAT: Neural Network Binarization with Task-dependent Aggregated Transformation [116.26521375592759]
Quantization aims to transform high-precision weights and activations of a given neural network into low-precision weights/activations for reduced memory usage and computation.
Extreme quantization (1-bit weight/1-bit activations) of compactly-designed backbone architectures results in severe performance degeneration.
This paper proposes a novel Quantization-Aware Training (QAT) method that can effectively alleviate performance degeneration.
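For context, the following sketch shows what extreme (1-bit) weight quantization means in its simplest form, a sign function with a per-row scale; BiTAT's task-dependent aggregated transformation is a more elaborate scheme than this.

```python
import numpy as np

def binarize(W):
    """1-bit weight quantization: sign of the weights with a per-output-row scale
    set to the mean absolute value (the closed-form L2-optimal scale for sign codes)."""
    alpha = np.abs(W).mean(axis=1, keepdims=True)
    return alpha * np.sign(W)

W = np.random.default_rng(0).standard_normal((4, 8))
print(np.abs(W - binarize(W)).mean())   # reconstruction error of the 1-bit approximation
```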
arXiv Detail & Related papers (2022-07-04T13:25:49Z)
- Optimization-Based Separations for Neural Networks [57.875347246373956]
We show that gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations.
This is the first optimization-based separation result where the approximation benefits of the stronger architecture provably manifest in practice.
arXiv Detail & Related papers (2021-12-04T18:07:47Z)
- Scale Mixtures of Neural Network Gaussian Processes [22.07524388784668]
We introduce a scale mixture of $\mathrm{NNGP}$s by placing a prior on the scale of the last-layer parameters.
We show that with certain scale priors we obtain heavy-tailed processes, and that we recover Student's $t$ processes in the case of inverse-gamma priors.
We further analyse neural networks under our prior setting trained with gradient descent, and obtain results similar to those for the $\mathrm{NNGP}$.
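The scale-mixture construction can be illustrated in one dimension: drawing a variance from an inverse-gamma prior and then a Gaussian with that variance yields a heavy-tailed (Student's $t$) marginal. The sketch below is only this 1-d analogue, not the NNGP machinery itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw a variance from an inverse-gamma prior, then a Gaussian with that variance.
# Marginally this is a Student's t distribution with 2a degrees of freedom.
a, b, n = 2.0, 2.0, 200_000
sigma2 = 1.0 / rng.gamma(shape=a, scale=1.0 / b, size=n)   # inverse-gamma(a, b) samples
x = rng.normal(0.0, np.sqrt(sigma2))                       # scale-mixture samples

# The mixture has much heavier tails than a Gaussian of matched variance.
g = rng.normal(0.0, x.std(), size=n)
threshold = 4 * x.std()
print(np.mean(np.abs(x) > threshold), np.mean(np.abs(g) > threshold))
```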
arXiv Detail & Related papers (2021-07-03T11:02:18Z)
- Infinitely Deep Bayesian Neural Networks with Stochastic Differential Equations [37.02511585732081]
We perform scalable approximate inference in a recently-proposed family of continuous-depth neural networks.
We demonstrate gradient-based variational inference, producing arbitrarily-flexible approximate posteriors.
This approach further inherits the memory-efficient training and tunable precision of neural ODEs.
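As a loose illustration of a continuous-depth network (not the paper's model or its variational inference scheme), the hidden state can be evolved by Euler-Maruyama integration of an SDE whose drift is a small neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

def neural_sde_forward(h0, drift_W, diffusion=0.1, t1=1.0, steps=50):
    """Euler-Maruyama integration of a toy continuous-depth network:
    dh = tanh(h @ drift_W) dt + diffusion * dB_t."""
    h, dt = h0, t1 / steps
    for _ in range(steps):
        drift = np.tanh(h @ drift_W)                        # neural drift f(h)
        noise = rng.standard_normal(h.shape) * np.sqrt(dt)  # Brownian increment
        h = h + drift * dt + diffusion * noise
    return h

h0 = rng.standard_normal((8, 4))              # batch of 8 four-dimensional hidden states
drift_W = 0.5 * rng.standard_normal((4, 4))
print(neural_sde_forward(h0, drift_W).shape)  # (8, 4)
```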
arXiv Detail & Related papers (2021-02-12T14:48:58Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient descent combined with nonconvexity renders learning susceptible to initialization problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.