Hands-on Bayesian Neural Networks -- a Tutorial for Deep Learning Users
- URL: http://arxiv.org/abs/2007.06823v3
- Date: Mon, 3 Jan 2022 08:37:30 GMT
- Title: Hands-on Bayesian Neural Networks -- a Tutorial for Deep Learning Users
- Authors: Laurent Valentin Jospin and Wray Buntine and Farid Boussaid and Hamid Laga and Mohammed Bennamoun
- Abstract summary: Bayesian statistics offer a formalism to understand and quantify the uncertainty associated with deep neural network predictions.
This tutorial provides an overview of the relevant literature and a complete toolset to design, implement, train, use and evaluate Bayesian Neural Networks.
- Score: 27.764388500937983
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern deep learning methods constitute incredibly powerful tools to tackle a
myriad of challenging problems. However, since deep learning methods operate as
black boxes, the uncertainty associated with their predictions is often
challenging to quantify. Bayesian statistics offer a formalism to understand
and quantify the uncertainty associated with deep neural network predictions.
This tutorial provides an overview of the relevant literature and a complete
toolset to design, implement, train, use and evaluate Bayesian Neural Networks,
i.e. Stochastic Artificial Neural Networks trained using Bayesian methods.
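For intuition, the following is a minimal sketch of such a stochastic network: a linear layer with a mean-field Gaussian variational posterior over its weights, whose predictions are averaged over weight samples. The class name `BayesianLinear` and all hyperparameters are illustrative, not code from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(0)

class BayesianLinear:
    """Illustrative linear layer with a mean-field Gaussian posterior over weights."""
    def __init__(self, n_in, n_out):
        self.mu = rng.normal(0.0, 0.1, (n_in, n_out))  # posterior means
        self.rho = np.full((n_in, n_out), -3.0)        # sigma = softplus(rho) > 0

    def forward(self, x):
        sigma = np.log1p(np.exp(self.rho))             # softplus
        eps = rng.normal(size=self.mu.shape)           # reparameterization trick
        w = self.mu + sigma * eps                      # fresh weight sample per call
        return x @ w

layer = BayesianLinear(4, 1)
x = rng.normal(size=(8, 4))

# The predictive distribution is approximated by averaging stochastic forward
# passes; the spread across samples quantifies the epistemic uncertainty.
samples = np.stack([layer.forward(x) for _ in range(100)])
mean, std = samples.mean(axis=0), samples.std(axis=0)
```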
Related papers
- Generalized Uncertainty of Deep Neural Networks: Taxonomy and Applications [1.9671123873378717]
We show that the uncertainty of deep neural networks is not only important for interpretability and transparency, but also crucial in further advancing their performance.
We will generalize the definition of the uncertainty of deep neural networks to any number or vector that is associated with an input or an input-label pair, and catalog existing methods for "mining" such uncertainty from a deep model.
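Under this generalized definition, familiar scores such as max-softmax confidence, the margin between the top two classes, and predictive entropy are all instances of per-input uncertainty. A minimal sketch with hypothetical logits:

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])        # hypothetical network outputs
p = softmax(logits)

confidence = p.max()                      # per-input scalar: top-class probability
margin = np.sort(p)[-1] - np.sort(p)[-2]  # gap between the top two classes
entropy = -np.sum(p * np.log(p))          # predictive entropy
```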
arXiv Detail & Related papers (2023-02-02T22:02:33Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs)
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
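QA-IBP builds on plain interval bound propagation, which pushes elementwise input bounds through each layer; the sketch below shows only that backbone (an affine layer followed by ReLU) and omits the quantization-aware part. All numbers are illustrative.

```python
import numpy as np

def ibp_affine(l, u, W, b):
    """Propagate elementwise bounds [l, u] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    lower = W_pos @ l + W_neg @ u + b  # worst case: l where W >= 0, u where W < 0
    upper = W_pos @ u + W_neg @ l + b
    return lower, upper

def ibp_relu(l, u):
    return np.maximum(l, 0), np.maximum(u, 0)  # ReLU is monotone

# Bounds valid for every input with ||x - x0||_inf <= eps (illustrative values).
x0, eps = np.array([0.5, -0.2]), 0.1
W, b = np.array([[1.0, -2.0], [0.5, 0.3]]), np.zeros(2)
l, u = ibp_relu(*ibp_affine(x0 - eps, x0 + eps, W, b))
```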
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Bayesian Learning for Neural Networks: an algorithmic survey [95.42181254494287]
This self-contained survey introduces readers to the principles and algorithms of Bayesian Learning for Neural Networks.
It approaches the topic from an accessible, practical-algorithmic perspective.
arXiv Detail & Related papers (2022-11-21T21:36:58Z)
- Uncertainty Quantification and Resource-Demanding Computer Vision Applications of Deep Learning [5.130440339897478]
Bringing deep neural networks (DNNs) into safety-critical applications requires a thorough treatment of the model's uncertainties.
In this article, we survey methods that we developed to teach DNNs to be uncertain when they encounter new object classes.
We also present training methods to learn from only a few labels with help of uncertainty quantification.
arXiv Detail & Related papers (2022-05-30T08:31:03Z)
- An Overview of Uncertainty Quantification Methods for Infinite Neural Networks [0.0]
We review methods for quantifying uncertainty in infinite-width neural networks.
We make use of several equivalence results along the way to obtain exact closed-form solutions for predictive uncertainty.
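One such equivalence is the correspondence between infinite-width networks and Gaussian processes, for which the posterior predictive is available in closed form. A minimal sketch, with a generic RBF kernel standing in for the network's limiting kernel and toy data:

```python
import numpy as np

def rbf(A, B, ls=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ls ** 2)

X = np.array([[0.0], [1.0], [2.0]])  # toy training inputs
y = np.sin(X[:, 0])                  # toy training targets
Xs = np.array([[1.5]])               # test input
noise = 1e-2

K = rbf(X, X) + noise * np.eye(len(X))
Ks, Kss = rbf(Xs, X), rbf(Xs, Xs)

mean = Ks @ np.linalg.solve(K, y)          # closed-form predictive mean
cov = Kss - Ks @ np.linalg.solve(K, Ks.T)  # closed-form predictive covariance
std = np.sqrt(np.diag(cov))                # exact predictive uncertainty
```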
arXiv Detail & Related papers (2022-01-13T00:03:22Z)
- Provable Regret Bounds for Deep Online Learning and Control [77.77295247296041]
We show that any loss function can be adapted to optimize the parameters of a neural network such that it competes with the best net in hindsight.
As an application of these results in the online setting, we obtain provable bounds for online control.
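For context, regret compares the learner's cumulative loss against the best fixed predictor chosen in hindsight. A minimal online-gradient-descent sketch on a linear predictor (the paper's contribution is extending guarantees of this style to deep networks; data and step size here are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
w, total_loss = np.zeros(3), 0.0

for t in range(1, 101):
    x = rng.normal(size=3)         # round t: the environment reveals an example
    y = np.sign(x[0])              # hypothetical target
    pred = w @ x
    total_loss += (pred - y) ** 2  # learner pays the loss of round t
    grad = 2 * (pred - y) * x
    w -= 0.1 / np.sqrt(t) * grad   # OGD step with decaying rate

# Regret(T) = total_loss minus the cumulative loss of the best fixed w in hindsight.
```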
arXiv Detail & Related papers (2021-10-15T02:13:48Z)
- Building Compact and Robust Deep Neural Networks with Toeplitz Matrices [93.05076144491146]
This thesis focuses on the problem of training neural networks which are compact, easy to train, reliable and robust to adversarial examples.
We leverage the properties of structured matrices from the Toeplitz family to build compact and secure neural networks.
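The compactness comes from the structure itself: an n x n Toeplitz matrix is fully determined by its first row and column, i.e. 2n - 1 parameters instead of n^2. A minimal sketch of such a weight matrix (illustrative, not the thesis's exact construction):

```python
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
n = 4
col = rng.normal(size=n)  # first column: n parameters
row = rng.normal(size=n)  # first row: n - 1 free parameters
row[0] = col[0]           # the corner entry is shared

W = toeplitz(col, row)    # n x n weight matrix from only 2n - 1 numbers
x = rng.normal(size=n)
y = W @ x                 # forward pass; Toeplitz matvecs also admit FFT speed-ups
```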
arXiv Detail & Related papers (2021-09-02T13:58:12Z)
- A neural anisotropic view of underspecification in deep learning [60.119023683371736]
We show that the way neural networks handle the underspecification of problems is highly dependent on the data representation.
Our results highlight that understanding the architectural inductive bias in deep learning is fundamental to address the fairness, robustness, and generalization of these systems.
arXiv Detail & Related papers (2021-04-29T14:31:09Z)
- An SMT-Based Approach for Verifying Binarized Neural Networks [1.4394939014120451]
We propose an SMT-based technique for verifying Binarized Neural Networks.
One novelty of our technique is that it allows the verification of neural networks that include both binarized and non-binarized components.
We implement our technique as an extension to the Marabou framework, and use it to evaluate the approach on popular binarized neural network architectures.
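To make the idea of an SMT encoding concrete, here is a minimal sketch of one binarized neuron (inputs and weights in {-1, +1}, sign activation) written as constraints for the Z3 solver rather than Marabou; it assumes the `z3-solver` package, and the weights are made up.

```python
from z3 import Bools, If, Solver, Sum, sat

x = Bools("x0 x1 x2")    # binarized inputs: True -> +1, False -> -1
w, bias = [1, -1, 1], 0  # illustrative fixed binarized weights

pre = Sum([wi * If(xi, 1, -1) for wi, xi in zip(w, x)]) + bias
fires = pre >= 0         # sign activation encoded as a boolean constraint

s = Solver()
s.add(fires)             # query: is there an input on which the neuron fires?
if s.check() == sat:
    print(s.model())     # one satisfying input assignment
```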
arXiv Detail & Related papers (2020-11-05T16:21:26Z)
- Bayesian Neural Networks [0.0]
We show how errors in neural network predictions can be obtained in principle, and present the two favoured methods for characterising these errors.
We will also describe how both of these methods have substantial pitfalls when put into practice.
arXiv Detail & Related papers (2020-06-02T09:43:00Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
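The discontinuity stems from the sign function used for binarization; training commonly works around it with a straight-through estimator that passes gradients through sign as if it were a clipped identity. A minimal numpy sketch with illustrative values:

```python
import numpy as np

def binarize(w):
    return np.where(w >= 0, 1.0, -1.0)  # severe information loss: 1 bit per weight

def ste_grad(w, upstream):
    """Straight-through estimator: treat d sign(w)/dw as 1 on |w| <= 1, else 0."""
    return upstream * (np.abs(w) <= 1.0)

w = np.array([0.3, -1.7, 0.05])              # real-valued latent weights
wb = binarize(w)                             # forward pass uses binary weights
g = ste_grad(w, np.array([0.1, 0.2, -0.3]))  # backward pass updates the real weights
```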
arXiv Detail & Related papers (2020-03-31T16:47:20Z)