On the Effects of Quantisation on Model Uncertainty in Bayesian Neural
  Networks
 - URL: http://arxiv.org/abs/2102.11062v1
 - Date: Mon, 22 Feb 2021 14:36:29 GMT
 - Title: On the Effects of Quantisation on Model Uncertainty in Bayesian Neural
  Networks
 - Authors: Martin Ferianc, Partha Maji, Matthew Mattina and Miguel Rodrigues
 - Abstract summary: Being able to quantify uncertainty while making decisions is essential for understanding when the model is over-/under-confident.
BNNs have not been as widely used in industrial practice, mainly because of their increased memory and compute costs.
We study three types of quantised BNNs, evaluate them under a wide range of different settings, and empirically demonstrate that a uniform quantisation scheme applied to BNNs does not substantially decrease their quality of uncertainty estimation.
 - Score: 8.234236473681472
 - License: http://creativecommons.org/licenses/by-nc-sa/4.0/
 - Abstract:   Bayesian neural networks (BNNs) are making significant progress in many
research areas where decision making needs to be accompanied by uncertainty
estimation. Being able to quantify uncertainty while making decisions is
essential for understanding when the model is over-/under-confident, and hence
BNNs are attracting interest in safety-critical applications, such as
autonomous driving, healthcare and robotics. Nevertheless, BNNs have not been
as widely used in industrial practice, mainly because of their increased memory
and compute costs. In this work, we investigate quantisation of BNNs by
compressing 32-bit floating-point weights and activations to their integer
counterparts, a technique that has already been successful in reducing the
compute demand in standard pointwise neural networks. We study three types of
quantised BNNs, evaluate them under a wide range of different settings, and
empirically demonstrate that a uniform quantisation scheme applied to BNNs
does not substantially decrease their quality of uncertainty estimation.
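The uniform quantisation scheme the abstract refers to is straightforward to illustrate. The sketch below is our own construction, not the authors' code: it fake-quantises Monte Carlo weight samples of a toy linear layer to an int8 grid and compares the predictive entropy of the ensemble before and after quantisation; `uniform_quantise`, the layer sizes, and the random ensemble are all illustrative assumptions.

```python
import numpy as np

def uniform_quantise(x, num_bits=8):
    """Fake-quantise a float array: round to a signed integer grid, then
    map back to floats (one plausible reading of the scheme studied)."""
    qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
    scale = np.max(np.abs(x)) / qmax          # per-tensor scale (assumed)
    if scale == 0.0:
        return x
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predictive_entropy(probs):
    """Entropy of the mean predictive distribution over MC samples."""
    mean = probs.mean(axis=0)
    return -np.sum(mean * np.log(mean + 1e-12))

# Toy Monte Carlo ensemble: S sampled weight matrices of one linear layer.
rng = np.random.default_rng(0)
S, D, C = 32, 16, 3                           # MC samples, inputs, classes
W = rng.normal(0.0, 0.1, size=(S, C, D))      # sampled BNN weights
x = rng.normal(size=D)

probs_fp = np.stack([softmax(Ws @ x) for Ws in W])
probs_q = np.stack([softmax(uniform_quantise(Ws) @ x) for Ws in W])

print("predictive entropy, fp32:", predictive_entropy(probs_fp))
print("predictive entropy, int8:", predictive_entropy(probs_q))
```

If the paper's empirical claim holds, the two entropies should stay close; in this toy setting the comparison only demonstrates the mechanics of the check, not the paper's result.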
Related papers
        - Efficient Certified Reasoning for Binarized Neural Networks [25.20597060311209]
Binarized Neural Networks (BNNs) are a type of neural network where each neuron is constrained to a Boolean value. Existing methods for BNN analysis suffer from limited scalability or susceptibility to soundness errors. We present a scalable and trustworthy approach for both qualitative and quantitative verification of BNNs.
arXiv  Detail & Related papers  (2025-06-25T09:27:02Z) - NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs across a wide range of operation counts (OPs), from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv  Detail & Related papers  (2024-08-28T02:17:58Z) - Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv  Detail & Related papers  (2024-03-11T21:54:52Z) - Bayesian Neural Networks with Domain Knowledge Priors [52.80929437592308]
We propose a framework for integrating general forms of domain knowledge into a BNN prior.
We show that BNNs using our proposed domain knowledge priors outperform those with standard priors.
arXiv  Detail & Related papers  (2024-02-20T22:34:53Z) - Make Me a BNN: A Simple Strategy for Estimating Bayesian Uncertainty
  from Pre-trained Models [40.38541033389344]
Deep Neural Networks (DNNs) are powerful tools for various computer vision tasks, yet they often struggle with reliable uncertainty quantification.
We introduce the Adaptable Bayesian Neural Network (ABNN), a simple and scalable strategy to seamlessly transform DNNs into BNNs.
We conduct extensive experiments across multiple datasets for image classification and semantic segmentation tasks, and our results demonstrate that ABNN achieves state-of-the-art performance.
arXiv  Detail & Related papers  (2023-12-23T16:39:24Z) - An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks [13.271286153792058]
Quantized neural networks (QNNs) have been developed, with binarized neural networks (BNNs) restricted to binary values as a special case.
This paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties.
arXiv  Detail & Related papers  (2023-07-29T06:27:28Z) - Efficient Uncertainty Estimation in Spiking Neural Networks via
  MC-dropout [3.692069129522824]
Spiking neural networks (SNNs) have gained attention as models of sparse and event-driven communication of biological neurons.
We propose an efficient Monte Carlo (MC)-dropout based approach for uncertainty estimation in SNNs; a generic MC-dropout sketch appears after this list.
arXiv  Detail & Related papers  (2023-04-20T10:05:57Z) - Posterior Regularized Bayesian Neural Network Incorporating Soft and
  Hard Knowledge Constraints [12.050265348673078]
We propose a novel Posterior-Regularized Bayesian Neural Network (PR-BNN) model by incorporating different types of knowledge constraints.
Experiments in simulation and two case studies, on aviation landing prediction and solar energy output prediction, demonstrate the effect of the knowledge constraints and the performance improvement of the proposed model.
arXiv  Detail & Related papers  (2022-10-16T18:58:50Z) - Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach can greatly reduce the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv  Detail & Related papers  (2021-12-12T17:13:14Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) to the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv  Detail & Related papers  (2021-11-16T16:14:44Z) - BDD4BNN: A BDD-based Quantitative Analysis Framework for Binarized
  Neural Networks [7.844146033635129]
We study verification problems for Binarized Neural Networks (BNNs), the 1-bit quantization of general real-numbered neural networks.
Our approach is to encode BNNs into Binary Decision Diagrams (BDDs), which is done by exploiting the internal structure of the BNNs.
Based on the encoding, we develop a quantitative verification framework for BNNs where precise and comprehensive analysis of BNNs can be performed.
arXiv  Detail & Related papers  (2021-03-12T12:02:41Z) - S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural
  Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks via the final prediction distribution; a generic sketch of such a distillation loss appears after this list.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv  Detail & Related papers  (2021-02-17T18:59:28Z) - Frequentist Uncertainty in Recurrent Neural Networks via Blockwise
  Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv  Detail & Related papers  (2020-06-20T22:45:32Z) 
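The spiking-network entry above estimates uncertainty with Monte Carlo dropout. The sketch below shows the generic MC-dropout recipe, not that paper's SNN-specific method; the toy classifier and `mc_dropout_predict` are our own illustrative assumptions, written against PyTorch.

```python
import torch
import torch.nn as nn

# Hypothetical toy classifier; the Dropout layer is what MC-dropout samples.
model = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 3),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Keep dropout active at test time, average the sampled softmax
    outputs, and use their spread as an uncertainty estimate."""
    model.train()  # enables dropout; in practice freeze any batch-norm layers
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x), dim=-1) for _ in range(n_samples)
        ])
    return probs.mean(dim=0), probs.std(dim=0)

x = torch.randn(1, 16)
mean, std = mc_dropout_predict(model, x)
print("predictive mean:", mean)
print("per-class std :", std)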
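The S2-BNN entry above describes guiding a binary student network with a real-valued teacher through the final prediction distribution. A generic temperature-scaled distillation loss of that flavour is sketched below under our own assumptions; it is a stand-in for the idea, not S2-BNN's exact objective.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=4.0):
    """KL divergence between temperature-softened teacher and student
    predictive distributions; a generic guided-distribution loss,
    not the paper's exact formulation."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    # scale by t^2 so gradient magnitude stays comparable across temperatures
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * t * t

# Toy usage with random logits standing in for the two networks' outputs.
student = torch.randn(8, 10)
teacher = torch.randn(8, 10)
print(distillation_loss(student, teacher))
```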
This list is automatically generated from the titles and abstracts of the papers on this site.