BDD4BNN: A BDD-based Quantitative Analysis Framework for Binarized
Neural Networks
- URL: http://arxiv.org/abs/2103.07224v1
- Date: Fri, 12 Mar 2021 12:02:41 GMT
- Title: BDD4BNN: A BDD-based Quantitative Analysis Framework for Binarized
Neural Networks
- Authors: Yedi Zhang and Zhe Zhao and Guangke Chen and Fu Song and Taolue Chen
- Abstract summary: We study verification problems for Binarized Neural Networks (BNNs), the 1-bit quantization of general real-numbered neural networks.
Our approach is to encode BNNs into Binary Decision Diagrams (BDDs), which is done by exploiting the internal structure of the BNNs.
Based on the encoding, we develop a quantitative verification framework for BNNs where precise and comprehensive analysis of BNNs can be performed.
- Score: 7.844146033635129
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Verifying and explaining the behavior of neural networks is becoming
increasingly important, especially when they are deployed in safety-critical
applications. In this paper, we study verification problems for Binarized
Neural Networks (BNNs), the 1-bit quantization of general real-numbered neural
networks. Our approach is to encode BNNs into Binary Decision Diagrams (BDDs),
which is done by exploiting the internal structure of the BNNs. In particular,
we translate the input-output relation of blocks in BNNs to cardinality
constraints which are then encoded by BDDs. Based on the encoding, we develop a
quantitative verification framework for BNNs where precise and comprehensive
analysis of BNNs can be performed. We demonstrate the application of our
framework by providing quantitative robustness analysis and interpretability
for BNNs. We implement a prototype tool BDD4BNN and carry out extensive
experiments which confirm the effectiveness and efficiency of our approach.
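The core of the encoding is easy to illustrate. A BNN neuron with $\pm 1$ weights outputs $+1$ exactly when the number of inputs agreeing with its weights reaches a threshold (in practice derived from the block's linear and normalization parameters), i.e., a cardinality constraint, and threshold functions of this kind admit compact ordered BDDs on which model counting is exact. The following Python sketch is a minimal, hypothetical illustration of this idea for a single neuron, not the authors' implementation; the weights, threshold, and helper names are assumptions made for the example.

```python
from functools import lru_cache

# Illustrative single-neuron example (not BDD4BNN's actual code): a BNN
# neuron with weights w in {+1,-1}^n fires iff at least k of its n binary
# inputs agree with the corresponding weights -- a cardinality (threshold)
# constraint, which has a BDD with O(n*k) nodes.

def threshold_bdd(weights, k):
    """Build a reduced BDD for 'at least k inputs agree with the weights'.

    Nodes are tuples (i, lo, hi): test input x_i, take `hi` if x_i = 1,
    `lo` if x_i = 0. Terminals are the Python booleans True / False.
    """
    n = len(weights)

    @lru_cache(maxsize=None)
    def build(i, need):
        if need <= 0:
            return True                    # constraint already satisfied
        if n - i < need:
            return False                   # too few inputs left to reach k
        sat = build(i + 1, need - 1)       # branch where input i agrees
        unsat = build(i + 1, need)         # branch where input i disagrees
        # With weight +1, agreement means x_i = 1; with weight -1, x_i = 0.
        hi, lo = (sat, unsat) if weights[i] == 1 else (unsat, sat)
        return hi if hi == lo else (i, lo, hi)   # BDD reduction rule

    return build(0, k)

def count_models(node, i, n):
    """Count assignments to x_i..x_{n-1} satisfying the BDD rooted at `node`."""
    if node is True:
        return 2 ** (n - i)                # remaining inputs are unconstrained
    if node is False:
        return 0
    j, lo, hi = node
    free = 2 ** (j - i)                    # inputs skipped by reduction are free
    return free * (count_models(lo, j + 1, n) + count_models(hi, j + 1, n))

# Neuron with weights (+1, -1, +1, +1) that fires iff >= 3 inputs agree.
w, k = (1, -1, 1, 1), 3
root = threshold_bdd(w, k)
print(count_models(root, 0, len(w)))       # 5 = C(4,3) + C(4,4) activating inputs
```

BDD4BNN composes such per-block encodings across the whole network; the sketch only shows why translating blocks to cardinality constraints makes the BDD encoding, and exact counting over it, tractable.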
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z) - An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks [13.271286153792058]
Quantized neural networks (QNNs) have been developed, with binarized neural networks (BNNs), restricted to binary values, as a special case.
This paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties.
arXiv Detail & Related papers (2023-07-29T06:27:28Z) - Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
Existing BNN optimization methods neglect the intrinsic bilinear relationship between real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
arXiv Detail & Related papers (2022-09-04T06:45:33Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach greatly reduces the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z) - Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
arXiv Detail & Related papers (2021-11-16T16:14:44Z) - A comprehensive review of Binary Neural Network [2.918940961856197]
The Binary Neural Network (BNN) method is an extreme application of convolutional neural network (CNN) parameter quantization.
Recent developments in BNNs have led to many algorithms and solutions that help address the accuracy loss introduced by such extreme quantization.
arXiv Detail & Related papers (2021-10-11T22:44:15Z) - BCNN: Binary Complex Neural Network [16.82755328827758]
Binarized neural networks, or BNNs, show great promise in edge-side applications with resource limited hardware.
We introduce complex representation into BNNs and propose the Binary Complex Neural Network (BCNN).
BCNN improves BNN by strengthening its learning capability through complex representation and extending its applicability to complex-valued input data.
arXiv Detail & Related papers (2021-03-28T03:35:24Z) - On the Effects of Quantisation on Model Uncertainty in Bayesian Neural
Networks [8.234236473681472]
Being able to quantify uncertainty while making decisions is essential for understanding when the model is over-/under-confident.
BNNs have not been as widely used in industrial practice, mainly because of their increased memory and compute costs.
We study three types of quantised BNNs, evaluate them under a wide range of settings, and empirically demonstrate that a uniform quantisation scheme applied to BNNs does not substantially decrease their quality of uncertainty estimation.
arXiv Detail & Related papers (2021-02-22T14:36:29Z) - S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural
Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued models on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.5~15% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
arXiv Detail & Related papers (2021-02-17T18:59:28Z) - Understanding Learning Dynamics of Binary Neural Networks via
Information Bottleneck [11.17667928756077]
Binary Neural Networks (BNNs) take compactification to the extreme by constraining both weights and activations to two levels, $\{+1, -1\}$.
We analyze BNN training through the Information Bottleneck principle and observe that the training dynamics of BNNs differ considerably from those of Deep Neural Networks (DNNs).
Since BNNs have a less expressive capacity, they tend to find efficient hidden representations concurrently with label fitting.
arXiv Detail & Related papers (2020-06-13T00:39:25Z)