DeepBern-Nets: Taming the Complexity of Certifying Neural Networks using
Bernstein Polynomial Activations and Precise Bound Propagation
- URL: http://arxiv.org/abs/2305.13508v1
- Date: Mon, 22 May 2023 21:52:57 GMT
- Title: DeepBern-Nets: Taming the Complexity of Certifying Neural Networks using
Bernstein Polynomial Activations and Precise Bound Propagation
- Authors: Haitham Khedr and Yasser Shoukry
- Abstract summary: We introduce DeepBern-Nets, a class of NNs with activation functions based on Bernstein polynomials instead of the ReLU activation.
We design a novel algorithm, called Bern-IBP, to efficiently compute tight bounds on DeepBern-Nets outputs.
This work establishes Bernstein activation as a promising alternative for improving NN certification tasks across various applications.
- Score: 1.4620086904601473
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Formal certification of Neural Networks (NNs) is crucial for ensuring their
safety, fairness, and robustness. Unfortunately, on the one hand, sound and
complete certification algorithms for ReLU-based NNs do not scale to large-scale
NNs. On the other hand, incomplete certification algorithms are easier to
compute, but they result in loose bounds that deteriorate with the depth of the
NN, which diminishes their effectiveness. In this paper, we ask the following
question: can we replace the ReLU activation function with one that opens the
door to incomplete certification algorithms that are easy to compute but can
produce tight bounds on the NN's outputs? We introduce DeepBern-Nets, a class
of NNs with activation functions based on Bernstein polynomials instead of the
commonly used ReLU activation. Bernstein polynomials are smooth and
differentiable functions with desirable properties such as the so-called range
enclosure and subdivision properties. We design a novel algorithm, called
Bern-IBP, to efficiently compute tight bounds on DeepBern-Nets outputs. Our
approach leverages the properties of Bernstein polynomials to improve the
tractability of neural network certification tasks while maintaining the
accuracy of the trained networks. We conduct comprehensive experiments in
adversarial robustness and reachability analysis settings to assess the
effectiveness of the proposed Bernstein polynomial activation in enhancing the
certification process. Our proposed framework achieves high certified accuracy
for adversarially-trained NNs, which is often a challenging task for certifiers
of ReLU-based NNs. Moreover, using Bern-IBP bounds for certified training
results in NNs with state-of-the-art certified accuracy compared to ReLU
networks. This work establishes Bernstein polynomial activation as a promising
alternative for improving NN certification tasks across various applications.
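To make the range-enclosure idea concrete, here is a minimal sketch of IBP-style bound propagation through an affine layer followed by a Bernstein-polynomial activation. It is not the authors' Bern-IBP implementation; the function names, the layer layout, and the assumption that each activation is a scalar Bernstein polynomial with known coefficients over a fixed interval [l, u] are illustrative only.

```python
# Hedged sketch: affine IBP plus Bernstein range enclosure (not the paper's code).
import numpy as np
from math import comb

def affine_ibp(W, b, lb, ub):
    """Propagate the box [lb, ub] through y = W x + b (standard IBP)."""
    center = (ub + lb) / 2.0
    radius = (ub - lb) / 2.0
    y_center = W @ center + b
    y_radius = np.abs(W) @ radius
    return y_center - y_radius, y_center + y_radius

def bernstein_eval(coeffs, x, l, u):
    """Evaluate a Bernstein polynomial with coefficients c_0..c_n on [l, u]."""
    n = len(coeffs) - 1
    t = (x - l) / (u - l)  # map x into [0, 1]
    basis = [comb(n, k) * t**k * (1 - t) ** (n - k) for k in range(n + 1)]
    return float(np.dot(coeffs, basis))

def bernstein_enclosure(coeffs):
    """Range enclosure: on its domain the polynomial lies between the
    smallest and largest Bernstein coefficient."""
    return float(np.min(coeffs)), float(np.max(coeffs))

# Toy usage with made-up numbers: one affine layer, then a degree-3
# Bernstein activation whose domain covers the pre-activation interval.
W = np.array([[0.5, -1.0], [1.2, 0.3]])
b = np.array([0.1, -0.2])
lb, ub = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
pre_lb, pre_ub = affine_ibp(W, b, lb, ub)
coeffs = [0.0, 0.2, 0.6, 1.0]  # learnable per neuron in DeepBern-Nets
out_lb, out_ub = bernstein_enclosure(coeffs)
val = bernstein_eval(coeffs, 0.3, pre_lb[0], pre_ub[0])
assert out_lb <= val <= out_ub  # enclosure holds for any point in the domain
```

The coefficient extrema give sound output bounds in a single pass, which keeps the bound computation cheap; the subdivision property mentioned in the abstract can further tighten the enclosure when the incoming interval covers only part of the activation's domain.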
Related papers
- Tight Verification of Probabilistic Robustness in Bayesian Neural
Networks [17.499817915644467]
We introduce two algorithms for computing tight guarantees on the probabilistic robustness of Bayesian Neural Networks (BNNs)
Our algorithms efficiently search the parameter space for safe weights using iterative expansion and the network's gradient.
In addition to proving that our algorithms compute tighter bounds than the state of the art (SoA), we also evaluate them against the SoA on standard benchmarks.
arXiv Detail & Related papers (2024-01-21T23:41:32Z)
- An Automata-Theoretic Approach to Synthesizing Binarized Neural Networks [13.271286153792058]
Quantized neural networks (QNNs) have been developed, with binarized neural networks (BNNs) restricted to binary values as a special case.
This paper presents an automata-theoretic approach to synthesizing BNNs that meet designated properties.
arXiv Detail & Related papers (2023-07-29T06:27:28Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs)
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Sound and Complete Verification of Polynomial Networks [55.9260539566555]
Polynomial Networks (PNs) have demonstrated promising performance on face and image recognition recently.
Existing verification algorithms on ReLU neural networks (NNs) based on branch and bound (BaB) techniques cannot be trivially applied to PN verification.
We devise a new bounding method, equipped with BaB for global convergence guarantees, called VPN.
arXiv Detail & Related papers (2022-09-15T11:50:43Z)
- Can pruning improve certified robustness of neural networks? [106.03070538582222]
We show that neural network pruning can improve empirical robustness of deep neural networks (NNs)
Our experiments show that by appropriately pruning an NN, its certified accuracy can be boosted up to 8.2% under standard training.
We additionally observe the existence of certified lottery tickets that can match both standard and certified robust accuracies of the original dense models.
arXiv Detail & Related papers (2022-06-15T05:48:51Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs)
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- A Mixed Integer Programming Approach for Verifying Properties of Binarized Neural Networks [44.44006029119672]
We propose a mixed integer programming formulation for BNN verification.
We demonstrate our approach by verifying properties of BNNs trained on the MNIST dataset and an aircraft collision avoidance controller.
arXiv Detail & Related papers (2022-03-11T01:11:29Z)
- Certification of Iterative Predictions in Bayesian Neural Networks [79.15007746660211]
We compute lower bounds for the probability that trajectories of the BNN model reach a given set of states while avoiding a set of unsafe states.
We use the lower bounds in the context of control and reinforcement learning to provide safety certification for given control policies.
arXiv Detail & Related papers (2021-05-21T05:23:57Z)
- Encoding the latent posterior of Bayesian Neural Networks for uncertainty quantification [10.727102755903616]
We aim for efficient deep BNNs amenable to complex computer vision architectures.
We achieve this by leveraging variational autoencoders (VAEs) to learn the interaction and the latent distribution of the parameters at each network layer.
Our approach, Latent-Posterior BNN (LP-BNN), is compatible with the recent BatchEnsemble method, leading to highly efficient (in terms of computation and memory during both training and testing) ensembles.
arXiv Detail & Related papers (2020-12-04T19:50:09Z)