Integer-arithmetic-only Certified Robustness for Quantized Neural Networks
- URL: http://arxiv.org/abs/2108.09413v1
- Date: Sat, 21 Aug 2021 01:15:19 GMT
- Title: Integer-arithmetic-only Certified Robustness for Quantized Neural Networks
- Authors: Haowen Lin, Jian Lou, Li Xiong and Cyrus Shahabi
- Abstract summary: A line of work on tackling adversarial examples is certified robustness via randomized smoothing.
Such a mechanism usually uses floating-point arithmetic for calculations in inference.
We show our approach can obtain comparable accuracy and a 4x~5x speedup over floating-point arithmetic certified robust methods.
- Score: 14.737638416823772
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial data examples have drawn significant attention from the machine
learning and security communities. A line of work on tackling adversarial
examples is certified robustness via randomized smoothing that can provide a
theoretical robustness guarantee. However, such a mechanism usually uses
floating-point arithmetic for calculations in inference and requires large
memory footprints and daunting computational costs. These defensive models
cannot run efficiently on edge devices nor be deployed on integer-only logical
units such as Turing Tensor Cores or integer-only ARM processors. To overcome
these challenges, we propose an integer randomized smoothing approach with
quantization to convert any classifier into a new smoothed classifier, which
uses integer-only arithmetic for certified robustness against adversarial
perturbations. We prove a tight robustness guarantee under L2-norm for the
proposed approach. We show our approach can obtain a comparable accuracy and
4x~5x speedup over floating-point arithmetic certified robust methods on
general-purpose CPUs and mobile devices on two distinct datasets (CIFAR-10 and
Caltech-101).
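As context for the abstract above, the Monte-Carlo certification step of standard randomized smoothing (Cohen et al. style) can be sketched as follows. This is a minimal floating-point illustration with made-up names and constants, not the paper's integer-only method; the paper's contribution is precisely to replace these float operations with integer-only arithmetic.

```python
# Minimal sketch of randomized-smoothing certification in floating
# point (Cohen et al. style). `classify` is a hypothetical base
# classifier mapping a feature list to a label. The paper replaces
# these float computations with integer-only arithmetic.
import random
from collections import Counter
from statistics import NormalDist

def certify(classify, x, sigma=0.25, n=1000, seed=0):
    """Monte-Carlo estimate of the smoothed prediction and its
    L2 certified radius R = sigma * Phi^{-1}(p_A)."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        counts[classify(noisy)] += 1
    top, votes = counts.most_common(1)[0]
    # A sound certificate would lower-bound p_A with a confidence
    # interval (e.g. Clopper-Pearson); the point estimate is used
    # here only to keep the sketch short. Clamp below 1.0 so that
    # inv_cdf stays finite.
    p_a = min(votes / n, 1.0 - 1e-6)
    if p_a <= 0.5:
        return None, 0.0  # abstain
    radius = sigma * NormalDist().inv_cdf(p_a)
    return top, radius
```

The returned radius certifies that no L2 perturbation smaller than it can flip the smoothed prediction, which is the guarantee the paper proves for its integer variant.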
Related papers
- Few-Shot Testing: Estimating Uncertainty of Memristive Deep Neural Networks Using One Bayesian Test Vector [0.0]
We propose a test vector generation framework that can estimate the model uncertainty of NNs implemented on memristor-based CIM hardware.
Our method is evaluated on different model dimensions, tasks, fault rates, and variation noise to show that it can consistently achieve 100% coverage with only 0.024 MB of memory overhead.
arXiv Detail & Related papers (2024-05-29T08:53:16Z)
- Quantum Sparse Coding [5.130440339897477]
We develop a quantum-inspired algorithm for sparse coding.
The emergence of quantum computers and Ising machines can potentially lead to more accurate estimations.
We conduct numerical experiments with simulated data on LightSolver's quantum-inspired digital platform.
arXiv Detail & Related papers (2022-09-08T13:00:30Z)
- Getting a-Round Guarantees: Floating-Point Attacks on Certified Robustness [19.380453459873298]
Adversarial examples pose a security risk as they can alter decisions of a machine learning classifier through slight input perturbations.
We show that these guarantees can be invalidated due to limitations of floating-point representation that cause rounding errors.
We show that the attack can be carried out against linear classifiers that have exact certifiable guarantees and against neural networks that have conservative certifications.
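The rounding errors this paper exploits are easy to reproduce in miniature. The toy example below (not the paper's actual attack; weights and bias are invented for illustration) shows that a point a real-valued analysis would place exactly on a linear decision boundary lands strictly on one side once the arithmetic is done in float64.

```python
# Toy illustration of floating-point rounding near a linear decision
# boundary (not the paper's attack). With real arithmetic the score
# w.x + b below is exactly 0, but 0.1 + 0.2 is not representable in
# float64, so the computed score is a tiny nonzero residue -- enough,
# in principle, to contradict an "exact" certificate.
w = [0.1, 0.2]   # hypothetical linear-classifier weights
b = -0.3         # bias chosen so x = [1, 1] lies "on" the boundary
x = [1.0, 1.0]
score = w[0] * x[0] + w[1] * x[1] + b
print(score)     # a tiny positive number, not 0.0
```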
arXiv Detail & Related papers (2022-05-20T13:07:36Z)
- REx: Data-Free Residual Quantization Error Expansion [32.87131159997359]
Deep neural networks (DNNs) are ubiquitous in computer vision and natural language processing, but suffer from high inference cost.
With the growing concerns on privacy rights, we focus our efforts on data-free methods.
We propose REx, a quantization method that leverages residual error expansion, along with group sparsity and an ensemble approximation for better parallelization.
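The core residual-expansion idea can be sketched in a few lines: quantize the weights, then quantize the quantization residual, and approximate the original tensor by the sum of the terms. This is only the general principle, with a made-up uniform quantizer; REx's group sparsity and ensemble approximation are not shown.

```python
# Hedged sketch of residual quantization error expansion: approximate
# a float weight vector by a sum of successively quantized residuals,
# so each extra term shrinks the reconstruction error. The uniform
# symmetric quantizer below is illustrative, not REx's exact scheme.
def quantize(w, bits=3):
    """Uniform symmetric quantizer returning dequantized values."""
    levels = 2 ** (bits - 1) - 1
    scale = max(abs(v) for v in w) / levels or 1.0
    return [round(v / scale) * scale for v in w]

def residual_expansion(w, terms=3, bits=3):
    """Return `terms` quantized tensors whose sum approximates w."""
    expansion, residual = [], list(w)
    for _ in range(terms):
        q = quantize(residual, bits)
        expansion.append(q)
        residual = [r - qi for r, qi in zip(residual, q)]
    return expansion

w = [0.73, -0.41, 0.05, 0.99]
exp = residual_expansion(w)
approx = [sum(t) for t in zip(*exp)]
err = max(abs(a - b) for a, b in zip(approx, w))
```

Each branch of the expansion is itself a low-bit tensor, so the terms can be evaluated in parallel and summed, which is where the parallelization claim comes from.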
arXiv Detail & Related papers (2022-03-28T11:04:45Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme using -1, +1 to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
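The encoding behind this decomposition rests on a simple identity: any odd integer in [-(2^m - 1), 2^m - 1] can be written as a sum of signed powers of two, sum_i 2^i * b_i with each b_i in {-1, +1}, so an m-bit quantized layer splits into m binary branches recombined with power-of-two scales. The greedy routine below is a sketch of that identity only; the paper's exact branch construction may differ.

```python
# Hedged sketch of the {-1, +1} encoding identity: decompose an odd
# integer v with |v| <= 2**m - 1 into m signs b_i such that
# v = sum_i 2**i * b_i, b_i in {-1, +1}. Greedy from the most
# significant position; the residual stays odd and in range.
def encode_pm1(v, m):
    """Return signs bits[0..m-1] with v = sum(bits[i] * 2**i)."""
    bits = []
    for i in reversed(range(m)):
        b = 1 if v > 0 else -1
        bits.append(b)
        v -= b * (2 ** i)
    bits.reverse()  # bits[i] now pairs with weight 2**i
    return bits
```

Since each branch carries only {-1, +1} weights, its matrix products reduce to additions and sign flips, which is what enables the claimed acceleration.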
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- Improved, Deterministic Smoothing for L1 Certified Robustness [119.86676998327864]
We propose a non-additive and deterministic smoothing method, Deterministic Smoothing with Splitting Noise (DSSN).
In contrast to uniform additive smoothing, the SSN certification does not require the random noise components used to be independent.
This is the first work to provide deterministic "randomized smoothing" for a norm-based adversarial threat model.
arXiv Detail & Related papers (2021-03-17T21:49:53Z)
- I-BERT: Integer-only BERT Quantization [78.43819756382103]
We propose I-BERT, a novel quantization scheme for Transformer based models.
I-BERT performs an end-to-end integer-only BERT inference without any floating point calculation.
We show that for both cases, I-BERT achieves accuracy similar to (and slightly higher than) the full-precision baseline.
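A building block of integer-only inference is folding the floating-point requantization scale into an integer multiplier plus a bit shift, so no float ops survive at inference time. The sketch below shows that generic trick (in the style of Jacob et al.'s quantization scheme); I-BERT's own contribution, integer polynomial approximations of GELU, softmax and LayerNorm, is not shown here.

```python
# Hedged sketch of integer-only requantization: a float scale
# s in (0, 1) is represented as an int32 fixed-point multiplier and
# a right shift, so y = round(acc * s) needs only integer ops.
# This is the generic trick from integer-only inference schemes,
# not I-BERT's polynomial approximations.
def quantize_multiplier(s):
    """Represent float scale s as (int multiplier, right shift)."""
    shift = 0
    while s < 0.5:        # normalize s into [0.5, 1)
        s *= 2.0
        shift += 1
    m = round(s * (1 << 31))
    return m, shift + 31

def rescale(acc, multiplier, shift):
    """Integer-only y = round(acc * s) via the fixed-point multiplier."""
    prod = acc * multiplier
    rounding = 1 << (shift - 1)   # round-to-nearest before shifting
    return (prod + rounding) >> shift
```

For example, with s = 0.1 an int32 accumulator of 1000 rescales to 100 using only a multiply, an add, and a shift.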
arXiv Detail & Related papers (2021-01-05T02:42:58Z)
- AQD: Towards Accurate Fully-Quantized Object Detection [94.06347866374927]
We propose an Accurate Quantized object Detection solution, termed AQD, to get rid of floating-point computation.
Our AQD achieves comparable or even better performance compared with the full-precision counterpart under extremely low-bit schemes.
arXiv Detail & Related papers (2020-07-14T09:07:29Z)
- Hidden Cost of Randomized Smoothing [72.93630656906599]
In this paper, we point out the side effects of current randomized smoothing.
Specifically, we articulate and prove two major points: 1) the decision boundaries of smoothed classifiers will shrink, resulting in disparity in class-wise accuracy; 2) applying noise augmentation in the training process does not necessarily resolve the shrinking issue due to the inconsistent learning objectives.
arXiv Detail & Related papers (2020-03-02T23:37:42Z)
- Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.