Improving Robustness Against Adversarial Attacks with Deeply Quantized
Neural Networks
- URL: http://arxiv.org/abs/2304.12829v1
- Date: Tue, 25 Apr 2023 13:56:35 GMT
- Title: Improving Robustness Against Adversarial Attacks with Deeply Quantized
Neural Networks
- Authors: Ferheen Ayaz, Idris Zakariyya, José Cano, Sye Loong Keoh, Jeremy
Singer, Danilo Pau, Mounia Kharbouche-Harrari
- Abstract summary: A disadvantage of Deep Neural Networks (DNNs) is their vulnerability to adversarial attacks, as they can be fooled by adding slight perturbations to the inputs.
This paper reports the results of devising a tiny DNN model, robust to black-box and white-box adversarial attacks, trained with an automatic quantization-aware training framework.
- Score: 0.5849513679510833
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Reducing the memory footprint of Machine Learning (ML) models, particularly
Deep Neural Networks (DNNs), is essential to enable their deployment into
resource-constrained tiny devices. However, a disadvantage of DNN models is
their vulnerability to adversarial attacks, as they can be fooled by adding
slight perturbations to the inputs. Therefore, the challenge is how to create
accurate, robust, and tiny DNN models deployable on resource-constrained
embedded devices. This paper reports the results of devising a tiny DNN model,
robust to black-box and white-box adversarial attacks, trained with an
automatic quantization-aware training framework, i.e. QKeras, with the deep
quantization loss accounted for in the learning loop, thereby making the
designed DNNs more accurate for deployment on tiny devices. We investigated how
QKeras and an adversarial robustness technique, Jacobian Regularization (JR),
can provide a co-optimization strategy, exploiting the DNN topology and a
per-layer JR approach to produce robust yet tiny deeply quantized DNN models.
As a result, a new DNN model implementing this co-optimization strategy was
conceived, developed, and tested on three datasets containing both image and
audio inputs, and its performance was compared with existing benchmarks
against various white-box and black-box attacks. Experimental results
demonstrated that on
average our proposed DNN model resulted in 8.3% and 79.5% higher accuracy than
MLCommons/Tiny benchmarks in the presence of white-box and black-box attacks on
the CIFAR-10 image dataset and a subset of the Google Speech Commands audio
dataset respectively. It was also 6.5% more accurate for black-box attacks on
the SVHN image dataset.
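As a minimal sketch of this co-optimization, the snippet below trains a deeply quantized QKeras model with a Jacobian Regularization penalty added to the task loss. It is not the authors' released code: the 4-bit widths, model shape, and lambda_jr weight are illustrative assumptions, and the JR term uses the standard random-projection estimate of the input-output Jacobian norm, whereas the paper applies JR per layer.
```python
import tensorflow as tf
from qkeras import QConv2D, QDense, QActivation
from qkeras.quantizers import quantized_bits, quantized_relu

def build_quantized_model(num_classes=10):
    # Deeply quantized layers: 4-bit weights and activations (assumed widths).
    q = quantized_bits(4, 0, 1)
    return tf.keras.Sequential([
        tf.keras.Input(shape=(32, 32, 3)),
        QConv2D(16, 3, padding="same", kernel_quantizer=q, bias_quantizer=q),
        QActivation(quantized_relu(4)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        QDense(num_classes, kernel_quantizer=q, bias_quantizer=q),
    ])

def train_step(model, optimizer, x, y, lambda_jr=0.01):
    with tf.GradientTape() as outer_tape:
        with tf.GradientTape() as inner_tape:
            inner_tape.watch(x)
            logits = model(x, training=True)
            # Project the logits onto a random unit vector; the gradient of
            # the projection estimates ||J||_F^2 up to a constant factor.
            v = tf.random.normal(tf.shape(logits))
            v = v / (tf.norm(v, axis=-1, keepdims=True) + 1e-12)
            projection = tf.reduce_sum(v * logits)
        jac_v = inner_tape.gradient(projection, x)  # J^T v, same shape as x
        jr_penalty = tf.reduce_sum(tf.square(jac_v)) / tf.cast(
            tf.shape(x)[0], tf.float32)
        ce = tf.reduce_mean(tf.keras.losses.sparse_categorical_crossentropy(
            y, logits, from_logits=True))
        loss = ce + lambda_jr * jr_penalty  # quantization-aware CE + JR
    grads = outer_tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```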
Related papers
- VQUNet: Vector Quantization U-Net for Defending Adversarial Attacks by Regularizing Unwanted Noise [0.5755004576310334]
We introduce a novel noise-reduction procedure, Vector Quantization U-Net (VQUNet), to reduce adversarial noise and reconstruct data with high fidelity.
VQUNet features a discrete latent representation learning through a multi-scale hierarchical structure for both noise reduction and data reconstruction.
It outperforms other state-of-the-art noise-reduction-based defense methods under various adversarial attacks for both Fashion-MNIST and CIFAR10 datasets.
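As a rough illustration of the mechanism such defenses rely on, the sketch below implements a plain vector-quantization bottleneck (not VQUNet's multi-scale hierarchical architecture; codebook size and dimensions are assumed): snapping each latent vector to its nearest codebook entry discards small perturbations that do not change the nearest code.
```python
import numpy as np

def vector_quantize(latents, codebook):
    """latents: (N, D) continuous encoder outputs; codebook: (K, D) entries."""
    # Squared distance from every latent vector to every codebook entry.
    d = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = d.argmin(axis=1)        # discrete code assignment
    return codebook[idx], idx     # quantized latents and their code indices

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 16))  # K=64 codes of dimension 16 (assumed)
z = rng.normal(size=(8, 16))
z_q, codes = vector_quantize(z, codebook)
# A small adversarial perturbation usually maps to the same nearest code,
# so the quantized latents, and hence the reconstruction, are unchanged.
z_q_adv, _ = vector_quantize(z + 1e-3 * rng.normal(size=z.shape), codebook)
print(np.allclose(z_q, z_q_adv))  # True in the typical case
```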
arXiv Detail & Related papers (2024-06-05T10:10:03Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
The Adversarial Converging Time Score (ACTS) measures the time an adversarial attack takes to converge and uses it as a robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Robust and Lossless Fingerprinting of Deep Neural Networks via Pooled Membership Inference [17.881686153284267]
Deep neural networks (DNNs) have achieved great success in many application areas and brought profound changes to our society.
How to protect the intellectual property (IP) of DNNs against infringement is an important yet challenging problem.
This paper proposes a novel technique called pooled membership inference (PMI) to protect the IP of DNN models.
arXiv Detail & Related papers (2022-09-09T04:06:29Z)
- Weightless Neural Networks for Efficient Edge Inference [1.7882696915798877]
Weightless Neural Networks (WNNs) are a class of machine learning models that use table lookups to perform inference.
We propose a novel WNN architecture, BTHOWeN, with key algorithmic and architectural improvements over prior work.
BTHOWeN targets the large and growing edge computing sector by providing superior latency and energy efficiency.
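To make the table-lookup idea concrete, here is a minimal WiSARD-style discriminator, the classical WNN building block; BTHOWeN's actual improvements (e.g. Bloom filters and counting) are omitted, and all sizes are assumptions.
```python
import numpy as np

class Discriminator:
    def __init__(self, n_inputs, tuple_size, rng):
        self.order = rng.permutation(n_inputs)  # fixed random input mapping
        self.tuple_size = tuple_size
        self.tables = [set() for _ in range(n_inputs // tuple_size)]

    def _addresses(self, bits):
        bits = bits[self.order]
        for i in range(len(self.tables)):
            chunk = bits[i * self.tuple_size:(i + 1) * self.tuple_size]
            yield i, int("".join(map(str, chunk)), 2)  # RAM address

    def train(self, bits):
        for i, addr in self._addresses(bits):
            self.tables[i].add(addr)  # write a 1 into the lookup table

    def score(self, bits):
        # Inference is pure table lookup: count tables that recognize the input.
        return sum(addr in self.tables[i] for i, addr in self._addresses(bits))

# One discriminator per class; prediction is the highest-scoring class.
rng = np.random.default_rng(0)
disc = Discriminator(n_inputs=16, tuple_size=4, rng=rng)
pattern = rng.integers(0, 2, 16)
disc.train(pattern)
print(disc.score(pattern))  # 4: every lookup table recognizes the pattern
```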
arXiv Detail & Related papers (2022-03-03T01:46:05Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
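The prediction side of such a BNN can be summarized in a few lines: the posterior predictive is approximated by averaging over Monte Carlo weight samples. The stochastic_forward callable below is a hypothetical stand-in for a network whose weights are redrawn from the variational posterior on every call.
```python
import numpy as np

def mc_predict(stochastic_forward, x, n_samples=20):
    """Average per-class probabilities over Monte Carlo weight samples to
    approximate the Bayesian posterior predictive distribution."""
    return np.mean([stochastic_forward(x) for _ in range(n_samples)], axis=0)
```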
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- DeepSteal: Advanced Model Extractions Leveraging Efficient Weight Stealing in Memories [26.067920958354]
One of the major threats to the privacy of Deep Neural Networks (DNNs) is model extraction attacks.
Recent studies show that hardware-based side-channel attacks can reveal internal knowledge about DNN models (e.g., model architectures).
We propose DeepSteal, an advanced model extraction attack framework that effectively steals DNN weights with the aid of a memory side-channel attack.
arXiv Detail & Related papers (2021-11-08T16:55:45Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
We compare estimation accuracy and fidelity of the generated mixed models, statistical models with the roofline model, and a refined roofline model for evaluation.
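A toy sketch of the stacked-estimation idea follows: per-layer latency models fitted on micro-kernel benchmark measurements are composed into a network-level execution time estimate. The benchmark numbers and the linear per-layer model are assumptions for illustration, not ANNETTE's fitted models.
```python
import numpy as np

# Assumed micro-kernel benchmark measurements: (MAC count, milliseconds).
conv_bench = np.array([[1e6, 0.9], [4e6, 3.1], [8e6, 6.2]])
dense_bench = np.array([[1e5, 0.2], [5e5, 0.8], [1e6, 1.5]])

def fit_linear(bench):
    # Least-squares per-layer latency model: ms ~ a * MACs + b.
    a, b = np.polyfit(bench[:, 0], bench[:, 1], 1)
    return lambda macs: a * macs + b

layer_models = {"conv": fit_linear(conv_bench), "dense": fit_linear(dense_bench)}

def estimate_network_ms(layers):
    """layers: list of (layer_type, MAC count) pairs describing a network."""
    return sum(layer_models[kind](macs) for kind, macs in layers)

print(estimate_network_ms([("conv", 2e6), ("conv", 6e6), ("dense", 3e5)]))
```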
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks over the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
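The distribution-matching step the paradigm builds on can be sketched as a temperature-scaled KL distillation loss between the real-valued teacher and the 1-bit student; the temperature and loss form below are conventional distillation choices, not values taken from the paper.
```python
import tensorflow as tf

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened prediction distributions."""
    t = tf.nn.softmax(teacher_logits / temperature)
    log_s = tf.nn.log_softmax(student_logits / temperature)
    kl = tf.reduce_sum(t * (tf.math.log(t + 1e-12) - log_s), axis=-1)
    # Scale by T^2 so gradients keep a comparable magnitude across temperatures.
    return temperature ** 2 * tf.reduce_mean(kl)
```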
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
- Improving Query Efficiency of Black-box Adversarial Attack [75.71530208862319]
We propose a Neural Process based black-box adversarial attack (NP-Attack).
NP-Attack could greatly decrease the query counts under the black-box setting.
arXiv Detail & Related papers (2020-09-24T06:22:56Z)
- EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness against Adversarial Attacks [18.241639570479563]
Deep Neural Networks (DNNs) are vulnerable to adversarial attacks in which small input perturbations can produce catastrophic misclassifications.
We propose EMPIR, ensembles of quantized DNN models with different numerical precisions, as a new approach to increase robustness against adversarial attacks.
Our results indicate that EMPIR boosts the average adversarial accuracies by 42.6%, 15.2% and 10.5% for the DNN models trained on the MNIST, CIFAR-10 and ImageNet datasets respectively.
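The combination step can be sketched in a few lines: predictions from ensemble members quantized to different precisions are merged, so a perturbation tuned to one precision is less likely to fool them all. Plain probability averaging is used below as a stand-in for EMPIR's specific combination scheme, and the member models are hypothetical.
```python
import numpy as np

def ensemble_predict(models, x):
    """models: callables returning per-class probabilities of shape (N, C)."""
    probs = np.mean([m(x) for m in models], axis=0)  # merge the distributions
    return probs.argmax(axis=-1)

# Hypothetical members, e.g. full-precision, 4-bit, and 2-bit variants:
# labels = ensemble_predict([model_fp32, model_4bit, model_2bit], test_images)
```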
arXiv Detail & Related papers (2020-04-21T17:17:09Z)