On the Adversarial Robustness of Quantized Neural Networks
- URL: http://arxiv.org/abs/2105.00227v1
- Date: Sat, 1 May 2021 11:46:35 GMT
- Title: On the Adversarial Robustness of Quantized Neural Networks
- Authors: Micah Gorsline, James Smith, Cory Merkel
- Abstract summary: It is unclear how model compression techniques may affect the robustness of AI algorithms against adversarial attacks.
This paper explores the effect of quantization, one of the most common compression techniques, on the adversarial robustness of neural networks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Reducing the size of neural network models is a critical step in moving AI
from a cloud-centric to an edge-centric (i.e. on-device) compute paradigm. This
shift from cloud to edge is motivated by a number of factors including reduced
latency, improved security, and higher flexibility of AI algorithms across
several application domains (e.g. transportation, healthcare, defense, etc.).
However, it is currently unclear how model compression techniques may affect
the robustness of AI algorithms against adversarial attacks. This paper
explores the effect of quantization, one of the most common compression
techniques, on the adversarial robustness of neural networks. Specifically, we
investigate and model the accuracy of quantized neural networks on
adversarially-perturbed images. Results indicate that for simple gradient-based
attacks, quantization can either improve or degrade adversarial robustness
depending on the attack strength.
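To make the setup concrete, here is a minimal sketch (not the authors' code) of the kind of experiment the abstract describes: a full-precision network and a uniformly weight-quantized copy are attacked with one-step FGSM at several strengths, and robust accuracy is compared. The quantization scheme, bit-width, and epsilon values below are illustrative assumptions.
```python
import copy
import torch
import torch.nn.functional as F

def quantize_weights(model, bits=4):
    # Uniform symmetric weight quantization -- one common simple scheme;
    # the paper's exact quantizer may differ.
    qmodel = copy.deepcopy(model)
    levels = 2 ** (bits - 1) - 1
    with torch.no_grad():
        for p in qmodel.parameters():
            scale = p.abs().max() / levels
            if scale > 0:
                p.copy_(torch.round(p / scale) * scale)
    return qmodel

def fgsm(model, x, y, eps):
    # One-step L-infinity attack: x' = x + eps * sign(grad_x loss),
    # assuming inputs normalized to [0, 1].
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def robust_accuracy(model, loader, eps):
    correct = total = 0
    for x, y in loader:
        x_adv = fgsm(model, x, y, eps)
        correct += (model(x_adv).argmax(1) == y).sum().item()
        total += y.numel()
    return correct / total

# Sweep attack strength for the two models:
# for eps in [0.0, 0.01, 0.03, 0.1]:
#     print(eps,
#           robust_accuracy(model, test_loader, eps),
#           robust_accuracy(quantize_weights(model, bits=4), test_loader, eps))
```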
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Dynamics-aware Adversarial Attack of Adaptive Neural Networks [75.50214601278455]
We investigate the dynamics-aware adversarial attack problem of adaptive neural networks.
We propose a Leaded Gradient Method (LGM) and show the significant effects of the lagged gradient.
Our LGM achieves impressive adversarial attack performance compared with the dynamic-unaware attack methods.
arXiv Detail & Related papers (2022-10-15T01:32:08Z) - Self-Healing Robust Neural Networks via Closed-Loop Control [23.360913637445964]
A typical self-healing mechanism is the immune system of the human body.
This paper considers the post-training self-healing of a neural network.
We propose a closed-loop control formulation to automatically detect and fix the errors caused by various attacks or perturbations.
arXiv Detail & Related papers (2022-06-26T20:25:35Z) - Understanding Adversarial Robustness from Feature Maps of Convolutional Layers [23.42376264664302]
The adversarial robustness of a neural network mainly relies on two factors: model capacity and anti-perturbation ability.
We study the anti-perturbation ability of the network from the feature maps of convolutional layers.
Non-trivial improvements in terms of both natural accuracy and adversarial robustness can be achieved under various attack and defense mechanisms.
arXiv Detail & Related papers (2022-02-25T00:14:59Z) - Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP): a perturbation that, after being updated through only a single gradient-ascent step, causes natural images to be misclassified with high probability.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
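A minimal sketch of the meta-perturbation idea as described in the summary above (an illustrative first-order reconstruction, not the authors' algorithm; the hyperparameters and the sign-based inner step are assumptions):
```python
import torch
import torch.nn.functional as F

def train_map(model, loader, eps=8/255, inner_lr=1/255, meta_lr=0.01, steps=500):
    # Learn an initial perturbation delta such that, after a single
    # gradient-ascent step on a new batch, the perturbed images are
    # misclassified with high probability.
    x0, _ = next(iter(loader))
    delta = torch.zeros_like(x0[0]).requires_grad_(True)  # one shared perturbation
    opt = torch.optim.Adam([delta], lr=meta_lr)
    data = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(data)
        except StopIteration:
            data = iter(loader)
            x, y = next(data)
        # Inner update: one gradient-ascent step of the perturbation
        # (gradient treated as a constant -- a first-order approximation).
        grad = torch.autograd.grad(F.cross_entropy(model(x + delta), y), delta)[0]
        adapted = (delta + inner_lr * grad.sign()).clamp(-eps, eps)
        # Outer update: make the *adapted* perturbation maximally harmful.
        loss = -F.cross_entropy(model(x + adapted), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return delta.detach()
```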
arXiv Detail & Related papers (2021-11-19T16:01:45Z) - A Layer-wise Adversarial-aware Quantization Optimization for Improving Robustness [4.794745827538956]
We find that adversarially-trained neural networks are more vulnerable to quantization loss than plain models.
We propose a layer-wise adversarial-aware quantization method, using the Lipschitz constant to choose the best quantization parameter settings for a neural network.
Experiment results show that our method can effectively and efficiently improve the robustness of quantized adversarially-trained neural networks.
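One plausible reading of the layer-wise idea, sketched below. The spectral norm as the Lipschitz estimate, the median split, and the 4/8-bit choices are assumptions for illustration, not the paper's method:
```python
import torch
import torch.nn as nn

def layer_lipschitz(weight):
    # Largest singular value of the flattened weight matrix: an upper
    # bound on how much the layer can amplify an input perturbation.
    w = weight.flatten(1) if weight.dim() > 2 else weight
    return torch.linalg.matrix_norm(w, ord=2).item()

def assign_bitwidths(model, low_bits=4, high_bits=8):
    consts = {name: layer_lipschitz(m.weight)
              for name, m in model.named_modules()
              if isinstance(m, (nn.Conv2d, nn.Linear))}
    median = sorted(consts.values())[len(consts) // 2]
    # Layers that amplify perturbations the most keep higher precision.
    return {name: (high_bits if c >= median else low_bits)
            for name, c in consts.items()}
```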
arXiv Detail & Related papers (2021-10-23T22:11:30Z) - Pruning in the Face of Adversaries [0.0]
We evaluate the impact of neural network pruning on the adversarial robustness against L0, L2, and L∞ attacks.
Our results confirm that neural network pruning and adversarial robustness are not mutually exclusive.
We extend our analysis to situations that incorporate additional assumptions on the adversarial scenario and show that depending on the situation, different strategies are optimal.
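For context, a hedged sketch of the kind of evaluation this abstract suggests. Global magnitude pruning is an assumption (the paper compares several strategies), and the robust_accuracy helper from the first sketch is reused:
```python
import copy
import torch.nn as nn
import torch.nn.utils.prune as prune

def magnitude_prune(model, amount=0.5):
    # Remove the globally smallest-magnitude weights, then bake the
    # resulting sparsity into the weight tensors.
    params = [(m, "weight") for m in model.modules()
              if isinstance(m, (nn.Conv2d, nn.Linear))]
    prune.global_unstructured(params, pruning_method=prune.L1Unstructured,
                              amount=amount)
    for m, name in params:
        prune.remove(m, name)
    return model

# for amount in [0.0, 0.5, 0.9]:
#     pruned = magnitude_prune(copy.deepcopy(model), amount)
#     print(amount, robust_accuracy(pruned, test_loader, eps=0.03))
```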
arXiv Detail & Related papers (2021-08-19T09:06:16Z) - Neural Architecture Dilation for Adversarial Robustness [56.18555072877193]
A shortcoming of convolutional neural networks is that they are vulnerable to adversarial attacks.
This paper aims to improve the adversarial robustness of the backbone CNNs that have a satisfactory accuracy.
With minimal computational overhead, the dilation architecture is expected to preserve the standard performance of the backbone CNN.
arXiv Detail & Related papers (2021-08-16T03:58:00Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different types of unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV plays as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z) - Improved Gradient based Adversarial Attacks for Quantized Networks [15.686134908061995]
We show that quantized networks suffer from gradient vanishing and give a false sense of robustness.
By attributing gradient vanishing to poor forward-backward signal propagation in the trained network, we introduce a simple temperature scaling approach to mitigate this issue.
arXiv Detail & Related papers (2020-03-30T14:34:08Z)
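The temperature-scaling fix can be sketched in a few lines (an illustration of the general idea, not the authors' exact attack; the temperature value and the FGSM wrapper are assumptions): dividing the logits by a temperature T > 1 keeps the softmax out of its saturated region, so the attack sees a usable gradient.
```python
import torch
import torch.nn.functional as F

def fgsm_with_temperature(model, x, y, eps=0.03, T=10.0):
    # Scale logits down by T before the loss; a saturated softmax
    # (near one-hot) would otherwise yield vanishing input gradients.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x) / T, y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```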
This list is automatically generated from the titles and abstracts of the papers in this site.