RobustMQ: Benchmarking Robustness of Quantized Models
- URL: http://arxiv.org/abs/2308.02350v1
- Date: Fri, 4 Aug 2023 14:37:12 GMT
- Title: RobustMQ: Benchmarking Robustness of Quantized Models
- Authors: Yisong Xiao, Aishan Liu, Tianyuan Zhang, Haotong Qin, Jinyang Guo,
Xianglong Liu
- Abstract summary: Quantization is an essential technique for deploying deep neural networks (DNNs) on devices with limited resources.
We thoroughly evaluated the robustness of quantized models against various noises (adversarial attacks, natural corruptions, and systematic noises) on ImageNet.
Our research contributes to advancing the robust quantization of models and their deployment in real-world scenarios.
- Score: 54.15661421492865
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantization has emerged as an essential technique for deploying deep neural
networks (DNNs) on devices with limited resources. However, quantized models
exhibit vulnerabilities when exposed to various noises in real-world
applications. Despite the importance of evaluating the impact of quantization
on robustness, existing research on this topic is limited and often disregards
established principles of robustness evaluation, resulting in incomplete and
inconclusive findings. To address this gap, we thoroughly evaluated the
robustness of quantized models against various noises (adversarial attacks,
natural corruptions, and systematic noises) on ImageNet. The comprehensive
evaluation results empirically provide valuable insights into the robustness of
quantized models in various scenarios, for example: (1) quantized models
exhibit higher adversarial robustness than their floating-point counterparts,
but are more vulnerable to natural corruptions and systematic noises; (2) in
general, increasing the quantization bit-width results in a decrease in
adversarial robustness, an increase in natural robustness, and an increase in
systematic robustness; (3) among corruption methods, \textit{impulse noise} and
\textit{glass blur} are the most harmful to quantized models, while
\textit{brightness} has the least impact; (4) among systematic noises, the
\textit{nearest neighbor interpolation} has the highest impact, while bilinear
interpolation, cubic interpolation, and area interpolation are the three least
harmful. Our research contributes to advancing the robust quantization of
models and their deployment in real-world scenarios.
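To make the evaluation concrete, here is a minimal sketch (assuming PyTorch and torchvision; the 4-bit symmetric per-tensor quantizer, the FGSM strength, and the random stand-in batch are illustrative choices, not the benchmark's released code). It fake-quantizes a copy of a ResNet-18, compares both models under a one-step FGSM attack, and shows where systematic noise comes from: two interpolation modes resize the same image to slightly different tensors.

```python
import copy
import torch
import torch.nn.functional as F
import torchvision.models as models

def fake_quantize_(model, bits):
    """Symmetric per-tensor weight quantization: w -> round(w / s) * s."""
    qmax = 2 ** (bits - 1) - 1
    with torch.no_grad():
        for p in model.parameters():
            s = p.abs().max() / qmax                 # per-tensor scale
            if s > 0:
                p.copy_(torch.round(p / s).clamp_(-qmax, qmax) * s)

def fgsm(model, x, y, eps):
    """One-step FGSM attack: x_adv = x + eps * sign(grad_x loss)."""
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

fp_model = models.resnet18(weights=None).eval()  # load real ImageNet weights in practice
q_model = copy.deepcopy(fp_model)
fake_quantize_(q_model, bits=4)                  # lower bit-width -> coarser weights

x = torch.randn(8, 3, 224, 224)                  # stand-in for an ImageNet batch
y = torch.randint(0, 1000, (8,))
for name, m in [("fp32", fp_model), ("4-bit", q_model)]:
    acc = (m(fgsm(m, x, y, eps=4 / 255)).argmax(1) == y).float().mean().item()
    print(f"{name}: accuracy under FGSM = {acc:.2%}")

# Systematic noise: resizing the same image with different interpolation modes
# yields slightly different tensors, which alone can flip a model's prediction.
img = torch.rand(1, 3, 256, 256)
nearest = F.interpolate(img, size=224, mode="nearest")
bilinear = F.interpolate(img, size=224, mode="bilinear", align_corners=False)
print("mean |nearest - bilinear|:", (nearest - bilinear).abs().mean().item())
```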
Related papers
- Investigating the Impact of Quantization on Adversarial Robustness [22.637585106574722]
Quantization is a technique for reducing the bit-width of deep models to improve their runtime performance and storage efficiency.
In real-world scenarios, quantized models are often faced with adversarial attacks which cause the model to make incorrect inferences.
We conduct a first-of-its-kind analysis of the impact of the quantization pipeline components that can incorporate robust optimization.
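For intuition about what reducing the bit-width does, here is a minimal worked sketch of symmetric uniform quantization on a single weight (the quantizer and the example value are illustrative assumptions, not this paper's pipeline):

```python
# Symmetric uniform quantization of one weight: w -> round(w / s) * s,
# where s = w_max / (2^(bits-1) - 1). Fewer bits -> larger rounding error.
def quantize(w: float, bits: int, w_max: float = 1.0) -> float:
    qmax = 2 ** (bits - 1) - 1
    scale = w_max / qmax
    return round(w / scale) * scale

w = 0.4217
for bits in (8, 4, 2):
    wq = quantize(w, bits)
    print(f"{bits}-bit: {wq:.4f} (rounding error {abs(w - wq):.4f})")
```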
arXiv Detail & Related papers (2024-04-08T16:20:15Z)
- The Risk of Federated Learning to Skew Fine-Tuning Features and Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning has the risk of skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robustness datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Benchmarking the Robustness of Quantized Models [12.587947681480909]
Quantization is an essential technique for deploying deep neural networks (DNNs) on devices with limited resources.
Existing research on this topic is limited and often disregards established principles of evaluation.
Our research contributes to advancing the robust quantization of models and their deployment in real-world scenarios.
arXiv Detail & Related papers (2023-04-08T09:34:55Z)
- Fairness Increases Adversarial Vulnerability [50.90773979394264]
This paper shows the existence of a dichotomy between fairness and robustness, and analyzes when achieving fairness decreases the model robustness to adversarial samples.
Experiments on non-linear models and different architectures validate the theoretical findings in multiple vision domains.
The paper proposes a simple, yet effective, solution to construct models achieving good tradeoffs between fairness and robustness.
arXiv Detail & Related papers (2022-11-21T19:55:35Z)
- Mixed-Precision Inference Quantization: Radically Towards Faster inference speed, Lower Storage requirement, and Lower Loss [4.877532217193618]
Existing quantization techniques rely heavily on experience and "fine-tuning" skills.
This study provides a methodology for acquiring a mixed-precision quantization model with a lower loss than the full-precision model.
In particular, we will demonstrate that neural networks with massive identity mappings are resistant to the quantization method.
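A minimal sketch of the mixed-precision idea (the sensitivity proxy, the toy model, and the bit-width threshold below are illustrative assumptions, not the paper's methodology): layers whose quantization hurts the loss most keep a higher bit-width.

```python
import torch
import torch.nn as nn

def layer_sensitivity(model, layer, x, y, bits=4):
    """Loss increase when only `layer`'s weights are quantized to `bits`."""
    loss_fn = nn.CrossEntropyLoss()
    base = loss_fn(model(x), y).item()
    saved = layer.weight.detach().clone()
    qmax = 2 ** (bits - 1) - 1
    scale = saved.abs().max() / qmax
    with torch.no_grad():
        layer.weight.copy_(torch.round(saved / scale).clamp_(-qmax, qmax) * scale)
    quantized = loss_fn(model(x), y).item()
    with torch.no_grad():
        layer.weight.copy_(saved)            # restore original weights
    return quantized - base

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 10))
x, y = torch.randn(64, 16), torch.randint(0, 10, (64,))
for i, layer in enumerate(m for m in model if isinstance(m, nn.Linear)):
    sens = layer_sensitivity(model, layer, x, y)
    bits = 8 if sens > 0.01 else 4           # threshold is an arbitrary illustration
    print(f"linear layer {i}: loss increase {sens:+.4f} -> assign {bits}-bit")
```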
arXiv Detail & Related papers (2022-07-20T10:55:34Z)
- Quantifying Robustness to Adversarial Word Substitutions [24.164523751390053]
Deep-learning-based NLP models are found to be vulnerable to word substitution perturbations.
We propose a formal framework to evaluate word-level robustness.
The proposed metric helps explain why state-of-the-art models like BERT can be easily fooled by a few word substitutions.
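A toy sketch of word-level robustness evaluation (the stand-in classifier and synonym set are illustrative, not the paper's formal framework): a prediction counts as robust only if no combination of allowed substitutions flips it.

```python
# Exhaustively enumerate synonym substitutions and check whether any
# combination changes the prediction of a stand-in keyword classifier.
from itertools import product

SYNONYMS = {"good": ["decent", "passable"]}
POSITIVE = {"good", "decent"}                # words the toy classifier knows

def predict(sentence: str) -> int:
    """Stand-in classifier: 1 iff the sentence contains a positive keyword."""
    return int(any(w in POSITIVE for w in sentence.split()))

def is_robust(sentence: str) -> bool:
    words = sentence.split()
    options = [[w] + SYNONYMS.get(w, []) for w in words]
    label = predict(sentence)
    return all(predict(" ".join(v)) == label for v in product(*options))

print(is_robust("a good movie"))  # False: swapping in "passable" flips the label
```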
arXiv Detail & Related papers (2022-01-11T08:18:39Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets for combinatorial problems only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Non-Singular Adversarial Robustness of Neural Networks [58.731070632586594]
Adversarial robustness has become an emerging challenge for neural networks owing to their over-sensitivity to small input perturbations.
We formalize the notion of non-singular adversarial robustness for neural networks through the lens of joint perturbations to data inputs as well as model weights.
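The notion can be sketched as a joint worst-case objective; the norms and radii below are illustrative placeholders rather than the paper's exact formalization:

```latex
% Joint perturbations \delta_x (input) and \delta_w (weights): the model is
% non-singularly robust if the loss stays controlled under the joint worst case.
\max_{\|\delta_x\| \le \epsilon_x,\ \|\delta_w\| \le \epsilon_w}
  \mathcal{L}\bigl(f_{w+\delta_w}(x+\delta_x),\, y\bigr)
```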
arXiv Detail & Related papers (2021-02-23T20:59:30Z)
- Multiplicative noise and heavy tails in stochastic optimization [62.993432503309485]
Stochastic optimization is central to modern machine learning, but its precise role in that success is still unclear.
We show that heavy-tailed behavior commonly arises in model parameters due to multiplicative noise in the optimization dynamics.
A detailed analysis of key factors, including step size and data, shows that state-of-the-art neural network models all exhibit similar results.
arXiv Detail & Related papers (2020-06-11T09:58:01Z)