Quantization Backdoors to Deep Learning Commercial Frameworks
- URL: http://arxiv.org/abs/2108.09187v3
- Date: Thu, 27 Apr 2023 06:08:27 GMT
- Title: Quantization Backdoors to Deep Learning Commercial Frameworks
- Authors: Hua Ma, Huming Qiu, Yansong Gao, Zhi Zhang, Alsharif Abuadbba, Minhui
Xue, Anmin Fu, Zhang Jiliang, Said Al-Sarawi, Derek Abbott
- Abstract summary: We show that the standard quantization toolkits can be abused to activate a backdoor.
This work highlights that a stealthy security threat occurs when an end user utilizes the on-device post-training model quantization frameworks.
- Score: 16.28615808834053
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Currently, there is a burgeoning demand for deploying deep learning (DL)
models on ubiquitous edge Internet of Things (IoT) devices attributed to their
low latency and high privacy preservation. However, DL models are often large
in size and require large-scale computation, which prevents them from being
placed directly onto IoT devices, where resources are constrained and 32-bit
floating-point (float-32) operations are unavailable. Model quantization,
empowered by commercial frameworks (i.e., sets of toolkits), is a pragmatic
solution that enables DL deployment on mobile devices and embedded systems by
effortlessly post-quantizing a large high-precision model (e.g., float-32) into
a small low-precision model (e.g., int-8) while retaining the model's inference
accuracy. However, the usability of these frameworks may be threatened by
security vulnerabilities.
This work reveals that the standard quantization toolkits can be abused to
activate a backdoor. We demonstrate that a full-precision backdoored model
that exhibits no backdoor effect even in the presence of a trigger -- as the
backdoor is dormant -- can be activated by the default i) TensorFlow-Lite
(TFLite) quantization, the only product-ready quantization framework to date,
and ii) the beta released PyTorch Mobile framework. When each of the float-32
models is converted into an int-8 format model through the standard TFLite or
PyTorch Mobile framework's post-training quantization, the backdoor is
activated in the quantized model, which shows a stable attack success rate
close to 100% upon inputs with the trigger, while it behaves normally upon
non-trigger inputs. This work highlights a stealthy security threat that
arises when an end user applies on-device post-training model quantization
frameworks, and it calls on security researchers to perform cross-platform
inspection of DL models after quantization, even if these models pass
front-end backdoor inspections.
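The mechanism the abstract describes rests on the small, deterministic rounding error that post-training quantization introduces into every weight. As a rough, framework-agnostic sketch (not the paper's attack code, and simplified relative to the actual TFLite/PyTorch Mobile schemes), symmetric per-tensor int-8 quantization maps each float-32 weight w to round(w / scale) with scale = max|w| / 127; the function names below are illustrative, not from any library:

```python
def quantize_sym_int8(weights):
    """Symmetric per-tensor int-8 quantization sketch (zero point fixed
    at 0), as commonly used for weight tensors in post-training
    quantization toolkits. Simplified: real frameworks quantize
    per-channel and also quantize activations."""
    scale = max(abs(w) for w in weights) / 127.0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return scale, q

def dequantize(scale, q):
    """Map int-8 codes back to the approximate float-32 weights the
    quantized model effectively computes with."""
    return [scale * qi for qi in q]

# Weights 0.004 and 0.006 differ by only 0.002 yet land on different
# int-8 codes (0 and 1). Each weight absorbs a rounding error of up to
# scale/2; a quantization backdoor is a model crafted so that exactly
# these bounded perturbations flip its behavior on triggered inputs.
scale, q = quantize_sym_int8([0.5, -1.27, 0.004, 0.006])
```

The attack surface, per the abstract, is that a defender who inspects only the float-32 model sees benign behavior; the backdoor materializes only after this lossy rounding step is applied by the deployment toolkit.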
Related papers
- Model-less Is the Best Model: Generating Pure Code Implementations to Replace On-Device DL Models [29.635329143403368]
Deployed deep learning (DL) models can be easily extracted from real-world applications and devices by attackers.
Traditional software protection techniques have been widely explored; if on-device models could be implemented in pure code, such as C++, this would open the possibility of reusing existing software protection techniques.
We propose a novel method, CustomDLCoder, to automatically extract the on-device model information and synthesize a customized executable program.
arXiv Detail & Related papers (2024-03-25T07:06:53Z)
- Model X-ray: Detect Backdoored Models via Decision Boundary [66.41173675107886]
Deep neural networks (DNNs) have revolutionized various industries, leading to the rise of Machine Learning as a Service (MLaaS).
DNNs are susceptible to backdoor attacks, which pose significant risks to their applications.
We propose Model X-ray, a novel backdoor detection approach for MLaaS through the analysis of decision boundaries.
arXiv Detail & Related papers (2024-02-27T12:42:07Z)
- Watermarking LLMs with Weight Quantization [61.63899115699713]
This paper proposes a novel watermarking strategy that plants watermarks in the quantization process of large language models.
We successfully plant the watermark into open-source large language model weights including GPT-Neo and LLaMA.
arXiv Detail & Related papers (2023-10-17T13:06:59Z)
- Fault Injection and Safe-Error Attack for Extraction of Embedded Neural Network Models [1.3654846342364308]
We focus on embedded deep neural network models on 32-bit microcontrollers in the Internet of Things (IoT).
We propose a black-box approach to craft a successful attack set.
For a classical convolutional neural network, we successfully recover at least 90% of the most significant bits with about 1500 crafted inputs.
arXiv Detail & Related papers (2023-08-31T13:09:33Z)
- One-bit Flip is All You Need: When Bit-flip Attack Meets Model Training [54.622474306336635]
A new weight modification attack called bit flip attack (BFA) was proposed, which exploits memory fault injection techniques.
We propose a training-assisted bit flip attack, in which the adversary is involved in the training stage to build a high-risk model to release.
arXiv Detail & Related papers (2023-08-12T09:34:43Z)
- QuMoS: A Framework for Preserving Security of Quantum Machine Learning Model [10.543277412560233]
Security has always been a critical issue in machine learning (ML) applications.
Model-stealing attack is one of the most fundamental but vitally important issues.
We propose a novel framework, namely QuMoS, to preserve model security.
arXiv Detail & Related papers (2023-04-23T01:17:43Z)
- Publishing Efficient On-device Models Increases Adversarial Vulnerability [58.6975494957865]
In this paper, we study the security considerations of publishing on-device variants of large-scale models.
We first show that an adversary can exploit on-device models to make attacking the large models easier.
We then show that the vulnerability increases as the similarity between a full-scale model and its efficient variant increases.
arXiv Detail & Related papers (2022-12-28T05:05:58Z)
- Backdoor Attacks on Crowd Counting [63.90533357815404]
Crowd counting is a regression task that estimates the number of people in a scene image.
In this paper, we investigate the vulnerability of deep learning based crowd counting models to backdoor attacks.
arXiv Detail & Related papers (2022-07-12T16:17:01Z)
- DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection [26.593268413299228]
Federated Learning (FL) allows multiple clients to collaboratively train a Neural Network (NN) model on their private data without revealing the data.
DeepSight is a novel model filtering approach for mitigating backdoor attacks.
We show that it can mitigate state-of-the-art backdoor attacks with a negligible impact on the model's performance on benign data.
arXiv Detail & Related papers (2022-01-03T17:10:07Z)
- Qu-ANTI-zation: Exploiting Quantization Artifacts for Achieving Adversarial Outcomes [5.865029600972316]
Quantization is a technique that transforms the parameter representation of a neural network from floating-point numbers into lower-precision ones.
We propose a new training framework to implement adversarial quantization outcomes.
We show that a single compromised model defeats multiple quantization schemes.
arXiv Detail & Related papers (2021-10-26T10:09:49Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above (including all listed details) and is not responsible for any consequences of its use.