No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN
Partition for On-Device ML
- URL: http://arxiv.org/abs/2310.07152v1
- Date: Wed, 11 Oct 2023 02:54:52 GMT
- Title: No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN
Partition for On-Device ML
- Authors: Ziqi Zhang, Chen Gong, Yifeng Cai, Yuanyuan Yuan, Bingyan Liu, Ding
Li, Yao Guo, Xiangqun Chen
- Abstract summary: We show that existing TSDP solutions are vulnerable to privacy-stealing attacks and are not as safe as commonly believed.
We present TEESlice, a novel TSDP method that defends against MS and MIA during DNN inference.
- Score: 28.392497220631032
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: On-device ML introduces new security challenges: DNN models become white-box
accessible to device users. Based on white-box information, adversaries can
conduct effective model stealing (MS) and membership inference attack (MIA).
Using Trusted Execution Environments (TEEs) to shield on-device DNN models aims
to downgrade (easy) white-box attacks to (harder) black-box attacks. However,
one major shortcoming is the sharply increased latency (up to 50X). To
accelerate TEE-shielded DNN computation with GPUs, researchers have proposed several
model partition techniques. These solutions, referred to as TEE-Shielded DNN
Partition (TSDP), partition a DNN model into two parts, offloading the
privacy-insensitive part to the GPU while shielding the privacy-sensitive part
within the TEE. This paper benchmarks existing TSDP solutions using both MS and
MIA across a variety of DNN models, datasets, and metrics. Our findings show
that existing TSDP solutions are vulnerable to privacy-stealing attacks and are
not as safe as commonly believed. We also unveil the inherent
difficulty in deciding optimal DNN partition configurations (i.e., the highest
security with minimal utility cost) for present TSDP solutions. The experiments
show that such ``sweet spot'' configurations vary across datasets and models.
Based on lessons harvested from the experiments, we present TEESlice, a novel
TSDP method that defends against MS and MIA during DNN inference. TEESlice
follows a partition-before-training strategy, which allows for accurate
separation of privacy-related weights from public weights. TEESlice
delivers the same security protection as shielding the entire DNN model inside
TEE (the ``upper-bound'' security guarantees) with over 10X less overhead (in
both experimental and real-world environments) than prior TSDP solutions and no
accuracy loss.
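To make the partition concrete, below is a minimal Python/PyTorch sketch of the TSDP idea described in the abstract: a privacy-insensitive backbone runs on the untrusted GPU side, while a privacy-sensitive head stays inside the TEE. The class names, layer shapes, and the simulated in-process "TEE" are illustrative assumptions, not the paper's implementation; a real deployment would run the shielded part inside an enclave (e.g., SGX or TrustZone) and move activations over a protected channel.
```python
# Illustrative sketch of TEE-Shielded DNN Partition (TSDP); not the paper's code.
# The "TEE" side is simulated by a plain module running in-process.
import torch
import torch.nn as nn


class GPUPublicPart(nn.Module):
    """Privacy-insensitive layers, offloaded to the (untrusted) GPU."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )

    def forward(self, x):
        return self.features(x)


class TEEPrivatePart(nn.Module):
    """Privacy-sensitive layers (e.g., the classifier head), kept inside the TEE."""

    def __init__(self, num_classes=10):
        super().__init__()
        self.head = nn.Linear(32, num_classes)

    def forward(self, z):
        return self.head(z)


def tsdp_inference(x, gpu_part, tee_part):
    # Untrusted side: run the public layers and hand over intermediate activations.
    z = gpu_part(x)
    # Trusted side: only the TEE ever touches the shielded weights.
    return tee_part(z.cpu())


if __name__ == "__main__":
    gpu_part, tee_part = GPUPublicPart(), TEEPrivatePart()
    logits = tsdp_inference(torch.randn(1, 3, 32, 32), gpu_part, tee_part)
    print(logits.shape)  # torch.Size([1, 10])
```
In this setting the adversary observes only the offloaded backbone's weights and the exchanged activations; the paper's benchmark measures how much model-stealing and membership-inference power that exposure still provides.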
Related papers
- TEESlice: Protecting Sensitive Neural Network Models in Trusted Execution Environments When Attackers have Pre-Trained Models [12.253529209143197]
TSDP is a method that protects privacy-sensitive weights within TEEs and offloads insensitive weights to GPUs.
We introduce a novel partition-before-training strategy, which effectively separates privacy-sensitive weights from other components of the model.
Our evaluation demonstrates that our approach can offer full model protection with a computational cost reduced by a factor of 10.
arXiv Detail & Related papers (2024-11-15T04:52:11Z)
- DNNShield: Embedding Identifiers for Deep Neural Network Ownership Verification [46.47446944218544]
This paper introduces DNNShield, a novel approach for protecting Deep Neural Networks (DNNs).
DNNShield embeds unique identifiers within the model architecture using specialized protection layers.
We validate the effectiveness and efficiency of DNNShield through extensive evaluations across three datasets and four model architectures.
arXiv Detail & Related papers (2024-03-11T10:27:36Z)
- MatchNAS: Optimizing Edge AI in Sparse-Label Data Contexts via Automating Deep Neural Network Porting for Mobile Deployment [54.77943671991863]
MatchNAS is a novel scheme for porting Deep Neural Networks to mobile devices.
We optimise a large network family using both labelled and unlabelled data.
We then automatically search for tailored networks for different hardware platforms.
arXiv Detail & Related papers (2024-02-21T04:43:12Z)
- Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing [57.49021927832259]
Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary results in many scenarios.
However, their intricate designs and lack of transparency raise safety concerns when applied in real-world applications.
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide provable guarantees on the safety aspect.
arXiv Detail & Related papers (2023-12-10T13:51:25Z)
- MirrorNet: A TEE-Friendly Framework for Secure On-device DNN Inference [14.08010398777227]
Deep neural network (DNN) models have become prevalent in edge devices for real-time inference.
Existing defense approaches fail to fully safeguard model confidentiality or result in significant latency issues.
This paper presents MirrorNet, which generates a TEE-friendly implementation for any given DNN model to protect the model confidentiality.
In the evaluation, MirrorNet achieves an 18.6% accuracy gap between authenticated and illegal use while introducing only 0.99% hardware overhead.
arXiv Detail & Related papers (2023-11-16T01:21:19Z)
- Enumerating Safe Regions in Deep Neural Networks with Provable Probabilistic Guarantees [86.1362094580439]
We introduce the AllDNN-Verification problem: given a safety property and a DNN, enumerate all regions of the property's input domain that are safe.
Due to the #P-hardness of the problem, we propose an efficient approximation method called epsilon-ProVe.
Our approach exploits a controllable underestimation of the output reachable sets obtained via statistical prediction of tolerance limits.
arXiv Detail & Related papers (2023-08-18T22:30:35Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that result in a violation of a safety property (see the formal sketch after this list).
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Robust and Lossless Fingerprinting of Deep Neural Networks via Pooled Membership Inference [17.881686153284267]
Deep neural networks (DNNs) have achieved great success in many application areas and brought profound changes to our society.
Protecting the intellectual property (IP) of DNNs against infringement is an important yet very challenging problem.
This paper proposes a novel technique called pooled membership inference (PMI) to protect the IP of DNN models.
arXiv Detail & Related papers (2022-09-09T04:06:29Z)
- RL-DistPrivacy: Privacy-Aware Distributed Deep Inference for low latency IoT systems [41.1371349978643]
We present an approach that targets the security of collaborative deep inference via re-thinking the distribution strategy.
We formulate this methodology as an optimization that establishes a trade-off between the latency of co-inference and the privacy level of the data.
arXiv Detail & Related papers (2022-08-27T14:50:00Z)
- Deep Serial Number: Computational Watermarking for DNN Intellectual Property Protection [53.40245698216239]
DSN (Deep Serial Number) is a watermarking algorithm designed specifically for deep neural networks (DNNs).
Inspired by serial numbers in safeguarding conventional software IP, we propose the first implementation of serial number embedding within DNNs.
arXiv Detail & Related papers (2020-11-17T21:42:40Z)
- DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution Environments [37.84943219784536]
We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) to limit the attack surface against Deep Neural Networks (DNNs).
We evaluate the performance of DarkneTZ, including CPU execution time, memory usage, and accurate power consumption.
Our results show that even if a single layer is hidden, we can provide reliable model privacy and defend against state-of-the-art MIAs, with only 3% performance overhead.
arXiv Detail & Related papers (2020-04-12T21:42:03Z)
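As referenced in the #DNN-Verification entry above, the counting problem those papers study can be written compactly as below. The notation (N for the network, P for the safety property, X for the input domain) is an illustrative assumption rather than the cited papers' own formalism.
```latex
% One possible formalization of the #DNN-Verification counting problem
% described above (notation is illustrative, not taken from the cited papers).
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
\[
  \#\text{DNN-Ver}(\mathcal{N}, P)
    \;=\;
  \bigl|\{\, x \in \mathcal{X} \;:\; \mathcal{N}(x) \not\models P \,\}\bigr|
\]
% i.e., the number of input configurations x for which the network's
% output violates the safety property P.
\end{document}
```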