DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution
Environments
- URL: http://arxiv.org/abs/2004.05703v1
- Date: Sun, 12 Apr 2020 21:42:03 GMT
- Title: DarkneTZ: Towards Model Privacy at the Edge using Trusted Execution
Environments
- Authors: Fan Mo, Ali Shahin Shamsabadi, Kleomenis Katevas, Soteris Demetriou,
Ilias Leontiadis, Andrea Cavallaro, Hamed Haddadi
- Abstract summary: We present DarkneTZ, a framework that uses an edge device's Trusted Execution Environment (TEE) to limit the attack surface against Deep Neural Networks (DNNs).
We evaluate the performance of DarkneTZ, including CPU execution time, memory usage, and accurate power consumption.
Our results show that even if a single layer is hidden, we can provide reliable model privacy and defend against state-of-the-art MIAs, with only 3% performance overhead.
- Score: 37.84943219784536
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present DarkneTZ, a framework that uses an edge device's Trusted Execution
Environment (TEE) in conjunction with model partitioning to limit the attack
surface against Deep Neural Networks (DNNs). Increasingly, edge devices
(smartphones and consumer IoT devices) are equipped with pre-trained DNNs for a
variety of applications. This trend comes with privacy risks as models can leak
information about their training data through effective membership inference
attacks (MIAs). We evaluate the performance of DarkneTZ, including CPU
execution time, memory usage, and accurate power consumption, using two small
and six large image classification models. Due to the limited memory of the
edge device's TEE, we partition model layers into more sensitive layers (to be
executed inside the device TEE), and a set of layers to be executed in the
untrusted part of the operating system. Our results show that even if a single
layer is hidden, we can provide reliable model privacy and defend against
state-of-the-art MIAs, with only 3% performance overhead. When fully utilizing the
TEE, DarkneTZ provides model protections with up to 10% overhead.
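To make the partitioning scheme concrete, below is a minimal Python sketch of a partitioned forward pass: layers before a chosen partition index run in the untrusted OS (normal world), while the remaining layers run behind a trusted boundary. The run_in_tee function, the toy numpy layers, and the choice to hide only the final layer are illustrative assumptions for this sketch, not DarkneTZ's actual implementation (which targets Arm TrustZone on edge devices).

```python
# Illustrative sketch of TEE/normal-world layer partitioning (not DarkneTZ's code).
# Layers [0, k) run in the untrusted OS; layers [k, N) run "inside the TEE",
# represented here by the hypothetical run_in_tee() boundary.
from typing import Callable, List

import numpy as np


def run_in_tee(layers: List[Callable], x: np.ndarray) -> np.ndarray:
    """Stand-in for executing the protected layers inside the secure world."""
    for layer in layers:
        x = layer(x)
    return x


def partitioned_forward(layers: List[Callable], x: np.ndarray, k: int) -> np.ndarray:
    """Forward pass split at partition index k."""
    for layer in layers[:k]:          # normal world (untrusted OS)
        x = layer(x)
    return run_in_tee(layers[k:], x)  # secure world (TEE)


# Toy model: four ReLU layers followed by a softmax output layer.
rng = np.random.default_rng(0)
weights = [rng.standard_normal((8, 8)) for _ in range(4)]
layers = [lambda x, W=W: np.maximum(x @ W, 0.0) for W in weights]
layers.append(lambda x: np.exp(x) / np.exp(x).sum(axis=-1, keepdims=True))

# Hide only the last layer, reflecting the abstract's finding that protecting
# a single layer already defends against the evaluated MIAs at ~3% overhead.
probs = partitioned_forward(layers, rng.standard_normal((1, 8)), k=len(layers) - 1)
```

In the real system the secure-world call crosses into a TEE trusted application, so the partition index trades privacy (more layers hidden) against the TEE's limited memory and the reported 3-10% overhead.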
Related papers
- TEESlice: Protecting Sensitive Neural Network Models in Trusted Execution Environments When Attackers have Pre-Trained Models [12.253529209143197]
TSDP is a method that protects privacy-sensitive weights within TEEs and offloads insensitive weights to GPUs.
We introduce a novel partition before training strategy, which effectively separates privacy-sensitive weights from other components of the model.
Our evaluation demonstrates that our approach can offer full model protection with a computational cost reduced by a factor of 10.
arXiv Detail & Related papers (2024-11-15T04:52:11Z)
- Privacy preserving layer partitioning for Deep Neural Network models [0.21470800327528838]
Trusted Execution Environments (TEEs) can introduce significant performance overhead due to additional layers of encryption, decryption, security and integrity checks.
We introduce layer partitioning technique and offloading computations to GPU.
We conduct experiments to demonstrate the effectiveness of our approach in protecting against input reconstruction attacks developed using a trained conditional Generative Adversarial Network (c-GAN).
arXiv Detail & Related papers (2024-04-11T02:39:48Z)
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Memory-Efficient and Secure DNN Inference on TrustZone-enabled Consumer IoT Devices [9.928745904761358]
Edge intelligence enables resource-demanding Deep Neural Network (DNN) inference without transferring original data.
For privacy-sensitive applications, deploying models in hardware-isolated trusted execution environments (TEEs) becomes essential.
We present a novel approach for advanced model deployment in TrustZone that ensures comprehensive privacy preservation during model inference.
arXiv Detail & Related papers (2024-03-19T09:22:50Z)
- No Privacy Left Outside: On the (In-)Security of TEE-Shielded DNN Partition for On-Device ML [28.392497220631032]
We show that existing TSDP solutions are vulnerable to privacy-stealing attacks and are not as safe as commonly believed.
We present TEESlice, a novel TSDP method that defends against model stealing (MS) and membership inference attacks (MIA) during DNN inference (a minimal MIA sketch appears after this list).
arXiv Detail & Related papers (2023-10-11T02:54:52Z)
- Shielding the Unseen: Privacy Protection through Poisoning NeRF with Spatial Deformation [59.302770084115814]
We introduce an innovative method of safeguarding user privacy against the generative capabilities of Neural Radiance Fields (NeRF) models.
Our novel poisoning attack method induces changes to observed views that are imperceptible to the human eye, yet potent enough to disrupt NeRF's ability to accurately reconstruct a 3D scene.
We extensively test our approach on two common NeRF benchmark datasets consisting of 29 real-world scenes with high-quality images.
arXiv Detail & Related papers (2023-10-04T19:35:56Z)
- Shielding Federated Learning Systems against Inference Attacks with ARM TrustZone [0.0]
Federated Learning (FL) opens new perspectives for training machine learning models while keeping personal data on the users' premises.
The long list of inference attacks that leak private data from gradients, published in recent years, has emphasized the need to devise effective protection mechanisms.
We present GradSec, a solution that allows protecting in a TEE only sensitive layers of a machine learning model.
arXiv Detail & Related papers (2022-08-11T15:53:07Z)
- Few-Shot Backdoor Attacks on Visual Object Tracking [80.13936562708426]
Visual object tracking (VOT) has been widely adopted in mission-critical applications, such as autonomous driving and intelligent surveillance systems.
We show that an adversary can easily implant hidden backdoors into VOT models by tampering with the training process.
We show that our attack is resistant to potential defenses, highlighting the vulnerability of VOT models to backdoor attacks.
arXiv Detail & Related papers (2022-01-31T12:38:58Z)
- Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises [87.53808756910452]
A cooling-shrinking attack method is proposed to deceive state-of-the-art SiameseRPN-based trackers.
Our method has good transferability and is able to deceive other top-performance trackers such as DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP.
arXiv Detail & Related papers (2020-03-21T07:13:40Z)
- A Privacy-Preserving-Oriented DNN Pruning and Mobile Acceleration Framework [56.57225686288006]
Weight pruning of deep neural networks (DNNs) has been proposed to satisfy the limited storage and computing capability of mobile edge devices.
Previous pruning methods mainly focus on reducing the model size and/or improving performance without considering the privacy of user data.
We propose a privacy-preserving-oriented pruning and mobile acceleration framework that does not require the private training dataset.
arXiv Detail & Related papers (2020-03-13T23:52:03Z)
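The membership inference attacks (MIAs) referenced throughout exploit the fact that models behave differently on training members than on unseen data. Below is a minimal, generic loss-threshold MIA sketch; the threshold value, model outputs, and calibration strategy are illustrative assumptions and are not taken from any paper in this list.

```python
# Generic loss-threshold membership inference sketch (illustrative only).
# Intuition: overfitted models assign lower loss to training ("member")
# examples than to unseen ("non-member") examples, so comparing the
# per-example loss against a threshold predicts membership.
import numpy as np


def cross_entropy(probs: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Per-example cross-entropy loss from predicted class probabilities."""
    eps = 1e-12
    return -np.log(probs[np.arange(len(labels)), labels] + eps)


def infer_membership(probs: np.ndarray, labels: np.ndarray,
                     threshold: float) -> np.ndarray:
    """Predict 'member' (True) when an example's loss falls below the threshold.
    In practice the threshold would be calibrated on shadow models or held-out data."""
    return cross_entropy(probs, labels) < threshold


# Toy usage with made-up model outputs (3 classes, 4 query examples).
probs = np.array([[0.90, 0.05, 0.05],   # confident & correct -> likely member
                  [0.40, 0.35, 0.25],   # uncertain           -> likely non-member
                  [0.10, 0.80, 0.10],
                  [0.34, 0.33, 0.33]])
labels = np.array([0, 0, 1, 2])
print(infer_membership(probs, labels, threshold=0.5))
```

Stronger attacks calibrate the threshold with shadow models or train an attack classifier, but the underlying signal is the same confidence/loss gap whose observability TEE-based defenses such as DarkneTZ, GradSec, and TEESlice aim to restrict by hiding sensitive layers inside the TEE.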
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.