DarKnight: An Accelerated Framework for Privacy and Integrity Preserving
Deep Learning Using Trusted Hardware
- URL: http://arxiv.org/abs/2207.00083v1
- Date: Thu, 30 Jun 2022 19:58:36 GMT
- Authors: Hanieh Hashemi and Yongqin Wang and Murali Annavaram
- Abstract summary: DarKnight is a framework for large DNN training while protecting input privacy and integrity.
DarKnight relies on cooperative execution between trusted execution environments (TEE) and accelerators.
DarKnight's data obfuscation strategy provides provable data privacy and computation integrity in the cloud servers.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy and security-related concerns are growing as machine learning reaches
diverse application domains. The data holders want to train or infer with
private data while exploiting accelerators, such as GPUs, that are hosted in
the cloud. Cloud systems are vulnerable to attackers that compromise the
privacy of data and integrity of computations. Tackling such a challenge
requires unifying theoretical privacy algorithms with hardware security
capabilities. This paper presents DarKnight, a framework for large DNN training
while protecting input privacy and computation integrity. DarKnight relies on
cooperative execution between trusted execution environments (TEE) and
accelerators, where the TEE provides privacy and integrity verification, while
accelerators perform the bulk of the linear algebraic computation to optimize
the performance. In particular, DarKnight uses a customized data encoding
strategy based on matrix masking to create input obfuscation within a TEE. The
obfuscated data is then offloaded to GPUs for fast linear algebraic
computation. DarKnight's data obfuscation strategy provides provable data
privacy and computation integrity in the cloud servers. While prior works
tackle inference privacy and cannot be utilized for training, DarKnight's
encoding scheme is designed to support both training and inference.
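The matrix-masking idea described in the abstract can be sketched in a few lines of NumPy. This is a simplified illustration, not DarKnight's actual encoding: the choice of coefficient matrix, the noise handling, and the integrity-verification step are all placeholder assumptions. The TEE mixes K private inputs with a random noise vector via an invertible matrix A, the GPU applies the linear layer to the mixed (obfuscated) data only, and the TEE unmixes with A's inverse to recover the true outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

K, d, m = 3, 4, 2            # number of inputs, input dim, output dim
X = rng.normal(size=(K, d))  # K private inputs (one per row)
W = rng.normal(size=(m, d))  # linear-layer weights

# TEE side: append a random noise row and mix with an invertible matrix A,
# so each row of Z is a random linear combination of inputs plus noise.
r = rng.normal(size=(1, d))
S = np.vstack([X, r])                  # (K+1, d) stacked inputs + noise
A = rng.normal(size=(K + 1, K + 1))    # invertible with probability 1
Z = A @ S                              # obfuscated data sent to the GPU

# GPU side: performs the heavy linear algebra on obfuscated data only.
Y_masked = Z @ W.T                     # (K+1, m)

# TEE side: unmask with A^{-1}; the first K rows equal W applied to each x_i.
Y = (np.linalg.inv(A) @ Y_masked)[:K]

assert np.allclose(Y, X @ W.T)
```

The sketch works because the GPU's computation is linear, so masking and unmasking commute with it; nonlinear steps such as activations would still have to run inside the TEE.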
Related papers
- DataSeal: Ensuring the Verifiability of Private Computation on Encrypted Data
We introduce DataSeal, which combines the low overhead of the algorithm-based fault tolerance (ABFT) technique with the confidentiality of Fully Homomorphic Encryption (FHE).
DataSeal achieves much lower overheads for providing computation verifiability for FHE than other techniques, including MACs, ZKPs, and TEEs.
arXiv Detail & Related papers (2024-10-19T21:19:39Z) - Ungeneralizable Examples
Current approaches to creating unlearnable data involve incorporating small, specially designed perturbations.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
arXiv Detail & Related papers (2024-04-22T09:29:14Z) - Privacy preserving layer partitioning for Deep Neural Network models
Trusted Execution Environments (TEEs) can introduce significant performance overhead due to additional layers of encryption, decryption, security and integrity checks.
We introduce a layer partitioning technique that offloads computations to the GPU.
We conduct experiments to demonstrate the effectiveness of our approach in protecting against input reconstruction attacks developed using a trained conditional Generative Adversarial Network (c-GAN).
arXiv Detail & Related papers (2024-04-11T02:39:48Z) - Tempo: Confidentiality Preservation in Cloud-Based Neural Network Training
Cloud deep learning platforms provide cost-effective deep neural network (DNN) training for customers who lack computation resources.
Recently, researchers have sought to protect data privacy in deep learning by leveraging CPU trusted execution environments (TEEs).
This paper presents Tempo, the first cloud-based deep learning system that cooperates with TEEs and distributed GPUs.
arXiv Detail & Related papers (2024-01-21T15:57:04Z) - Pre-trained Encoders in Self-Supervised Learning Improve Secure and Privacy-preserving Supervised Learning
Self-supervised learning is an emerging technique to pre-train encoders using unlabeled data.
We perform the first systematic, principled measurement study to understand whether and when a pre-trained encoder can address the limitations of secure or privacy-preserving supervised learning algorithms.
arXiv Detail & Related papers (2022-12-06T21:35:35Z) - THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption
Privacy-preserving inference of transformer models is in demand among cloud service users.
We introduce THE-X, an approximation approach for transformers that enables privacy-preserving inference of pre-trained models.
arXiv Detail & Related papers (2022-06-01T03:49:18Z) - Reinforcement Learning on Encrypted Data
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in the presence of non-deterministic encryption, but performance collapses in more complex environments.
arXiv Detail & Related papers (2021-09-16T21:59:37Z) - Privacy and Integrity Preserving Training Using Trusted Hardware
DarKnight is a framework for large-scale DNN training while protecting input privacy and integrity.
DarKnight relies on cooperative execution between trusted execution environments (TEE) and accelerators.
arXiv Detail & Related papers (2021-05-01T19:33:28Z) - Faster Secure Data Mining via Distributed Homomorphic Encryption
Homomorphic Encryption (HE) has been receiving increasing attention for its ability to perform computations directly on encrypted data.
We propose a novel, general distributed HE-based data mining framework as a step toward solving the scaling problem.
We verify the efficiency and effectiveness of our new framework by testing it over various data mining algorithms and benchmark datasets.
arXiv Detail & Related papers (2020-06-17T18:14:30Z) - Prive-HD: Privacy-Preserved Hyperdimensional Computing
Hyperdimensional (HD) computing is gaining traction due to its lightweight computation and robustness.
We present an accuracy-privacy trade-off method to realize a differentially private model and to obfuscate the information sent for cloud-hosted inference.
arXiv Detail & Related papers (2020-05-14T04:19:34Z) - CryptoSPN: Privacy-preserving Sum-Product Network Inference
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference on the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)