Privacy and Integrity Preserving Training Using Trusted Hardware
- URL: http://arxiv.org/abs/2105.00334v1
- Date: Sat, 1 May 2021 19:33:28 GMT
- Title: Privacy and Integrity Preserving Training Using Trusted Hardware
- Authors: Hanieh Hashemi, Yongqin Wang, Murali Annavaram
- Abstract summary: DarKnight is a framework for large DNN training that protects input privacy and computation integrity.
DarKnight relies on cooperative execution between trusted execution environments (TEE) and accelerators.
- Score: 4.5843599120944605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy and security-related concerns are growing as machine learning reaches
diverse application domains. The data holders want to train with private data
while exploiting accelerators, such as GPUs, that are hosted in the cloud.
However, cloud systems are vulnerable to attackers that compromise the privacy
of data and integrity of computations. This work presents DarKnight, a
framework for large DNN training while protecting input privacy and computation
integrity. DarKnight relies on cooperative execution between trusted execution
environments (TEE) and accelerators, where the TEE provides privacy and
integrity verification, while accelerators perform the computation heavy linear
algebraic operations.
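As an illustration of the division of labor the abstract describes, here is a minimal sketch of the blinding idea: the TEE mixes a batch of private inputs with a random invertible matrix, the untrusted accelerator runs the heavy linear algebra on the mixed batch, and the TEE unblinds the result. The batch size, layer shape, and the omission of DarKnight's additive noise term and redundancy-based integrity check are all simplifying assumptions.

```python
# Minimal sketch of TEE/GPU cooperative execution via linear blinding.
# This is an illustration of the idea, not DarKnight's full protocol.
import numpy as np

rng = np.random.default_rng(0)
K, d_in, d_out = 4, 8, 3

X = rng.normal(size=(K, d_in))       # private inputs (held inside the TEE)
W = rng.normal(size=(d_in, d_out))   # model weights (assumed public here)

# TEE: blind the batch with a random invertible K x K matrix A.
A = rng.normal(size=(K, K))
X_blind = A @ X                      # the GPU only sees linear mixtures

# GPU: performs the computation-heavy linear algebra on blinded data.
Y_blind = X_blind @ W

# TEE: unblind; by linearity, A^-1 (A X) W = X W exactly.
Y = np.linalg.solve(A, Y_blind)

assert np.allclose(Y, X @ W)
```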
Related papers
- Lancelot: Towards Efficient and Privacy-Preserving Byzantine-Robust Federated Learning within Fully Homomorphic Encryption [10.685816010576918]
We propose Lancelot, an innovative and computationally efficient BRFL framework that employs fully homomorphic encryption (FHE) to safeguard against malicious client activities while preserving data privacy.
Our extensive testing, which includes medical imaging diagnostics and widely-used public image datasets, demonstrates that Lancelot significantly outperforms existing methods, offering more than a twenty-fold increase in processing speed, all while maintaining data privacy.
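Lancelot relies on fully homomorphic encryption; as a much simpler, self-contained illustration of aggregating encrypted client updates, the sketch below uses a toy additively homomorphic Paillier scheme with deliberately tiny, insecure parameters. This is a named stand-in, not Lancelot's actual FHE machinery.

```python
# Toy Paillier scheme: ciphertext multiplication adds plaintexts,
# so a server can aggregate client updates without decrypting them.
# Parameters are insecure and purely illustrative.
import math, random

p, q = 293, 433                      # toy primes; real keys are >= 2048 bits
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)
mu = pow((pow(g, lam, n2) - 1) // n, -1, n)   # L(g^lam mod n^2)^-1 mod n

def enc(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:       # r must be a unit mod n
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return ((pow(c, lam, n2) - 1) // n) * mu % n

client_updates = [5, 11, 7]
agg_cipher = 1
for m in client_updates:
    agg_cipher = (agg_cipher * enc(m)) % n2   # homomorphic addition

assert dec(agg_cipher) == sum(client_updates)
```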
arXiv Detail & Related papers (2024-08-12T14:48:25Z) - Ungeneralizable Examples [70.76487163068109]
Current approaches to creating unlearnable data involve incorporating small, specially designed noise.
We extend the concept of unlearnable data to conditional data learnability and introduce UnGeneralizable Examples (UGEs).
UGEs exhibit learnability for authorized users while maintaining unlearnability for potential hackers.
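The UGE construction itself is optimization-based; as a loose stand-in for data that only authorized users can learn from, the sketch below adds key-seeded pseudorandom noise that an authorized holder of the key can subtract exactly. The key and noise scale are illustrative assumptions, not the paper's method.

```python
# Keyed additive noise: authorized users recover clean data exactly,
# anyone without the key trains on heavily perturbed data.
import numpy as np

SECRET_KEY = 1234                      # shared only with authorized users

def protect(x, key=SECRET_KEY, scale=5.0):
    noise = np.random.default_rng(key).normal(scale=scale, size=x.shape)
    return x + noise

def recover(x_prot, key=SECRET_KEY, scale=5.0):
    noise = np.random.default_rng(key).normal(scale=scale, size=x_prot.shape)
    return x_prot - noise

x = np.arange(6, dtype=float).reshape(2, 3)   # toy "dataset"
x_pub = protect(x)                            # released, perturbed version
assert np.allclose(recover(x_pub), x)         # exact recovery with the key
```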
arXiv Detail & Related papers (2024-04-22T09:29:14Z) - Tempo: Confidentiality Preservation in Cloud-Based Neural Network Training [8.187538747666203]
Cloud deep learning platforms provide cost-effective deep neural network (DNN) training for customers who lack computation resources.
Recently, researchers have sought to protect data privacy in deep learning by leveraging CPU trusted execution environments (TEEs).
This paper presents Tempo, the first cloud-based deep learning system that combines a TEE with distributed GPUs.
arXiv Detail & Related papers (2024-01-21T15:57:04Z) - HasTEE+ : Confidential Cloud Computing and Analytics with Haskell [50.994023665559496]
Confidential computing enables the protection of confidential code and data in a co-tenanted cloud deployment using specialized hardware isolation units called Trusted Execution Environments (TEEs).
TEEs offer low-level C/C++-based toolchains that are susceptible to inherent memory safety vulnerabilities and lack language constructs to monitor explicit and implicit information-flow leaks.
We address the above with HasTEE+, a domain-specific language (DSL) embedded in Haskell that enables programming TEEs in a high-level language with strong type-safety.
arXiv Detail & Related papers (2024-01-17T00:56:23Z) - Position: Considerations for Differentially Private Learning with Large-Scale Public Pretraining [75.25943383604266]
We question whether the use of large Web-scraped datasets should be viewed as differential-privacy-preserving.
We caution that publicizing these models pretrained on Web data as "private" could lead to harm and erode the public's trust in differential privacy as a meaningful definition of privacy.
We conclude by discussing potential paths forward for the field of private learning, as public pretraining becomes more popular and powerful.
arXiv Detail & Related papers (2022-12-13T10:41:12Z) - DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware [3.1853566662905943]
DarKnight is a framework for large DNN training while protecting input privacy and integrity.
DarKnight relies on cooperative execution between trusted execution environments (TEE) and accelerators.
DarKnight's data obfuscation strategy provides provable data privacy and computation integrity in the cloud servers.
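One classical way a TEE can cheaply verify an accelerator's linear algebra, shown here as an illustrative stand-in for DarKnight's redundancy-based integrity scheme, is Freivalds' algorithm: checking a claimed matrix product against random vectors in quadratic time.

```python
# Freivalds' check: verify C == A @ B probabilistically using only
# matrix-vector products, far cheaper than recomputing A @ B.
import numpy as np

rng = np.random.default_rng(1)

def freivalds_check(A, B, C, rounds=16):
    """Return False if C != A @ B, with error prob. <= 2**-rounds."""
    n = C.shape[1]
    for _ in range(rounds):
        r = rng.integers(0, 2, size=(n, 1))          # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):   # two cheap mat-vecs
            return False
    return True

A = rng.integers(0, 10, (50, 50))
B = rng.integers(0, 10, (50, 50))
C = A @ B
assert freivalds_check(A, B, C)        # honest accelerator result passes
C[3, 7] += 1                           # a single tampered entry
assert not freivalds_check(A, B, C)    # caught with overwhelming probability
```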
arXiv Detail & Related papers (2022-06-30T19:58:36Z) - Mitigating Leakage from Data Dependent Communications in Decentralized Computing using Differential Privacy [1.911678487931003]
We propose a general execution model to control the data-dependence of communications in user-side decentralized computations.
Our formal privacy guarantees leverage and extend recent results on privacy amplification by shuffling.
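A minimal sketch of the amplification-by-shuffling setting: each user applies local randomized response to a private bit, and a shuffler strips identities by permuting the reports before the analyzer sees them. The local epsilon and population size are illustrative assumptions.

```python
# Local randomized response + a shuffler; the analyzer sees only an
# anonymous multiset of reports and debiases the aggregate estimate.
import math, random

eps_local = 1.0
p_truth = math.exp(eps_local) / (math.exp(eps_local) + 1)

def randomized_response(bit):
    return bit if random.random() < p_truth else 1 - bit

secrets = [random.randint(0, 1) for _ in range(10_000)]
reports = [randomized_response(b) for b in secrets]
random.shuffle(reports)          # the shuffler removes any user linkage

# Unbiased estimate: E[report] = (1 - p) + bit * (2p - 1).
mean_est = (sum(reports) / len(reports) - (1 - p_truth)) / (2 * p_truth - 1)
print(f"true mean={sum(secrets)/len(secrets):.3f}  estimate={mean_est:.3f}")
```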
arXiv Detail & Related papers (2021-12-23T08:30:17Z) - Reinforcement Learning on Encrypted Data [58.39270571778521]
We present a preliminary, experimental study of how a DQN agent trained on encrypted states performs in environments with discrete and continuous state spaces.
Our results highlight that the agent is still capable of learning in small state spaces even in the presence of non-deterministic encryption, but performance collapses in more complex environments.
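A hedged sketch of the setup: the agent only ever observes encrypted states. A fixed secret permutation of a one-hot code stands in for deterministic encryption, and fresh per-call randomness stands in for non-deterministic encryption; neither is the paper's actual cipher.

```python
# Toy state "encryption": deterministic codes repeat across calls,
# non-deterministic codes differ on every call, which is what makes
# learning harder for the agent.
import numpy as np

rng = np.random.default_rng(0)
N_STATES = 8
PERM = rng.permutation(N_STATES)        # fixed secret "key"

def encrypt(state, deterministic=True):
    code = np.zeros(N_STATES)
    code[PERM[state]] = 1.0             # same state -> same code
    if not deterministic:               # fresh randomness per call
        code += rng.normal(scale=0.3, size=N_STATES)
    return code

s = 3
print(encrypt(s), encrypt(s))           # identical deterministic codes
print(encrypt(s, deterministic=False))  # differs on every call
```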
arXiv Detail & Related papers (2021-09-16T21:59:37Z) - Toward Smart Security Enhancement of Federated Learning Networks [109.20054130698797]
In this paper, we review the vulnerabilities of federated learning networks (FLNs) and give an overview of poisoning attacks.
We present a smart security enhancement framework for FLNs.
Deep reinforcement learning is applied to learn the behaving patterns of the edge devices (EDs) that can provide benign training results.
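As a deliberately simplified stand-in for the paper's deep-RL component, the sketch below uses an epsilon-greedy bandit that learns which edge devices tend to return benign results; the device names and benign rates are illustrative assumptions.

```python
# Epsilon-greedy selection of edge devices by estimated benignness.
# Reward is +1 for a benign update, -1 for a poisoned one.
import random

devices = ["ed0", "ed1", "ed2", "ed3"]
benign_rate = {"ed0": 0.95, "ed1": 0.9, "ed2": 0.2, "ed3": 0.85}  # ed2 poisons
value = {d: 0.0 for d in devices}
count = {d: 0 for d in devices}

for t in range(2000):
    if random.random() < 0.1:                    # explore
        d = random.choice(devices)
    else:                                        # exploit the best estimate
        d = max(devices, key=value.get)
    reward = 1.0 if random.random() < benign_rate[d] else -1.0
    count[d] += 1
    value[d] += (reward - value[d]) / count[d]   # incremental mean update

print({d: round(v, 2) for d, v in value.items()})  # ed2 typically scores lowest
```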
arXiv Detail & Related papers (2020-08-19T08:46:39Z) - Prive-HD: Privacy-Preserved Hyperdimensional Computing [18.512391787497673]
Hyperdimensional (HD) computing is gaining traction due to its lightweight computation and robustness.
We present an accuracy-privacy trade-off method to realize a differentially private model and to obfuscate the information sent for cloud-hosted inference.
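A minimal sketch in the spirit of Prive-HD: features are projected into a high-dimensional hypervector and calibrated noise is added before the encoding leaves the device. The dimensionality, projection, and noise scale are illustrative assumptions rather than the paper's exact mechanism.

```python
# HD encoding via a random bipolar projection, with Laplace noise
# added as an obfuscation step before the encoding is sent out.
import numpy as np

rng = np.random.default_rng(0)
D, d = 10_000, 16                            # hypervector / feature dims
PHI = rng.choice([-1.0, 1.0], size=(D, d))   # shared random projection

def encode(x, noise_scale=1.0):
    h = PHI @ x                              # HD encoding of the features
    return h + rng.laplace(scale=noise_scale, size=D)

x = rng.normal(size=d)
h_noisy = encode(x)

# Robustness of HD representations: similarity to the clean encoding
# stays high even though every coordinate is perturbed.
h_clean = PHI @ x
cos = h_noisy @ h_clean / (np.linalg.norm(h_noisy) * np.linalg.norm(h_clean))
print(f"cosine similarity after noising: {cos:.3f}")
```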
arXiv Detail & Related papers (2020-05-14T04:19:34Z) - CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference on the order of seconds for medium-sized SPNs.
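CryptoSPN evaluates this computation under secure multi-party computation; the cryptography is omitted in the sketch below, which only shows the underlying sum-product network inference (sum nodes mix their children, product nodes multiply independent scopes) on a tiny illustrative SPN.

```python
# Plain (non-cryptographic) sum-product network inference over two
# binary variables; the structure and parameters are illustrative.

def leaf(var, p1):
    # Bernoulli leaf: probability of the evidence for one variable.
    return lambda e: p1 if e[var] == 1 else 1.0 - p1

def product(*children):
    def f(e):
        out = 1.0
        for c in children:
            out *= c(e)
        return out
    return f

def weighted_sum(weights, *children):
    return lambda e: sum(w * c(e) for w, c in zip(weights, children))

# P(X0, X1) as a mixture of two fully factorized components.
spn = weighted_sum(
    [0.6, 0.4],
    product(leaf(0, 0.9), leaf(1, 0.2)),
    product(leaf(0, 0.1), leaf(1, 0.7)),
)

total = sum(spn({0: a, 1: b}) for a in (0, 1) for b in (0, 1))
assert abs(total - 1.0) < 1e-9    # a valid SPN sums to 1 over all states
print(spn({0: 1, 1: 0}))          # 0.6*0.9*0.8 + 0.4*0.1*0.3 = 0.444
```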
arXiv Detail & Related papers (2020-02-03T14:49:18Z)