Tempo: Confidentiality Preservation in Cloud-Based Neural Network Training
- URL: http://arxiv.org/abs/2401.11531v1
- Date: Sun, 21 Jan 2024 15:57:04 GMT
- Title: Tempo: Confidentiality Preservation in Cloud-Based Neural Network Training
- Authors: Rongwu Xu and Zhixuan Fang
- Abstract summary: Cloud deep learning platforms provide cost-effective deep neural network (DNN) training for customers who lack computation resources.
Recently, researchers have sought to protect data privacy in deep learning by leveraging CPU trusted execution environments (TEEs).
This paper presents Tempo, the first cloud-based deep learning system that cooperates with TEE and distributed GPUs.
- Score: 8.187538747666203
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Cloud deep learning platforms provide cost-effective deep neural network
(DNN) training for customers who lack computation resources. However, cloud
systems are often untrustworthy and vulnerable to attackers, leading to growing
concerns about model privacy. Recently, researchers have sought to protect data
privacy in deep learning by leveraging CPU trusted execution environments
(TEEs), which minimize the use of cryptography, but existing works failed to
simultaneously utilize the computational resources of GPUs to assist in
training and prevent model leakage. This paper presents Tempo, the first
cloud-based deep learning system that cooperates with TEE and distributed GPUs
for efficient DNN training with model confidentiality preserved. To tackle the
challenge of preserving privacy while offloading linear algebraic operations
from TEE to GPUs for efficient batch computation, we introduce a customized
permutation-based obfuscation algorithm to blind both inputs and model
parameters. An optimization mechanism that reduces encryption operations is
proposed for faster weight updates during backpropagation to speed up training.
We implement Tempo and evaluate it with both training and inference for two
prevalent DNNs. Empirical results indicate that Tempo outperforms baselines and
offers sufficient privacy protection.
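To make the offloading idea concrete, below is a minimal NumPy sketch of permutation-based blinding for a single offloaded matrix multiplication. It is an illustration of the general technique under assumed shapes and names, not Tempo's actual obfuscation algorithm or implementation; real schemes add further randomization that this sketch omits.

```python
# Illustrative sketch only: permutation-based blinding of an offloaded
# matrix multiply, in the spirit of (but not identical to) Tempo's scheme.
import numpy as np

rng = np.random.default_rng(0)

def random_permutation_matrix(n: int) -> np.ndarray:
    """Random n x n permutation matrix (hypothetical helper)."""
    P = np.zeros((n, n))
    P[np.arange(n), rng.permutation(n)] = 1.0
    return P

# "TEE side": plaintext activations X (batch x d_in) and weights W (d_in x d_out).
X = rng.standard_normal((8, 16))
W = rng.standard_normal((16, 4))

# Blind both operands with secret permutations before they leave the TEE.
P_rows = random_permutation_matrix(X.shape[0])   # permutes batch rows
P_in   = random_permutation_matrix(X.shape[1])   # permutes the shared inner dimension
P_out  = random_permutation_matrix(W.shape[1])   # permutes output columns

X_blind = P_rows @ X @ P_in      # sent to the untrusted GPU
W_blind = P_in.T @ W @ P_out     # sent to the untrusted GPU

# "GPU side": the heavy batched multiplication sees only blinded operands.
Y_blind = X_blind @ W_blind

# Back in the TEE: permutation matrices are orthogonal, so unblinding is cheap.
Y = P_rows.T @ Y_blind @ P_out.T
assert np.allclose(Y, X @ W)
```

Because permutation matrices are orthogonal, unblinding costs only two extra small multiplications inside the TEE, which is why this style of obfuscation can leave the bulk of the work on the GPU.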
Related papers
- SCoTTi: Save Computation at Training Time with an adaptive framework [7.780766187171572]
On-device training is an emerging approach in machine learning where models are trained on edge devices.
We propose SCoTTi (Save Computation at Training Time), an adaptive framework that addresses the challenge of reducing resource consumption during training.
Our proposed approach achieves greater computational-resource savings than state-of-the-art methods on various commonly used benchmarks.
arXiv Detail & Related papers (2023-12-19T16:19:33Z)
- Temporal Patience: Efficient Adaptive Deep Learning for Embedded Radar Data Processing [4.359030177348051]
This paper presents novel techniques that leverage the temporal correlation present in streaming radar data to enhance the efficiency of Early Exit Neural Networks for deep learning inference on embedded devices (a simplified sketch of this idea appears after this list).
Our results demonstrate that our techniques save up to 26% of operations per inference over a Single Exit Network and 12% over a confidence-based Early Exit version.
Such efficiency gains enable real-time radar data processing on resource-constrained platforms, allowing for new applications in the context of smart homes, Internet-of-Things, and human-computer interaction.
arXiv Detail & Related papers (2023-09-11T12:38:01Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
arXiv Detail & Related papers (2022-10-09T07:47:56Z)
- DarKnight: An Accelerated Framework for Privacy and Integrity Preserving Deep Learning Using Trusted Hardware [3.1853566662905943]
DarKnight is a framework for large DNN training while protecting input privacy and integrity.
DarKnight relies on cooperative execution between trusted execution environments (TEE) and accelerators.
DarKnight's data obfuscation strategy provides provable data privacy and computation integrity on the cloud servers (a generic integrity-check sketch in this spirit appears after this list).
arXiv Detail & Related papers (2022-06-30T19:58:36Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Privacy and Integrity Preserving Training Using Trusted Hardware [4.5843599120944605]
DarKnight is a framework for large DNN training while protecting input privacy and integrity.
DarKnight relies on cooperative execution between trusted execution environments (TEE) and accelerators.
arXiv Detail & Related papers (2021-05-01T19:33:28Z)
- NN-EMD: Efficiently Training Neural Networks using Encrypted Multi-Sourced Datasets [7.067870969078555]
Training a machine learning model over an encrypted dataset is a promising approach to privacy-preserving machine learning.
We propose a novel framework, NN-EMD, to train a deep neural network (DNN) model over multiple datasets collected from multiple sources (a hedged sketch of computing on encrypted, multi-source inputs appears after this list).
We evaluate our framework with regard to training time and model accuracy on the MNIST datasets.
arXiv Detail & Related papers (2020-12-18T23:01:20Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
- Deep Learning for Ultra-Reliable and Low-Latency Communications in 6G Networks [84.2155885234293]
We first summarize how to apply data-driven supervised deep learning and deep reinforcement learning in URLLC.
To address the open problems that remain, we develop a multi-level architecture that enables device intelligence, edge intelligence, and cloud intelligence for URLLC.
arXiv Detail & Related papers (2020-02-22T14:38:11Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference on the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
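For the Temporal Patience entry above, the following is a simplified sketch of one plausible reading of the idea: a streaming classifier that takes a cheap early exit once its early prediction has been stable across recent frames. The two-stage model, the patience threshold, and every function name here are illustrative assumptions, not the paper's actual architecture or exit criterion.

```python
# Illustrative sketch: stability-based early exit over streaming input.
# The stage functions and threshold are placeholders, not the paper's method.
import numpy as np

def early_stage(x):
    """Cheap feature extractor + classifier head (stand-in); 4 class logits."""
    return np.tanh(x[:4])

def late_stage(x):
    """Expensive remainder of the network (stand-in); same 4 classes."""
    return np.tanh(x[:4] + x[4:])

def predict_stream(frames, patience=3):
    """Exit at the early head once its class prediction has been stable for
    `patience` consecutive frames; otherwise run the full network."""
    stable, last_pred, outputs = 0, None, []
    for x in frames:
        pred = int(np.argmax(early_stage(x)))
        stable = stable + 1 if pred == last_pred else 1
        last_pred = pred
        if stable >= patience:
            outputs.append(pred)                           # cheap path: early exit
        else:
            outputs.append(int(np.argmax(late_stage(x))))  # full path
    return outputs

frames = np.random.default_rng(1).standard_normal((10, 8))
print(predict_stream(frames))
```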
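The two DarKnight entries above pair input privacy with computation integrity. The sketch below shows a generic, textbook way a TEE could spot-check an accelerator's matrix product, Freivalds' randomized verification; it is included only to illustrate what an integrity check buys and is not DarKnight's actual mechanism.

```python
# Illustrative sketch: Freivalds' randomized check of an offloaded product.
# Verifying C == A @ B costs O(n^2) per round instead of recomputing in O(n^3).
import numpy as np

rng = np.random.default_rng(2)

def freivalds_check(A, B, C, rounds=10, tol=1e-6):
    """Return True if C equals A @ B with high probability."""
    for _ in range(rounds):
        r = rng.integers(0, 2, size=(B.shape[1], 1)).astype(float)  # random 0/1 vector
        if np.linalg.norm(A @ (B @ r) - C @ r) > tol:
            return False          # caught a tampered or faulty result
    return True

A = rng.standard_normal((64, 32))
B = rng.standard_normal((32, 16))
C_honest = A @ B
C_tampered = C_honest.copy()
C_tampered[0, 0] += 1.0

print(freivalds_check(A, B, C_honest))    # True
print(freivalds_check(A, B, C_tampered))  # almost certainly False
```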
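For the NN-EMD entry above, the sketch below illustrates only the general idea of computing on encrypted inputs contributed by multiple sources, using the `phe` (python-paillier) library as a stand-in. NN-EMD's actual cryptographic construction differs, so treat this purely as an illustration of additively homomorphic aggregation, with all values and weights chosen arbitrarily.

```python
# Illustrative sketch: an untrusted server combines features that two data
# sources encrypted under the same Paillier public key, without ever seeing
# the plaintexts. Stand-in for the "multi-sourced encrypted data" idea only;
# not NN-EMD's actual protocol.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Each source encrypts its own feature values locally.
source_a = [public_key.encrypt(x) for x in [0.5, -1.2, 3.0]]
source_b = [public_key.encrypt(x) for x in [2.0, 0.7, -0.4]]

# Server: homomorphically computes w_a * x_a + w_b * x_b per feature.
w_a, w_b = 0.3, 0.7
encrypted_combo = [a * w_a + b * w_b for a, b in zip(source_a, source_b)]

# Only the key holder (e.g., the model owner) can decrypt the result.
print([round(private_key.decrypt(c), 6) for c in encrypted_combo])
# Approximately [1.55, 0.13, 0.62]
```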