DiVa: An Accelerator for Differentially Private Machine Learning
- URL: http://arxiv.org/abs/2208.12392v1
- Date: Fri, 26 Aug 2022 01:19:56 GMT
- Authors: Beomsik Park, Ranggi Hwang, Dongho Yoon, Yoonhyuk Choi, Minsoo Rhu
- Abstract summary: Differential privacy (DP) is rapidly gaining momentum in the industry as a practical standard for privacy protection.
We conduct a detailed workload characterization on a state-of-the-art differentially private ML training algorithm named DP-SGD.
Based on our analysis, we propose an accelerator for differentially private ML named DiVa, which provides a significant improvement in compute utilization.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The widespread deployment of machine learning (ML) is raising serious
concerns on protecting the privacy of users who contributed to the collection
of training data. Differential privacy (DP) is rapidly gaining momentum in the
industry as a practical standard for privacy protection. Despite DP's
importance, however, little has been explored within the computer systems
community regarding the implication of this emerging ML algorithm on system
designs. In this work, we conduct a detailed workload characterization on a
state-of-the-art differentially private ML training algorithm named DP-SGD. We
uncover several unique properties of DP-SGD (e.g., its high memory capacity and
computation requirements vs. non-private ML), root-causing its key bottlenecks.
Based on our analysis, we propose an accelerator for differentially private ML
named DiVa, which provides a significant improvement in compute utilization,
leading to 2.6x higher energy-efficiency vs. conventional systolic arrays.
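The per-example clipping that drives DP-SGD's extra memory and compute requirements can be sketched in a few lines. The following is a minimal NumPy illustration (function and parameter names are mine, not from the paper); note that, unlike non-private SGD, the full `batch_size x num_params` per-example gradient tensor must be materialized:

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD update: clip each example's gradient, sum, add Gaussian noise.

    per_example_grads has shape (batch_size, num_params) -- this tensor,
    absent in non-private SGD, is the memory-capacity bottleneck.
    """
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    # Scale each example's gradient down so its L2 norm is at most clip_norm.
    scale = np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    clipped = per_example_grads * scale
    summed = clipped.sum(axis=0)
    # Noise standard deviation is noise_multiplier * clip_norm (the usual convention).
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=summed.shape)
    return (summed + noise) / len(per_example_grads)

rng = np.random.default_rng(0)
grads = rng.normal(size=(32, 10))  # 32 examples, 10 parameters
update = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

The clipping and noising happen per training step, which is why DP-SGD's overhead compounds over an entire training run.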
Related papers
- LazyDP: Co-Designing Algorithm-Software for Scalable Training of Differentially Private Recommendation Models [8.92538797216985]
We present our characterization of private RecSys training using DP-SGD, root-causing several of its performance bottlenecks.
We propose LazyDP, an algorithm-software co-design that addresses the compute and memory challenges of training RecSys with DP-SGD.
Compared to a state-of-the-art DP-SGD training system, we demonstrate that LazyDP provides an average 119x training throughput improvement.
arXiv Detail & Related papers (2024-04-12T23:32:06Z)
- Sparsity-Preserving Differentially Private Training of Large Embedding Models [67.29926605156788]
DP-SGD is a training algorithm that combines differential privacy with stochastic gradient descent.
Applying DP-SGD naively to embedding models can destroy gradient sparsity, leading to reduced training efficiency.
We present two new algorithms, DP-FEST and DP-AdaFEST, that preserve gradient sparsity during private training of large embedding models.
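The sparsity problem the summary above describes is easy to demonstrate: an embedding table's non-private gradient touches only the rows seen in the batch, but naive DP-SGD adds noise to every coordinate. A toy illustration (the table size and noise scale are arbitrary, and this is not the DP-FEST/DP-AdaFEST method itself):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, dim = 1000, 16

# Non-private embedding gradient: only the rows looked up in this batch
# are nonzero, so the gradient is very sparse.
grad = np.zeros((vocab, dim))
grad[[3, 7, 42]] = rng.normal(size=(3, dim))  # only 3 active rows
nonzero_rows_before = np.count_nonzero(grad.any(axis=1))

# Naive DP-SGD adds Gaussian noise to *every* coordinate, so the noisy
# update becomes fully dense and the sparsity advantage is lost.
noisy = grad + rng.normal(0.0, 1.1, size=grad.shape)
nonzero_rows_after = np.count_nonzero(noisy.any(axis=1))
```

With a realistic vocabulary of millions of rows, this densification turns a cheap sparse update into a full-table write on every step.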
arXiv Detail & Related papers (2023-11-14T17:59:51Z)
- DPMLBench: Holistic Evaluation of Differentially Private Machine Learning [8.568872924668662]
Many studies have recently proposed improved algorithms based on DP-SGD to mitigate utility loss.
However, there is a lack of comprehensive research comparing these DPML algorithms across utility, defensive capability, and generalizability.
We fill this gap by performing a holistic measurement of improved DPML algorithms on utility and defense capability against membership inference attacks (MIAs) on image classification tasks.
arXiv Detail & Related papers (2023-05-10T05:08:36Z)
- DPIS: An Enhanced Mechanism for Differentially Private SGD with Importance Sampling [19.59757201902467]
Differential privacy (DP) has become a well-accepted standard for privacy protection, and deep neural networks (DNNs) have been immensely successful in machine learning.
A classic mechanism for private DNN training is DP-SGD, a differentially private version of the stochastic gradient descent (SGD) optimizer commonly used for training.
We propose DPIS, a novel mechanism for differentially private SGD training that can be used as a drop-in replacement of the core of DP-SGD.
arXiv Detail & Related papers (2022-10-18T07:03:14Z)
- Large Scale Transfer Learning for Differentially Private Image Classification [51.10365553035979]
Differential Privacy (DP) provides a formal framework for training machine learning models with individual example level privacy.
Private training using DP-SGD protects against leakage by injecting noise into individual example gradients.
While this guarantee is appealing, the computational cost of training large-scale models with DP-SGD is substantially higher than that of non-private training.
arXiv Detail & Related papers (2022-05-06T01:22:20Z)
- Differentially Private Reinforcement Learning with Linear Function Approximation [3.42658286826597]
We study regret minimization in finite-horizon Markov decision processes (MDPs) under the constraints of differential privacy (DP).
Our results are achieved via a general procedure for learning in linear mixture MDPs under changing regularizers.
arXiv Detail & Related papers (2022-01-18T15:25:24Z)
- Distributed Reinforcement Learning for Privacy-Preserving Dynamic Edge Caching [91.50631418179331]
A privacy-preserving distributed deep policy gradient (P2D3PG) method is proposed to maximize the cache hit rates of devices in mobile edge computing (MEC) networks.
We convert the distributed optimizations into model-free Markov decision process problems and then introduce a privacy-preserving federated learning method for popularity prediction.
arXiv Detail & Related papers (2021-10-20T02:48:27Z)
- Large Language Models Can Be Strong Differentially Private Learners [70.0317718115406]
Differentially Private (DP) learning has seen limited success for building large deep learning models of text.
We show that this performance drop can be mitigated with the use of large pretrained models.
We propose a memory saving technique that allows clipping in DP-SGD to run without instantiating per-example gradients.
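One way per-example gradient norms can be obtained without materializing per-example gradients is specific to linear layers: the per-example weight gradient is the outer product of the layer's input and its output gradient, so its Frobenius norm factors into a product of two vector norms. This is a toy illustration of that general idea, not necessarily the paper's exact technique:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, d_in, d_out = 8, 5, 3
a = rng.normal(size=(batch, d_in))   # layer inputs (activations)
b = rng.normal(size=(batch, d_out))  # gradients w.r.t. layer outputs

# Naive route: materialize every per-example weight gradient,
# a (batch, d_in, d_out) tensor, then take its norm per example.
per_example = np.einsum('bi,bo->bio', a, b)
naive_norms = np.linalg.norm(per_example.reshape(batch, -1), axis=1)

# Memory-saving route: the per-example gradient is the outer product
# a_i b_i^T, so ||a_i b_i^T||_F = ||a_i|| * ||b_i|| -- no big tensor needed.
cheap_norms = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1)
```

The two routes give identical norms, but the second never allocates the `(batch, d_in, d_out)` tensor that dominates DP-SGD's memory footprint.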
arXiv Detail & Related papers (2021-10-12T01:45:27Z)
- Sensitivity analysis in differentially private machine learning using hybrid automatic differentiation [54.88777449903538]
We introduce a novel hybrid automatic differentiation (AD) system for sensitivity analysis.
This enables modelling the sensitivity of arbitrary differentiable function compositions, such as the training of neural networks on private data.
Our approach enables principled reasoning about privacy loss in data-processing settings.
arXiv Detail & Related papers (2021-07-09T07:19:23Z)
- Fast and Memory Efficient Differentially Private-SGD via JL Projections [29.37156662314245]
DP-SGD is the de facto standard algorithm for private training of large-scale neural networks.
We present a new framework for designing differentially private optimizers, DP-SGD-JL and DP-Adam-JL, based on Johnson-Lindenstrauss (JL) projections.
arXiv Detail & Related papers (2021-02-05T06:02:10Z)
- A One-Pass Private Sketch for Most Machine Learning Tasks [48.17461258268463]
Differential privacy (DP) is a compelling privacy definition that explains the privacy-utility tradeoff via formal, provable guarantees.
We propose a private sketch that supports a multitude of machine learning tasks including regression, classification, density estimation, and more.
Our sketch consists of randomized contingency tables that are indexed with locality-sensitive hashing and constructed with an efficient one-pass algorithm.
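The combination described above, count tables indexed by locality-sensitive hashes and built in one pass, can be sketched roughly as follows. This is an illustrative toy (random-hyperplane LSH plus Laplace noise), with all names and parameters my own, not the paper's exact construction:

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_bits, n_tables = 8, 4, 3
planes = rng.normal(size=(n_tables, n_bits, dim))  # random hyperplanes per table

def lsh_bucket(x, table):
    """Random-hyperplane LSH: sign pattern of projections -> bucket index."""
    bits = (planes[table] @ x > 0).astype(int)
    return int("".join(map(str, bits)), 2)

def build_sketch(data, epsilon):
    sketch = np.zeros((n_tables, 2 ** n_bits))
    for x in data:                       # a single pass over the data
        for t in range(n_tables):
            sketch[t, lsh_bucket(x, t)] += 1
    # Laplace noise on the released counts provides differential privacy;
    # the scale here is a placeholder, not a calibrated guarantee.
    sketch += rng.laplace(0.0, n_tables / epsilon, size=sketch.shape)
    return sketch

data = rng.normal(size=(100, dim))
sketch = build_sketch(data, epsilon=1.0)
```

Because nearby points hash to the same buckets with high probability, the noisy tables support approximate queries (density, classification) without revisiting the raw data.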
arXiv Detail & Related papers (2020-06-16T17:47:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.