Packing Privacy Budget Efficiently
- URL: http://arxiv.org/abs/2212.13228v1
- Date: Mon, 26 Dec 2022 17:25:02 GMT
- Title: Packing Privacy Budget Efficiently
- Authors: Pierre Tholoniat, Kelly Kostopoulou, Mosharaf Chowdhury, Asaf Cidon,
Roxana Geambasu, Mathias Lécuyer, Junfeng Yang
- Abstract summary: ML models can leak information about users; differential privacy (DP) provides a rigorous way to bound that leakage under a given budget.
This DP budget can be regarded as a new type of compute resource in workloads of multiple ML models training on user data.
We formulate privacy scheduling as a new type of multidimensional knapsack problem, called privacy knapsack, which maximizes DP budget efficiency.
- Score: 10.51351125953885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning (ML) models can leak information about users, and
differential privacy (DP) provides a rigorous way to bound that leakage under a
given budget. This DP budget can be regarded as a new type of compute resource
in workloads of multiple ML models training on user data. Once it is used, the
DP budget is forever consumed. Therefore, it is crucial to allocate it most
efficiently to train as many models as possible. This paper presents a
scheduler for privacy that optimizes for efficiency. We formulate privacy
scheduling as a new type of multidimensional knapsack problem, called privacy
knapsack, which maximizes DP budget efficiency. We show that privacy knapsack
is NP-hard, hence practical algorithms are necessarily approximate. We develop
an approximation algorithm for privacy knapsack, DPK, and evaluate it on
microbenchmarks and on a new, synthetic private-ML workload we developed from
the Alibaba ML cluster trace. We show that DPK: (1) often approaches the
efficiency-optimal schedule, (2) consistently schedules more tasks compared to
a state-of-the-art privacy scheduling algorithm that focuses on fairness
(1.3-1.7x in Alibaba, 1.0-2.6x in microbenchmarks), but (3) sacrifices some
level of fairness for efficiency. Therefore, using DPK, DP ML operators should
be able to train more models on the same amount of user data while offering the
same privacy guarantee to their users.
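To make the resource framing concrete, below is a minimal, hypothetical sketch of privacy budget packing: each data block carries a finite epsilon capacity, tasks demand epsilon from the blocks they read, and a greedy heuristic packs tasks by profit per unit of demanded budget. This illustrates the knapsack structure only; it is not the paper's DPK algorithm, and all names are invented.

```python
# Hypothetical sketch of the privacy knapsack structure (NOT the paper's
# DPK algorithm): blocks have epsilon capacities, tasks demand epsilon,
# and a greedy heuristic packs by profit per unit of demanded budget.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    profit: float        # value of training this model
    demand: dict         # data block id -> epsilon demanded

def greedy_schedule(tasks, capacity):
    remaining = dict(capacity)
    # Rank by profit per unit of total epsilon demanded (higher first).
    ranked = sorted(tasks, key=lambda t: t.profit / sum(t.demand.values()),
                    reverse=True)
    scheduled = []
    for t in ranked:
        if all(remaining.get(b, 0.0) >= eps for b, eps in t.demand.items()):
            for b, eps in t.demand.items():
                remaining[b] -= eps    # once consumed, budget is gone forever
            scheduled.append(t.name)
    return scheduled

tasks = [Task("model-A", 3.0, {"block1": 0.5, "block2": 0.5}),
         Task("model-B", 1.0, {"block1": 0.8}),
         Task("model-C", 2.0, {"block2": 0.4})]
print(greedy_schedule(tasks, {"block1": 1.0, "block2": 1.0}))
```

A greedy rule like this can be far from optimal, which is consistent with the paper's point that privacy knapsack is NP-hard and practical schedulers must approximate.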
Related papers
- Private Fine-tuning of Large Language Models with Zeroth-order
Optimization [54.24600476755372]
We introduce DP-ZO, a new method for fine-tuning large language models that preserves the privacy of training data by privatizing zeroth-order optimization.
We show that DP-ZO exhibits just $1.86\%$ performance degradation due to privacy at $(1, 10^{-5})$-DP when fine-tuning OPT-66B on 1000 training samples from SQuAD.
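The appeal of this approach is that the only data-dependent quantity per step is a scalar loss difference, so privatizing that scalar suffices. A rough sketch under that reading (an SPSA-style update; the function names, clipping, and noise scale are illustrative assumptions, not DP-ZO's actual recipe):

```python
# Illustrative sketch of DP zeroth-order optimization: clip and noise the
# scalar loss difference, the only data-dependent quantity in the step.
# Constants and names are assumptions, not the DP-ZO paper's exact method.
import numpy as np

rng = np.random.default_rng(0)

def dp_zo_step(theta, loss_fn, batch, mu=1e-3, lr=1e-4, clip=1.0, sigma=1.0):
    z = rng.standard_normal(theta.shape)        # shared random direction
    diffs = [np.clip(loss_fn(theta + mu * z, x) - loss_fn(theta - mu * z, x),
                     -clip, clip)               # bound each example's influence
             for x in batch]
    # Gaussian noise on the clipped sum privatizes the whole update.
    grad_scale = (sum(diffs) + rng.normal(0.0, sigma * clip)) / (2 * mu * len(batch))
    return theta - lr * grad_scale * z

# Toy usage with a quadratic loss around each data point.
loss = lambda w, x: float(np.sum((w - x) ** 2))
theta = np.zeros(3)
for _ in range(5):
    theta = dp_zo_step(theta, loss, [np.full(3, 0.5)] * 8)
```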
arXiv Detail & Related papers (2024-01-09T03:53:59Z)
- Optimal Differentially Private Model Training with Public Data [14.382649652412322]
Differential privacy (DP) ensures that training a machine learning model does not leak private data.
In practice, we may have access to auxiliary public data that is free of privacy concerns.
arXiv Detail & Related papers (2023-06-26T20:40:29Z)
- TAN Without a Burn: Scaling Laws of DP-SGD [70.7364032297978]
Differentially Private methods for training Deep Neural Networks (DNNs) have progressed recently.
We decouple privacy analysis and experimental behavior of noisy training to explore the trade-off with minimal computational requirements.
We apply the proposed method on CIFAR-10 and ImageNet and, in particular, strongly improve the state-of-the-art on ImageNet with a +9 points gain in top-1 accuracy.
arXiv Detail & Related papers (2022-10-07T08:44:35Z)
- Large Scale Transfer Learning for Differentially Private Image Classification [51.10365553035979]
Differential Privacy (DP) provides a formal framework for training machine learning models with individual example level privacy.
Private training using DP-SGD protects against leakage by injecting noise into individual example gradients.
While this result is quite appealing, the computational cost of training large-scale models with DP-SGD is substantially higher than non-private training.
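For reference, the core DP-SGD step described above amounts to per-example clipping plus Gaussian noise; a minimal sketch follows (parameters are illustrative, and real training would use a DP library plus a privacy accountant):

```python
# Minimal sketch of a DP-SGD step: clip each per-example gradient to L2
# norm <= clip, sum, add Gaussian noise calibrated to clip. Illustrative
# only; a real system also tracks (epsilon, delta) with an accountant.
import numpy as np

rng = np.random.default_rng(0)

def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0, sigma=1.0):
    clipped = [g * min(1.0, clip / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]
    noisy_sum = np.sum(clipped, axis=0) + rng.normal(0.0, sigma * clip,
                                                     size=params.shape)
    return params - lr * noisy_sum / len(per_example_grads)
```

The per-example clipping is what drives the extra cost noted above: gradients cannot simply be averaged in a single backward pass, since each example's gradient must be materialized and clipped individually.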
arXiv Detail & Related papers (2022-05-06T01:22:20Z)
- Production of Categorical Data Verifying Differential Privacy: Conception and Applications to Machine Learning [0.0]
Differential privacy is a formal definition that allows quantifying the privacy-utility trade-off.
With the local DP (LDP) model, users can sanitize their data locally before transmitting it to the server.
In all cases, we concluded that differentially private ML models achieve nearly the same utility metrics as non-private ones.
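As one concrete LDP mechanism for categorical data, k-ary (generalized) randomized response lets each user sanitize locally before transmission. This sketch is the standard textbook mechanism, not necessarily the exact protocol the paper evaluates:

```python
# Generalized randomized response, a standard epsilon-LDP mechanism for
# categorical data: report the true category with probability p, else a
# uniformly random other one. Sketch; not necessarily the paper's protocol.
import math, random

def grr_sanitize(value, k, epsilon, rng=random.Random(0)):
    p = math.exp(epsilon) / (math.exp(epsilon) + k - 1)
    if rng.random() < p:
        return value                                   # truthful report
    return rng.choice([v for v in range(k) if v != value])

# Each user runs this locally; only sanitized values reach the server.
reports = [grr_sanitize(v, k=4, epsilon=1.0) for v in [0, 1, 2, 3, 1, 1]]
```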
arXiv Detail & Related papers (2022-04-02T12:50:14Z)
- Privacy Budget Scheduling [3.5329693371326822]
ML models trained on personal data have been shown to leak information about users.
Differential privacy (DP) enables model training with a guaranteed bound on this leakage.
We describe PrivateKube, an extension to the popular Kubernetes datacenter orchestrator.
arXiv Detail & Related papers (2021-06-29T12:43:47Z)
- Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)
- A One-Pass Private Sketch for Most Machine Learning Tasks [48.17461258268463]
Differential privacy (DP) is a compelling privacy definition that explains the privacy-utility tradeoff via formal, provable guarantees.
We propose a private sketch that supports a multitude of machine learning tasks including regression, classification, density estimation, and more.
Our sketch consists of randomized contingency tables that are indexed with locality-sensitive hashing and constructed with an efficient one-pass algorithm.
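A drastically simplified version of that idea, to show the one-pass, noise-then-release shape (plain hashing stands in for the paper's locality-sensitive hashing, so this is only a structural illustration):

```python
# Simplified one-pass private sketch: hashed counts plus Laplace noise.
# Plain hashing stands in for the paper's locality-sensitive hashing,
# so this only illustrates the one-pass, noise-then-release structure.
import numpy as np

def private_sketch(records, num_buckets=64, epsilon=1.0, seed=0):
    rng = np.random.default_rng(seed)
    counts = np.zeros(num_buckets)
    for r in records:                          # single pass over the data
        counts[hash(r) % num_buckets] += 1.0
    # Each record touches exactly one bucket, so L1 sensitivity is 1 and
    # Laplace(1/epsilon) noise yields epsilon-DP for the released table.
    return counts + rng.laplace(0.0, 1.0 / epsilon, size=num_buckets)
```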
arXiv Detail & Related papers (2020-06-16T17:47:48Z)
- User-Level Privacy-Preserving Federated Learning: Analysis and Performance Optimization [77.43075255745389]
Federated learning (FL) is capable of preserving private data from mobile terminals (MTs) while training the data into useful models.
From a viewpoint of information theory, it is still possible for a curious server to infer private information from the shared models uploaded by MTs.
We propose a user-level differential privacy (UDP) algorithm by adding artificial noise to the shared models before uploading them to servers.
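In sketch form, the client-side UDP step is clip-then-noise on the model update before upload (the noise scale here is a placeholder; the paper calibrates it to a user-level (epsilon, delta) target):

```python
# Sketch of the client-side UDP step: clip the model update, add Gaussian
# noise, then upload. sigma is a placeholder; the paper derives it from
# the user-level (epsilon, delta) target.
import numpy as np

def perturb_update(update, clip=1.0, sigma=1.0, rng=np.random.default_rng(0)):
    scaled = update * min(1.0, clip / max(np.linalg.norm(update), 1e-12))
    return scaled + rng.normal(0.0, sigma * clip, size=update.shape)
```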
arXiv Detail & Related papers (2020-02-29T10:13:39Z)