Training Private Models That Know What They Don't Know
- URL: http://arxiv.org/abs/2305.18393v1
- Date: Sun, 28 May 2023 12:20:07 GMT
- Title: Training Private Models That Know What They Don't Know
- Authors: Stephan Rabanser, Anvith Thudi, Abhradeep Thakurta, Krishnamurthy
Dvijotham, Nicolas Papernot
- Abstract summary: We find that several popular selective prediction approaches are ineffective in a differentially private setting.
We propose a novel evaluation mechanism which isolates selective prediction performance across model utility levels.
- Score: 40.19666295972155
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training reliable deep learning models which avoid making overconfident but
incorrect predictions is a longstanding challenge. This challenge is further
exacerbated when learning has to be differentially private: protection provided
to sensitive data comes at the price of injecting additional randomness into
the learning process. In this work, we conduct a thorough empirical
investigation of selective classifiers -- that can abstain when they are unsure
-- under a differential privacy constraint. We find that several popular
selective prediction approaches are ineffective in a differentially private
setting as they increase the risk of privacy leakage. At the same time, we
identify that a recent approach that only uses checkpoints produced by an
off-the-shelf private learning algorithm stands out as particularly suitable
under DP. Further, we show that differential privacy does not just harm utility
but also degrades selective classification performance. To analyze this effect
across privacy levels, we propose a novel evaluation mechanism which isolates
selective prediction performance across model utility levels. Our experimental
results show that recovering the performance level attainable by non-private
models is possible but comes at a considerable coverage cost as the privacy
budget decreases.
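For context, selective classification equips a classifier with an abstain option and is evaluated jointly on accuracy and coverage (the fraction of inputs it answers). Below is a minimal sketch of the most common baseline rule, softmax-confidence thresholding; the function names and the 0.9 threshold are illustrative assumptions, and this is not the checkpoint-based approach the abstract highlights.

```python
import numpy as np

def selective_predict(probs: np.ndarray, threshold: float = 0.9):
    """Return (predictions, accept_mask) for a batch of softmax outputs.

    probs: array of shape (n_samples, n_classes) with class probabilities.
    An example is answered only if its top-class confidence reaches the
    threshold; otherwise the classifier abstains on it.
    """
    confidences = probs.max(axis=1)
    predictions = probs.argmax(axis=1)
    accept_mask = confidences >= threshold
    return predictions, accept_mask

def coverage_and_selective_accuracy(probs, labels, threshold=0.9):
    """Coverage = fraction of inputs answered; accuracy is computed
    only over the answered (accepted) inputs."""
    preds, accepted = selective_predict(probs, threshold)
    coverage = accepted.mean()
    accuracy = (preds[accepted] == labels[accepted]).mean() if accepted.any() else float("nan")
    return coverage, accuracy
```

Sweeping the threshold traces out the accuracy-coverage trade-off that the abstract refers to when it speaks of recovering non-private performance "at a considerable coverage cost".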
Related papers
- Causal Inference with Differentially Private (Clustered) Outcomes [16.166525280886578]
Estimating causal effects from randomized experiments is only feasible if participants agree to reveal their responses.
We suggest a new differential privacy mechanism, Cluster-DP, which leverages any given cluster structure.
We show that, depending on an intuitive measure of cluster quality, we can improve the variance loss while maintaining our privacy guarantees.
arXiv Detail & Related papers (2023-08-02T05:51:57Z) - Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
Existing auditing mechanisms for private machine learning are nominally tight, but they only give tight estimates under implausible worst-case assumptions (e.g., adversarially crafted datasets).
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z) - On the Statistical Complexity of Estimation and Testing under Privacy Constraints [17.04261371990489]
We show how to characterize the power of a statistical test under differential privacy in a plug-and-play fashion.
We show that maintaining privacy results in a noticeable reduction in performance only when the level of privacy protection is very high.
Finally, we demonstrate that the DP-SGLD algorithm, a private convex solver, can be employed for maximum likelihood estimation with a high degree of confidence.
arXiv Detail & Related papers (2022-10-05T12:55:53Z) - Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z) - Debugging Differential Privacy: A Case Study for Privacy Auditing [60.87570714269048]
We show that auditing can also be used to find flaws in (purportedly) differentially private schemes.
In this case study, we audit a recent open source implementation of a differentially private deep learning algorithm and find, with 99.99999999% confidence, that the implementation does not satisfy the claimed differential privacy guarantee.
arXiv Detail & Related papers (2022-02-24T17:31:08Z) - Gradient Masking and the Underestimated Robustness Threats of
Differential Privacy in Deep Learning [0.0]
This paper experimentally evaluates the impact of training with Differential Privacy (DP) on model vulnerability against a broad range of adversarial attacks.
The results suggest that private models are less robust than their non-private counterparts, and that adversarial examples transfer better among DP models than between non-private and private ones.
arXiv Detail & Related papers (2021-05-17T16:10:54Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks, trained with differential privacy, in some settings might be even more vulnerable in comparison to non-private versions.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model (a simplified DP-SGD update combining both ingredients is sketched after this list).
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Chasing Your Long Tails: Differentially Private Prediction in Health
Care Settings [34.26542589537452]
Methods for differentially private (DP) learning provide a general-purpose approach to learn models with privacy guarantees.
Modern methods for DP learning ensure privacy through mechanisms that censor information judged as too unique.
We use state-of-the-art methods for DP learning to train privacy-preserving models in clinical prediction tasks.
arXiv Detail & Related papers (2020-10-13T19:56:37Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual
Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z) - Tempered Sigmoid Activations for Deep Learning with Differential Privacy [33.574715000662316]
We show that the choice of activation function is central to bounding the sensitivity of privacy-preserving deep learning (the tempered sigmoid family is sketched after this list).
We achieve new state-of-the-art accuracy on MNIST, FashionMNIST, and CIFAR10 without any modification of the learning procedure fundamentals.
arXiv Detail & Related papers (2020-07-28T13:19:45Z)