Robustness Threats of Differential Privacy
- URL: http://arxiv.org/abs/2012.07828v1
- Date: Mon, 14 Dec 2020 18:59:24 GMT
- Title: Robustness Threats of Differential Privacy
- Authors: Nurislam Tursynbek, Aleksandr Petiushko, Ivan Oseledets
- Abstract summary: We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
- Score: 70.818129585404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differential privacy is a powerful, gold-standard concept for measuring and guaranteeing privacy in data analysis. It is well known that differential privacy reduces a model's accuracy. However, it is unclear how it affects the security of the model from a robustness point of view. In this paper, we empirically observe an interesting trade-off between differential privacy and the security of neural networks. Standard neural networks are vulnerable to input perturbations, whether adversarial attacks or common corruptions. We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts. To explore this, we extensively study different robustness measurements, including FGSM and PGD adversaries, distance to linear decision boundaries, curvature profile, and performance on a corrupted dataset. Finally, we study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect (decrease or increase) the robustness of the model.
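To make the two DP-training ingredients named above concrete, the following is a minimal sketch, assuming a toy logistic-regression model in NumPy, of per-example gradient clipping plus Gaussian noise addition (a DP-SGD-style update) together with an FGSM probe of the kind used to measure robustness. The clipping norm C, noise multiplier sigma, learning rate, and attack budget eps are illustrative placeholders, not values from the paper.

```python
import numpy as np

def per_example_grads(w, X, y):
    """Per-example logistic-regression gradients, one row per example."""
    p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities, shape (batch,)
    return (p - y)[:, None] * X        # dL_i/dw for each example, shape (batch, dim)

def dp_sgd_step(w, X, y, lr=0.1, C=1.0, sigma=1.0, rng=None):
    """One DP-SGD-style update: clip each per-example gradient to L2 norm C,
    sum, add Gaussian noise with std sigma*C, and average over the batch."""
    rng = np.random.default_rng() if rng is None else rng
    g = per_example_grads(w, X, y)
    norms = np.linalg.norm(g, axis=1, keepdims=True)
    g_clipped = g / np.maximum(1.0, norms / C)          # gradient clipping
    noise = rng.normal(0.0, sigma * C, size=w.shape)    # noise addition
    g_priv = (g_clipped.sum(axis=0) + noise) / len(X)
    return w - lr * g_priv

def fgsm(w, x, y, eps=0.1):
    """FGSM probe: one signed-gradient step on the input of a linear model."""
    p = 1.0 / (1.0 + np.exp(-x @ w))
    grad_x = (p - y) * w               # dL/dx for logistic regression
    return x + eps * np.sign(grad_x)

# Toy usage: train with the private update, then perturb one input.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(20)
for _ in range(300):
    idx = rng.choice(len(X), size=32, replace=False)
    w = dp_sgd_step(w, X[idx], y[idx], rng=rng)
x_adv = fgsm(w, X[0], y[0], eps=0.5)
print("clean logit:", X[0] @ w, " adversarial logit:", x_adv @ w)
```

In this toy setting, tightening the clipping norm or raising the noise multiplier changes the learned decision boundary, which is the kind of effect the paper examines with FGSM/PGD attacks, boundary distances, and curvature profiles.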
Related papers
- Privacy-Preserving Hybrid Ensemble Model for Network Anomaly Detection: Balancing Security and Data Protection [6.5920909061458355]
We propose a hybrid ensemble model that incorporates privacy-preserving techniques to address both detection accuracy and data protection.
Our model combines the strengths of several machine learning algorithms, including K-Nearest Neighbors (KNN), Support Vector Machines (SVM), XGBoost, and Artificial Neural Networks (ANN).
arXiv Detail & Related papers (2025-02-13T06:33:16Z) - Differentially Private Random Feature Model [52.468511541184895]
We produce a differentially private random feature model for privacy-preserving kernel machines.
We show that our method preserves privacy and derive a generalization error bound for the method.
arXiv Detail & Related papers (2024-12-06T05:31:08Z) - Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence.
We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context.
We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z) - Causal Inference with Differentially Private (Clustered) Outcomes [16.166525280886578]
Estimating causal effects from randomized experiments is only feasible if participants agree to reveal their responses.
We suggest a new differential privacy mechanism, Cluster-DP, which leverages any given cluster structure.
We show that, depending on an intuitive measure of cluster quality, we can improve the variance loss while maintaining our privacy guarantees.
arXiv Detail & Related papers (2023-08-02T05:51:57Z) - How Do Input Attributes Impact the Privacy Loss in Differential Privacy? [55.492422758737575]
We study the connection between the per-subject norm in DP neural networks and individual privacy loss.
We introduce a novel metric termed the Privacy Loss-Input Susceptibility (PLIS) which allows one to apportion the subject's privacy loss to their input attributes.
arXiv Detail & Related papers (2022-11-18T11:39:03Z) - A Differentially Private Framework for Deep Learning with Convexified Loss Functions [4.059849656394191]
Differential privacy (DP) has been applied in deep learning for preserving privacy of the underlying training sets.
Existing DP practice falls into three categories - objective perturbation, gradient perturbation and output perturbation.
We propose a novel output perturbation framework by injecting DP noise into a randomly sampled neuron.
arXiv Detail & Related papers (2022-04-03T11:10:05Z) - Learning to be adversarially robust and differentially private [42.7930886063265]
We study the difficulties in learning that arise from robust and differentially private optimization.
The data-dimensionality-dependent term introduced by private optimization compounds the difficulty of learning a robust model.
The size of adversarial generalization and the clipping norm in differential privacy both increase the curvature of the loss landscape, implying poorer performance.
arXiv Detail & Related papers (2022-01-06T22:33:06Z) - Gradient Masking and the Underestimated Robustness Threats of Differential Privacy in Deep Learning [0.0]
This paper experimentally evaluates the impact of training with Differential Privacy (DP) on model vulnerability against a broad range of adversarial attacks.
The results suggest that private models are less robust than their non-private counterparts, and that adversarial examples transfer better among DP models than between non-private and private ones.
arXiv Detail & Related papers (2021-05-17T16:10:54Z) - On Deep Learning with Label Differential Privacy [54.45348348861426]
We study the multi-class classification setting where the labels are considered sensitive and ought to be protected.
We propose a new algorithm for training deep neural networks with label differential privacy, and run evaluations on several datasets.
arXiv Detail & Related papers (2021-02-11T15:09:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.