Differentially Private and Adversarially Robust Machine Learning: An
Empirical Evaluation
- URL: http://arxiv.org/abs/2401.10405v1
- Date: Thu, 18 Jan 2024 22:26:31 GMT
- Title: Differentially Private and Adversarially Robust Machine Learning: An
Empirical Evaluation
- Authors: Janvi Thakkar, Giulio Zizzo, Sergio Maffeis
- Abstract summary: Malicious adversaries can attack machine learning models to infer sensitive information or damage the system by launching a series of evasion attacks.
This study explores the combination of adversarial training and differentially private training to defend against simultaneous attacks.
- Score: 2.8084422332394428
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Malicious adversaries can attack machine learning models to infer sensitive
information or damage the system by launching a series of evasion attacks.
Although various works address privacy and security concerns, they focus on
individual defenses; in practice, however, models may undergo simultaneous attacks.
This study explores the combination of adversarial training and differentially
private training to defend against simultaneous attacks. While
differentially-private adversarial training, as presented in DP-Adv,
outperforms other state-of-the-art methods, it lacks formal
privacy guarantees and empirical validation. Thus, in this work, we benchmark
the performance of this technique using a membership inference attack and
empirically show that the resulting approach is as private as non-robust
private models. This work also highlights the need to explore privacy
guarantees in dynamic training paradigms.
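The combined defense discussed above can be illustrated with a minimal sketch: craft adversarial examples and then apply a DP-SGD-style update with per-example gradient clipping and Gaussian noise. The logistic-regression model, the FGSM perturbation, and all hyperparameters here are illustrative assumptions, not the exact DP-Adv procedure.

```python
import numpy as np

def dp_adv_step(w, X, y, eps_adv=0.1, clip=1.0, sigma=1.0, lr=0.1, rng=None):
    # Illustrative sketch of differentially private adversarial training on
    # logistic regression: FGSM perturbation followed by a DP-SGD-style update.
    rng = rng or np.random.default_rng(0)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    # FGSM: perturb each input along the sign of the input-gradient of its loss.
    p = sigmoid(X @ w)
    grad_x = (p - y)[:, None] * w[None, :]              # d(loss_i)/d(x_i)
    X_adv = X + eps_adv * np.sign(grad_x)
    # Per-example parameter gradients on the adversarial batch.
    grads = (sigmoid(X_adv @ w) - y)[:, None] * X_adv   # shape (n, d)
    # DP-SGD: clip each per-example gradient to L2 norm <= clip ...
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
    # ... then add Gaussian noise scaled to the clipping bound and average.
    noisy_sum = grads.sum(axis=0) + rng.normal(0.0, sigma * clip, size=w.shape)
    return w - lr * noisy_sum / len(X)
```

The clipping bound is what makes the Gaussian noise calibration meaningful: each example's influence on the update is capped, so noise of scale `sigma * clip` masks any single example's contribution.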
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy, which allows for controlling the sensitive regions where differential privacy (DP) is applied.
Our method operates selectively on the data, allowing non-sensitive temporal regions to be defined without DP, or combining DP with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Students Parrot Their Teachers: Membership Inference on Model Distillation [54.392069096234074]
We study the privacy provided by knowledge distillation to both the teacher and student training sets.
Our attacks are strongest when student and teacher sets are similar, or when the attacker can poison the teacher set.
arXiv Detail & Related papers (2023-03-06T19:16:23Z)
- Protecting Split Learning by Potential Energy Loss [70.81375125791979]
We focus on the privacy leakage from the forward embeddings of split learning.
We propose the potential energy loss to make the forward embeddings more 'complicated'.
arXiv Detail & Related papers (2022-10-18T06:21:11Z)
- Truth Serum: Poisoning Machine Learning Models to Reveal Their Secrets [53.866927712193416]
We show that an adversary who can poison a training dataset can cause models trained on this dataset to leak private details belonging to other parties.
Our attacks are effective across membership inference, attribute inference, and data extraction.
Our results cast doubts on the relevance of cryptographic privacy guarantees in multiparty protocols for machine learning.
arXiv Detail & Related papers (2022-03-31T18:06:28Z)
- One Parameter Defense -- Defending against Data Inference Attacks via Differential Privacy [26.000487178636927]
Machine learning models are vulnerable to data inference attacks, such as membership inference and model inversion attacks.
Most existing defense methods only protect against membership inference attacks.
We propose a differentially private defense method that handles both types of attacks in a time-efficient manner.
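As a generic illustration of an output-side differentially private defense (not this paper's specific one-parameter mechanism), a model owner can perturb the released confidence vector before publication. The function name and the choice of Laplace noise are assumptions for the sketch.

```python
import numpy as np

def dp_release_scores(scores, epsilon=1.0, rng=None):
    # Illustrative sketch, not the paper's one-parameter method: perturb the
    # confidence vector with Laplace noise of scale 1/epsilon, then clip and
    # renormalize so a valid probability distribution is released.
    rng = rng or np.random.default_rng(0)
    noisy = np.asarray(scores, dtype=float) + rng.laplace(0.0, 1.0 / epsilon, size=len(scores))
    noisy = np.clip(noisy, 1e-6, None)
    return noisy / noisy.sum()
```

Perturbing the released scores limits what membership and model inversion adversaries can learn from confidence values, at the cost of noisier outputs for benign users.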
arXiv Detail & Related papers (2022-03-13T06:06:24Z)
- Defending against Reconstruction Attacks with Rényi Differential Privacy [72.1188520352079]
Reconstruction attacks allow an adversary to regenerate data samples of the training set using access to only a trained model.
Differential privacy is a known solution to such attacks, but is often used with a relatively large privacy budget.
We show that, for the same mechanism, we can derive privacy guarantees against reconstruction attacks that are better than the traditional ones from the literature.
arXiv Detail & Related papers (2022-02-15T18:09:30Z)
- Gradient Masking and the Underestimated Robustness Threats of Differential Privacy in Deep Learning [0.0]
This paper experimentally evaluates the impact of training with Differential Privacy (DP) on model vulnerability against a broad range of adversarial attacks.
The results suggest that private models are less robust than their non-private counterparts, and that adversarial examples transfer better among DP models than between non-private and private ones.
arXiv Detail & Related papers (2021-05-17T16:10:54Z)
- Federated Learning in Adversarial Settings [0.8701566919381224]
Federated learning schemes provide different trade-offs between robustness, privacy, bandwidth efficiency, and model accuracy.
We show that this extension performs as efficiently as the non-private but robust scheme, even with stringent privacy requirements.
This suggests a possible fundamental trade-off between Differential Privacy and robustness.
arXiv Detail & Related papers (2020-10-15T14:57:02Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
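The label-only setting above can be sketched with a toy stability heuristic: query the victim with randomly perturbed copies of a point and treat a stable predicted label as evidence of membership. The perturbation scale, query count, and scoring rule are illustrative assumptions, not the paper's exact attack.

```python
import numpy as np

def sampling_attack_score(predict_label, x, n_queries=100, noise=0.1, rng=None):
    # Illustrative label-only membership heuristic: training members often sit
    # further from the decision boundary, so their predicted label stays stable
    # under small random perturbations of the input.
    rng = rng or np.random.default_rng(0)
    base = predict_label(x)
    stable = sum(predict_label(x + rng.normal(0.0, noise, size=x.shape)) == base
                 for _ in range(n_queries))
    return stable / n_queries   # higher => more evidence of membership
```

A defender can blunt this signal with the differentially private mechanisms the paper considers, such as gradient perturbation during training or output perturbation at prediction time.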
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.