Discriminative Adversarial Privacy: Balancing Accuracy and Membership Privacy in Neural Networks
- URL: http://arxiv.org/abs/2306.03054v1
- Date: Mon, 5 Jun 2023 17:25:45 GMT
- Title: Discriminative Adversarial Privacy: Balancing Accuracy and Membership Privacy in Neural Networks
- Authors: Eugenio Lomurno, Alberto Archetti, Francesca Ausonio, Matteo Matteucci
- Abstract summary: Discriminative Adversarial Privacy (DAP) is a learning technique designed to achieve a balance between model performance, speed, and privacy.
DAP relies on adversarial training based on a novel loss function able to minimise the prediction error while maximising the MIA's error.
In addition, we introduce a novel metric named Accuracy Over Privacy (AOP) to capture the performance-privacy trade-off.
- Score: 7.0895962209555465
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The remarkable proliferation of deep learning across various industries has
underscored the importance of data privacy and security in AI pipelines. As the
evolution of sophisticated Membership Inference Attacks (MIAs) threatens the
secrecy of individual-specific information used for training deep learning
models, Differential Privacy (DP) has emerged as one of the most widely used
techniques to protect models against malicious attacks. However, despite its
proven theoretical properties, DP can significantly hamper model performance
and increase training time, making its use impractical in real-world scenarios.
Tackling this issue, we present Discriminative Adversarial Privacy (DAP), a
novel learning technique designed to address the limitations of DP by achieving
a balance between model performance, speed, and privacy. DAP relies on
adversarial training based on a novel loss function able to minimise the
prediction error while maximising the MIA's error. In addition, we introduce a
novel metric named Accuracy Over Privacy (AOP) to capture the
performance-privacy trade-off. Finally, to validate our claims, we compare DAP
with diverse DP scenarios, providing an analysis of the results from
performance, time, and privacy preservation perspectives.
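The abstract describes, but does not implement, the core mechanism: a combined objective that keeps the task loss low while driving a membership-inference discriminator's loss high. The sketch below is a minimal PyTorch illustration of an adversarial objective of that shape; every name in it (MembershipDiscriminator, dap_style_loss, lambda_adv) is hypothetical and is not the authors' code. The accuracy_over_privacy helper is likewise only an illustrative placeholder for the intuition behind AOP; the paper's exact formula is not reproduced here.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Hypothetical sketch, not the authors' code: a DAP-style objective that
    # minimises the prediction error while maximising the error of a
    # membership-inference (MIA) discriminator trained alongside the model.

    class MembershipDiscriminator(nn.Module):
        # Toy MIA model: guesses membership from the target's softmax output.
        def __init__(self, num_classes):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(num_classes, 64), nn.ReLU(), nn.Linear(64, 1)
            )

        def forward(self, probs):
            return self.net(probs).squeeze(-1)  # membership logit

    def dap_style_loss(logits, labels, mia_logits, membership, lambda_adv=0.5):
        # Task cross-entropy minus a weighted MIA cross-entropy: minimising
        # this keeps the classifier accurate while pushing the discriminator's
        # membership predictions toward failure. lambda_adv is a hypothetical
        # trade-off weight.
        task_loss = F.cross_entropy(logits, labels)
        mia_loss = F.binary_cross_entropy_with_logits(mia_logits, membership)
        return task_loss - lambda_adv * mia_loss

    def accuracy_over_privacy(task_acc, mia_acc):
        # Illustrative placeholder only; see the paper for the actual AOP
        # definition. Intuition: discount task accuracy by how far the MIA
        # beats random guessing (0.5).
        return task_acc / (2.0 * max(mia_acc, 0.5))

In a full adversarial loop the discriminator would be updated in alternation, minimising its own binary cross-entropy on member/non-member examples, while the target model minimises the combined loss above.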
Related papers
- DeMem: Privacy-Enhanced Robust Adversarial Learning via De-Memorization [2.473007680791641]
Adversarial robustness is essential for ensuring the trustworthiness of machine learning models in real-world applications.
Previous studies have shown that enhancing adversarial robustness through adversarial training increases vulnerability to privacy attacks.
We propose DeMem, which selectively targets high-risk samples, achieving a better balance between privacy protection and model robustness.
arXiv Detail & Related papers (2024-12-08T00:22:58Z)
- Pseudo-Probability Unlearning: Towards Efficient and Privacy-Preserving Machine Unlearning [59.29849532966454]
We propose Pseudo-Probability Unlearning (PPU), a novel method that enables models to forget data in a privacy-preserving manner.
Our method achieves over 20% improvement in forgetting error compared to the state-of-the-art.
arXiv Detail & Related papers (2024-11-04T21:27:06Z)
- Reconciling AI Performance and Data Reconstruction Resilience for Medical Imaging [52.578054703818125]
Artificial Intelligence (AI) models are vulnerable to information leakage of their training data, which can be highly sensitive.
Differential Privacy (DP) aims to circumvent these susceptibilities by setting a quantifiable privacy budget.
We show that using very large privacy budgets can render reconstruction attacks impossible, while drops in performance are negligible.
arXiv Detail & Related papers (2023-12-05T12:21:30Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind).
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
We present MESAS, the first defense robust against strong adaptive adversaries; it is effective in real-world data scenarios with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- On the utility and protection of optimization with differential privacy and classic regularization techniques [9.413131350284083]
We study the effectiveness of the differentially private stochastic gradient descent (DP-SGD) algorithm against standard optimization practices with regularization techniques.
We discuss differential privacy's flaws and limits and empirically demonstrate the often superior privacy-preserving properties of dropout and L2-regularization (a minimal DP-SGD sketch follows this entry).
arXiv Detail & Related papers (2022-09-07T14:10:21Z)
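Since the entry above contrasts DP-SGD with plain regularisation, a reminder of DP-SGD's two ingredients, per-example gradient clipping and Gaussian noise, may help. The sketch below is a slow but explicit PyTorch version with hypothetical hyperparameter values; real training would use a library such as Opacus, which implements this efficiently and tracks the privacy budget.

    import torch

    def dp_sgd_step(model, loss_fn, xs, ys, lr=0.1, clip_norm=1.0,
                    noise_mult=1.1):
        # One DP-SGD step: clip each per-example gradient to clip_norm,
        # sum, add Gaussian noise, then apply the averaged noisy update.
        # Hyperparameter values here are hypothetical.
        params = [p for p in model.parameters() if p.requires_grad]
        summed = [torch.zeros_like(p) for p in params]
        for x, y in zip(xs, ys):  # per-example gradients, computed naively
            loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
            grads = torch.autograd.grad(loss, params)
            norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
            scale = min(1.0, clip_norm / (float(norm) + 1e-12))
            for s, g in zip(summed, grads):
                s.add_(g * scale)
        with torch.no_grad():
            for p, s in zip(params, summed):
                noise = torch.randn_like(s) * noise_mult * clip_norm
                p.add_(-(lr / len(xs)) * (s + noise))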
- Bounding Membership Inference [28.64031194463754]
We provide a tighter bound on the accuracy of any MI adversary when a training algorithm provides $\epsilon$-DP (the classical baseline is worked below).
Our scheme enables $\epsilon$-DP users to employ looser DP guarantees when training their model to limit the success of any MI adversary.
arXiv Detail & Related papers (2022-02-24T17:54:15Z)
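For context on what "tighter" means here: the classical hypothesis-testing view of DP already bounds a balanced MI adversary's accuracy by e^epsilon / (1 + e^epsilon); the paper sharpens this. A quick numeric check of the classical baseline (illustrative only; the paper's tighter bound is not reproduced here):

    import math

    def mi_accuracy_upper_bound(epsilon):
        # Classical bound on balanced MI accuracy under epsilon-DP, from the
        # hypothesis-testing view of DP; the paper above derives a tighter one.
        return math.exp(epsilon) / (1.0 + math.exp(epsilon))

    for eps in (0.1, 1.0, 3.0, 8.0):
        print(f"epsilon={eps}: MI accuracy <= {mi_accuracy_upper_bound(eps):.3f}")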
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
- Federated Deep Learning with Bayesian Privacy [28.99404058773532]
Federated learning (FL) aims to protect data privacy by cooperatively learning a model without sharing private data among users.
Homomorphic encryption (HE) based methods provide secure privacy protections but suffer from extremely high computational and communication overheads.
Deep learning with Differential Privacy (DP) has been implemented as a practical learning algorithm at a manageable cost in complexity.
arXiv Detail & Related papers (2021-09-27T12:48:40Z)
- Gradient Masking and the Underestimated Robustness Threats of Differential Privacy in Deep Learning [0.0]
This paper experimentally evaluates the impact of training with Differential Privacy (DP) on model vulnerability against a broad range of adversarial attacks.
The results suggest that private models are less robust than their non-private counterparts, and that adversarial examples transfer better among DP models than between non-private and private ones.
arXiv Detail & Related papers (2021-05-17T16:10:54Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks, trained with differential privacy, in some settings might be even more vulnerable in comparison to non-private versions.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)