How Does a Deep Learning Model Architecture Impact Its Privacy? A
Comprehensive Study of Privacy Attacks on CNNs and Transformers
- URL: http://arxiv.org/abs/2210.11049v3
- Date: Fri, 2 Feb 2024 08:11:13 GMT
- Title: How Does a Deep Learning Model Architecture Impact Its Privacy? A
Comprehensive Study of Privacy Attacks on CNNs and Transformers
- Authors: Guangsheng Zhang, Bo Liu, Huan Tian, Tianqing Zhu, Ming Ding, Wanlei
Zhou
- Abstract summary: Privacy concerns arise due to the potential leakage of sensitive information from the training data.
Recent research has revealed that deep learning models are vulnerable to various privacy attacks.
- Score: 18.27174440444256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As a booming research area in the past decade, deep learning technologies
have been driven by big data collected and processed on an unprecedented scale.
However, privacy concerns arise due to the potential leakage of sensitive
information from the training data. Recent research has revealed that deep
learning models are vulnerable to various privacy attacks, including membership
inference attacks, attribute inference attacks, and gradient inversion attacks.
Notably, the efficacy of these attacks varies from model to model. In this
paper, we answer a fundamental question: Does model architecture affect model
privacy? By investigating representative model architectures from convolutional
neural networks (CNNs) to Transformers, we demonstrate that Transformers
generally exhibit higher vulnerability to privacy attacks than CNNs.
Additionally, we identify the micro design of activation layers, stem layers,
and layer normalization (LN) layers as major factors contributing to the
resilience of CNNs against privacy attacks, while the presence of attention
modules is another main factor that exacerbates the privacy vulnerability of
Transformers. Our findings offer valuable insights for defending deep learning
models against privacy attacks and inspire the research community to develop
privacy-friendly model architectures.
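
To make these attacks concrete, below is a minimal sketch of a loss-threshold membership inference attack in PyTorch. It is a generic illustration rather than the paper's exact attack pipeline, and every name in it (per_sample_losses, threshold_mia, the calibration loaders) is hypothetical; the same probe can be pointed at a CNN or a Transformer to compare how strongly each model's confidence separates training members from non-members.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def per_sample_losses(model, loader, device="cpu"):
    # Collect per-example cross-entropy losses; training members tend to
    # have lower loss than samples the model has never seen.
    model.eval().to(device)
    losses = []
    for x, y in loader:
        logits = model(x.to(device))
        losses.append(F.cross_entropy(logits, y.to(device), reduction="none").cpu())
    return torch.cat(losses)

def threshold_mia(model, member_loader, nonmember_loader, query_loader):
    # Calibrate a single loss threshold on known members / non-members,
    # then flag query samples whose loss falls below it as predicted members.
    member_loss = per_sample_losses(model, member_loader)
    nonmember_loss = per_sample_losses(model, nonmember_loader)
    threshold = 0.5 * (member_loss.mean() + nonmember_loss.mean())
    query_loss = per_sample_losses(model, query_loader)
    return query_loss < threshold  # True -> predicted training member
```

Stronger attacks in the literature replace the single calibrated threshold with shadow models or per-example likelihood-ratio tests, but they exploit the same basic signal: training members tend to incur lower loss.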
Related papers
- Privacy Preserving Properties of Vision Classifiers [3.004632712148892]
We evaluate the privacy-preserving properties of vision classifiers across diverse architectures.
Our analysis highlights how architectural differences, such as input representation, feature extraction mechanisms, and weight structures, influence privacy risks.
Our findings provide actionable insights into the design of secure and privacy-aware machine learning systems.
arXiv Detail & Related papers (2025-02-02T11:50:00Z) - Model Inversion Attacks: A Survey of Approaches and Countermeasures [59.986922963781]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged; it aims to extract sensitive features of the private training data.
Despite the significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs.
This survey aims to summarize up-to-date MIA methods in both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z) - New Emerged Security and Privacy of Pre-trained Model: a Survey and Outlook [54.24701201956833]
Security and privacy issues have undermined users' confidence in pre-trained models.
The current literature lacks a clear taxonomy of emerging attacks and defenses for pre-trained models.
The survey proposes a taxonomy that categorizes attacks and defenses into No-Change, Input-Change, and Model-Change approaches.
arXiv Detail & Related papers (2024-11-12T10:15:33Z) - Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z) - BrainLeaks: On the Privacy-Preserving Properties of Neuromorphic Architectures against Model Inversion Attacks [3.4673556247932225]
Conventional artificial neural networks (ANNs) have been found vulnerable to several attacks that can leak sensitive data.
Our study is motivated by the intuition that the non-differentiable aspect of spiking neural networks (SNNs) might result in inherent privacy-preserving properties.
We develop novel inversion attack strategies that are comprehensively designed to target SNNs.
arXiv Detail & Related papers (2024-02-01T03:16:40Z) - Beyond Gradient and Priors in Privacy Attacks: Leveraging Pooler Layer Inputs of Language Models in Federated Learning [24.059033969435973]
This paper presents a two-stage privacy attack strategy that targets the vulnerabilities in the architecture of contemporary language models.
Our comparative experiments demonstrate superior attack performance across various datasets and scenarios.
We call for the community to recognize and address these potential privacy risks in designing large language models.
arXiv Detail & Related papers (2023-12-10T01:19:59Z) - Security and Privacy Challenges in Deep Learning Models [0.0]
Deep learning models can be subjected to various attacks that compromise model security and data privacy.
Model extraction attacks, model inversion attacks, and adversarial attacks are discussed.
Data poisoning attacks add harmful data to the training set, disrupting the learning process and reducing the reliability of the deep learning model.
arXiv Detail & Related papers (2023-11-23T00:26:14Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model (a minimal DP-SGD sketch follows after this list).
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Privacy and Robustness in Federated Learning: Attacks and Defenses [74.62641494122988]
We conduct the first comprehensive survey on this topic.
Through a concise introduction to the concept of FL, and a unique taxonomy covering: 1) threat models; 2) poisoning attacks and defenses against robustness; 3) inference attacks and defenses against privacy, we provide an accessible review of this important topic.
arXiv Detail & Related papers (2020-12-07T12:11:45Z) - Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
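
As a reference point for the gradient clipping and noise addition discussed in the Robustness Threats of Differential Privacy entry above, here is a minimal sketch of a single DP-SGD-style update step in PyTorch. It is a generic illustration under assumed defaults; dp_sgd_step, clip_norm, noise_multiplier, and lr are hypothetical names, not the surveyed paper's training setup.

```python
import torch

def dp_sgd_step(model, loss_fn, batch, clip_norm=1.0, noise_multiplier=1.0, lr=0.1):
    # One differentially private update: clip each per-sample gradient to
    # clip_norm, sum the clipped gradients, add Gaussian noise scaled by
    # noise_multiplier, then apply the averaged noisy gradient.
    xs, ys = batch
    summed = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):                         # per-sample gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        grads = [p.grad.detach().clone() for p in model.parameters()]
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (clip_norm / (total_norm + 1e-12)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)                        # accumulate clipped grads
    with torch.no_grad():
        for p, s in zip(model.parameters(), summed):
            noise = torch.randn_like(s) * noise_multiplier * clip_norm
            p.add_(-(lr / len(xs)) * (s + noise))    # noisy averaged update
```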
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.