On The Vulnerability of Recurrent Neural Networks to Membership Inference Attacks
- URL: http://arxiv.org/abs/2110.03054v1
- Date: Wed, 6 Oct 2021 20:20:35 GMT
- Title: On The Vulnerability of Recurrent Neural Networks to Membership Inference Attacks
- Authors: Yunhao Yang, Parham Gohari and Ufuk Topcu
- Abstract summary: We study the privacy implications of deploying recurrent neural networks in machine learning.
We consider membership inference attacks (MIAs) in which an attacker aims to infer whether a given data record has been used in the training of a learning agent.
- Score: 20.59493611017851
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the privacy implications of deploying recurrent neural networks in machine learning. We consider membership inference attacks (MIAs), in which an attacker aims to infer whether a given data record has been used in the training of a learning agent. Using existing MIAs that target feed-forward neural networks, we empirically demonstrate that the attack accuracy wanes for data records used earlier in the training history. In contrast, recurrent networks are specifically designed to better remember their past experience; hence, they are likely to be more vulnerable to MIAs than their feed-forward counterparts. We develop a pair of MIA layouts for two primary applications of recurrent networks, namely, deep reinforcement learning and sequence-to-sequence tasks. We use the first attack to provide empirical evidence that recurrent networks are indeed more vulnerable to MIAs than feed-forward networks with the same performance level. We use the second attack to showcase the differences between the effects of overtraining recurrent and feed-forward networks on the accuracy of their respective MIAs. Finally, we deploy a differential privacy mechanism to resolve the privacy vulnerability that the MIAs exploit. For both attack layouts, the privacy mechanism degrades the attack accuracy from above 80% to 50%, which is equivalent to guessing the data membership uniformly at random, while sacrificing less than 10% utility.
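The abstract leaves the two attack layouts to the paper itself; the common signal that loss-based MIAs exploit, however, is simply that training members tend to incur lower loss than non-members. Below is a minimal, generic sketch of such a loss-threshold attack against a recurrent classifier; the toy model, the non-member calibration set, and the 10th-percentile threshold are illustrative assumptions, not the paper's deep-RL or sequence-to-sequence constructions.

```python
# Hedged sketch: a generic loss-threshold membership inference attack on a
# sequence classifier. The model and calibration rule are placeholders, not
# the paper's deep-RL or seq2seq attack layouts.
import torch
import torch.nn as nn

class SeqClassifier(nn.Module):
    """Toy recurrent classifier standing in for the target model."""
    def __init__(self, vocab=100, hidden=32, classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.rnn = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, classes)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h[:, -1])          # logits from the last time step

@torch.no_grad()
def per_sample_loss(model, x, y):
    """Cross-entropy of each candidate record under the target model."""
    return nn.functional.cross_entropy(model(x), y, reduction="none")

@torch.no_grad()
def loss_threshold_mia(model, x_cand, y_cand, x_nonmember, y_nonmember):
    """Flag a record as a training member if its loss is unusually low,
    with the threshold calibrated on records known to be non-members."""
    tau = per_sample_loss(model, x_nonmember, y_nonmember).quantile(0.1)
    return per_sample_loss(model, x_cand, y_cand) < tau   # True => "member"
```

The differential-privacy defense reported in the abstract works against exactly this gap: noisy training shrinks the loss separation between members and non-members until the attack does no better than a coin flip.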
Related papers
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
We propose MESAS, the first defense robust against strong adaptive adversaries; it is effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
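For the MESAS entry above, the paper describes a cascade of metrics over client updates; purely to illustrate the multi-metric filtering idea, the sketch below scores each update on two assumed metrics (update norm and cosine similarity to a robust reference) and aggregates only the clients that pass both. The metric choice and z-score rule are our assumptions, not MESAS's actual tests.

```python
# Hedged illustration of multi-metric filtering of federated-learning updates.
# The two metrics and the z-score rule are assumptions for illustration; they
# are not the actual cascade of tests used by MESAS.
import numpy as np

def filter_updates(updates, z_max=2.0):
    """updates: list of 1-D numpy arrays (flattened client model deltas)."""
    U = np.stack(updates)                      # shape (clients, params)
    ref = np.median(U, axis=0)                 # robust reference update
    norms = np.linalg.norm(U, axis=1)
    cos = U @ ref / (norms * np.linalg.norm(ref) + 1e-12)
    kept = []
    for metric in (norms, cos):                # a client must pass every metric
        z = np.abs(metric - np.median(metric)) / (metric.std() + 1e-12)
        kept.append(z <= z_max)
    mask = np.logical_and.reduce(kept)
    return U[mask].mean(axis=0), mask          # aggregate only accepted clients
```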
- Backdoor Attacks in Peer-to-Peer Federated Learning [11.235386862864397]
Peer-to-Peer Federated Learning (P2PFL) offers advantages in terms of both privacy and reliability.
We propose new backdoor attacks for P2PFL that leverage structural graph properties to select the malicious nodes and achieve a high attack success rate.
arXiv Detail & Related papers (2023-01-23T21:49:28Z)
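The backdoor attacks above pick malicious peers by structural graph properties; as a hedged illustration of that selection step only, the sketch below uses degree centrality as an assumed criterion. The paper's actual property and placement strategy may differ.

```python
# Hedged sketch: choose attacker positions in a peer-to-peer topology by a
# structural property. Degree centrality is an assumed stand-in criterion.
import networkx as nx

def pick_malicious_nodes(graph: nx.Graph, budget: int):
    """Return the `budget` nodes with the highest degree centrality."""
    centrality = nx.degree_centrality(graph)
    return sorted(centrality, key=centrality.get, reverse=True)[:budget]

# Example: a random peer-to-peer overlay with 50 peers, 3 of them compromised.
overlay = nx.erdos_renyi_graph(n=50, p=0.1, seed=0)
print(pick_malicious_nodes(overlay, budget=3))
```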
- SPIN: Simulated Poisoning and Inversion Network for Federated Learning-Based 6G Vehicular Networks [9.494669823390648]
Vehicular networks have always faced data privacy preservation concerns.
Federated learning, however, is quite vulnerable to model inversion and model poisoning attacks.
We propose the simulated poisoning and inversion network (SPIN), which leverages an optimization-based approach to reconstruct data.
arXiv Detail & Related papers (2022-11-21T10:07:13Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model, with the added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
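The RelaxLoss summary above says little about what a "relaxed loss" looks like in practice. One way to keep member losses from collapsing toward zero is to stop descending once the batch loss falls below a target level alpha, as in the sketch below; the ascend-when-below-target rule and the value of alpha are our assumptions about how such a relaxation can be implemented, not necessarily the exact RelaxLoss update.

```python
# Hedged sketch of a "relaxed" training step: keep the mean loss near a target
# floor alpha instead of driving it to zero. The ascend-when-below-target rule
# is an assumption about one way such a relaxation can be implemented.
import torch
import torch.nn as nn

def relaxed_step(model, optimizer, x, y, alpha=0.5):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)
    if loss.item() >= alpha:
        loss.backward()          # normal descent while above the target
    else:
        (-loss).backward()       # gently push the loss back up toward alpha
    optimizer.step()
    return loss.item()
```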
- NetSentry: A Deep Learning Approach to Detecting Incipient Large-scale Network Attacks [9.194664029847019]
We show how to use Machine Learning for Network Intrusion Detection (NID) in a principled way.
We propose NetSentry, perhaps the first NIDS of its kind, which builds on Bi-ALSTM, an original ensemble of sequential neural models.
We demonstrate F1 score gains above 33% over the state of the art, as well as up to 3 times higher rates of detecting attacks such as XSS and web brute-force.
arXiv Detail & Related papers (2022-02-20T17:41:02Z)
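Bi-ALSTM is described above only as an ensemble of sequential models; the sketch below shows the kind of bidirectional recurrent building block such an NIDS might stack, with all layer sizes and the single-layer design assumed.

```python
# Hedged sketch: a bidirectional LSTM classifier over sequences of flow
# features, as a stand-in building block for an NIDS such as NetSentry.
# Layer sizes and the single-layer design are assumptions.
import torch
import torch.nn as nn

class BiLSTMDetector(nn.Module):
    def __init__(self, n_features=40, hidden=64, n_classes=2):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True,
                           bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, flows):                  # flows: (batch, time, features)
        h, _ = self.rnn(flows)
        return self.head(h[:, -1])             # classify from the last step

# Usage: score a batch of 8 traffic windows, each 20 time steps long.
logits = BiLSTMDetector()(torch.randn(8, 20, 40))
```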
- Do Not Trust Prediction Scores for Membership Inference Attacks [15.567057178736402]
Membership inference attacks (MIAs) aim to determine whether a specific sample was used to train a predictive model.
We argue that relying on prediction scores for this purpose is a fallacy for many modern deep network architectures.
We are able to produce a potentially infinite number of samples falsely classified as part of the training data.
arXiv Detail & Related papers (2021-11-17T12:39:04Z)
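The "do not trust prediction scores" argument targets the naive attack sketched below, which declares any high-confidence sample a training member; the paper shows that arbitrarily many non-members can be made to clear such a threshold. The 0.9 cutoff is an assumed value.

```python
# Hedged sketch of the naive confidence-threshold MIA that the paper argues
# against: call a sample a "member" whenever the model is very confident.
# Adversarially or atypically constructed inputs can also receive high
# confidence, producing false "members".
import torch

@torch.no_grad()
def score_based_mia(model, x, threshold=0.9):
    probs = torch.softmax(model(x), dim=-1)
    return probs.max(dim=-1).values > threshold   # True => predicted member
```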
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
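The knowledge-enriched, GAN-based attack itself is too involved for a short sketch; below is only the baseline gradient-ascent inversion step that such attacks refine, i.e. synthesizing an input that maximizes the score of a chosen class. The input shape, learning rate, and iteration count are assumptions.

```python
# Hedged sketch of plain gradient-based model inversion (the baseline that
# GAN-based attacks improve on): synthesize an input that maximizes the
# model's score for a chosen class. Shapes and hyperparameters are assumed.
import torch

def invert_class(model, target_class, shape=(1, 3, 32, 32), steps=200, lr=0.1):
    x = torch.zeros(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -model(x)[0, target_class]      # ascend the target class logit
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)                # keep pixels in a valid range
    return x.detach()
```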
- How Does Data Augmentation Affect Privacy in Machine Learning? [94.52721115660626]
We propose new membership inference attacks that exploit the information in augmented data.
We establish the optimal membership inference attack when the model is trained with augmented data.
arXiv Detail & Related papers (2020-07-21T02:21:10Z)
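To illustrate how augmented copies can strengthen the membership signal, the sketch below averages a candidate's loss over several augmentations before thresholding; the augmentation set and the simple mean are assumptions, whereas the paper derives the optimal test for this setting.

```python
# Hedged sketch: aggregate the loss over augmented copies of a candidate and
# threshold the average. The augmentation set and simple mean are assumptions;
# the paper derives an optimal test rather than this heuristic.
import torch
import torch.nn as nn

@torch.no_grad()
def augmented_mia(model, x, y, augmentations, tau):
    """x: (batch, ...) candidates; augmentations: list of callables."""
    losses = torch.stack([
        nn.functional.cross_entropy(model(aug(x)), y, reduction="none")
        for aug in augmentations
    ])                                          # shape (n_augs, batch)
    return losses.mean(dim=0) < tau             # True => predicted member
```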
- Learning from Failure: Training Debiased Classifier from Biased Classifier [76.52804102765931]
We show that neural networks learn to rely on spurious correlation only when it is "easier" to learn than the desired knowledge.
We propose a failure-based debiasing scheme by training a pair of neural networks simultaneously.
Our method significantly improves the training of the debiased network against various types of bias in both synthetic and real-world datasets.
arXiv Detail & Related papers (2020-07-06T07:20:29Z)
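The failure-based scheme above trains two networks at once; a hedged sketch of the re-weighting idea is given below, in which samples that the bias-prone network handles poorly receive larger weight in the debiased network's loss. The weight formula is our assumption and omits the amplification loss used to make the first network bias-prone.

```python
# Hedged sketch of failure-based re-weighting: samples that the bias-prone
# network finds hard get larger weight in the debiased network's loss.
# The weight formula is an assumption; the actual method also trains the
# bias-prone network with a loss that amplifies easy (spurious) cues.
import torch
import torch.nn as nn

def debiasing_step(biased, debiased, opt_b, opt_d, x, y, eps=1e-8):
    ce = nn.functional.cross_entropy
    loss_b = ce(biased(x), y, reduction="none")
    loss_d = ce(debiased(x), y, reduction="none")
    weight = loss_b.detach() / (loss_b.detach() + loss_d.detach() + eps)

    opt_b.zero_grad(); loss_b.mean().backward(); opt_b.step()
    opt_d.zero_grad(); (weight * loss_d).mean().backward(); opt_d.step()
```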
- Neural Networks and Value at Risk [59.85784504799224]
We perform Monte-Carlo simulations of asset returns for Value at Risk threshold estimation.
Using equity markets and long-term bonds as test assets, we investigate neural networks for this estimation task.
We find that our networks perform significantly worse when fed substantially less data.
arXiv Detail & Related papers (2020-05-04T17:41:59Z)
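For the Value-at-Risk entry above, the Monte-Carlo baseline is easy to make concrete: simulate returns and read off a tail quantile, as in the sketch below. The i.i.d. normal return model and its parameters are illustrative assumptions, not the paper's simulation setup.

```python
# Hedged sketch: Monte-Carlo estimate of 1-day Value at Risk at the 99% level.
# The i.i.d. normal return model and its parameters are illustrative
# assumptions, not the paper's simulation setup.
import numpy as np

def monte_carlo_var(mu=0.0003, sigma=0.012, n_paths=100_000,
                    level=0.99, seed=0):
    rng = np.random.default_rng(seed)
    returns = rng.normal(mu, sigma, size=n_paths)   # simulated 1-day returns
    return -np.quantile(returns, 1.0 - level)       # loss threshold (positive)

print(f"99% 1-day VaR: {monte_carlo_var():.4%} of portfolio value")
```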
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.