Practical Blind Membership Inference Attack via Differential Comparisons
- URL: http://arxiv.org/abs/2101.01341v2
- Date: Thu, 7 Jan 2021 02:24:04 GMT
- Title: Practical Blind Membership Inference Attack via Differential Comparisons
- Authors: Bo Hui, Yuchen Yang, Haolin Yuan, Philippe Burlina, Neil Zhenqiang
Gong and Yinzhi Cao
- Abstract summary: Membership inference (MI) attacks affect user privacy by inferring whether given data samples have been used to train a target learning model.
BlindMI probes the target model and extracts membership semantics via a novel approach, called differential comparison.
BlindMI was evaluated by comparing it with state-of-the-art MI attack algorithms.
- Score: 22.582872789369752
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Membership inference (MI) attacks affect user privacy by inferring whether
given data samples have been used to train a target learning model, e.g., a
deep neural network. There are two types of MI attacks in the literature, i.e.,
those with and without shadow models. The success of the former heavily depends
on the quality of the shadow model, i.e., the transferability between the
shadow and the target; the latter, given only black-box probing access to the
target model, cannot make effective inferences about unknown samples, compared with MI
attacks using shadow models, because too few qualified samples are labeled with
ground-truth membership information.
In this paper, we propose an MI attack, called BlindMI, which probes the
target model and extracts membership semantics via a novel approach, called
differential comparison. The high-level idea is that BlindMI first generates a
non-member dataset by transforming existing samples into new ones, and then
differentially moves samples from a target dataset into the generated non-member
set in an iterative manner. If moving a sample increases the set distance,
BlindMI considers the sample a non-member, and vice versa.
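Below is a minimal sketch of this differential-comparison idea, assuming the target model is probed for softmax probability vectors and an MMD-style kernel distance is used as the set distance; the function names, kernel choice, and loop structure are illustrative assumptions and are not taken from the paper's released code.

```python
import numpy as np

def mmd(a, b, sigma=1.0):
    """Empirical maximum mean discrepancy between two sets of probability
    vectors, using a Gaussian kernel (an illustrative choice of set distance)."""
    def k(x, y):
        d = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
        return np.exp(-d / (2.0 * sigma ** 2))
    return k(a, a).mean() + k(b, b).mean() - 2.0 * k(a, b).mean()

def differential_comparison(target_probs, nonmember_probs, max_iters=5):
    """Sketch of the differential-comparison loop: tentatively move each target
    sample into the generated non-member set; if the set distance grows, keep it
    there (non-member), otherwise put it back (suspected member).
    Both inputs are numpy arrays of the target model's softmax outputs."""
    suspected_members = list(range(len(target_probs)))
    for _ in range(max_iters):
        moved_any = False
        for i in list(suspected_members):
            current = target_probs[suspected_members]
            kept = target_probs[[j for j in suspected_members if j != i]]
            if len(kept) == 0:
                break
            candidate = np.vstack([nonmember_probs, target_probs[i:i + 1]])
            if mmd(kept, candidate) > mmd(current, nonmember_probs):
                # Moving sample i widens the gap between the two sets, so it
                # behaves like the generated non-members.
                suspected_members.remove(i)
                nonmember_probs = candidate
                moved_any = True
        if not moved_any:
            break
    return suspected_members  # indices still judged to be members
```

A sample whose move widens the gap between the two sets behaves like the generated non-members, which is why it is dropped from the suspected-member set; the loop repeats until no sample moves.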
BlindMI was evaluated by comparing it with state-of-the-art MI attack
algorithms. Our evaluation shows that BlindMI improves F1-score by nearly 20%
when compared to state-of-the-art on some datasets, such as Purchase-50 and
Birds-200, in the blind setting where the adversary does not know the target
model's architecture and the target dataset's ground truth labels. We also show
that BlindMI can defeat state-of-the-art defenses.
Related papers
- Why Train More? Effective and Efficient Membership Inference via
Memorization [34.13594460560715]
Membership Inference Attacks aim to identify specific data samples within the private training dataset of machine learning models.
By strategically choosing the samples, MI adversaries can maximize their attack success while minimizing the number of shadow models.
arXiv Detail & Related papers (2023-10-12T03:29:53Z) - Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degenerate when training data are different from testing data.
We propose a novel adversarial information network (AIN) to address it.
arXiv Detail & Related papers (2023-05-23T02:14:11Z) - Membership Inference Attacks against Synthetic Data through Overfitting
Detection [84.02632160692995]
We argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model.
arXiv Detail & Related papers (2023-02-24T11:27:39Z) - Pseudo Label-Guided Model Inversion Attack via Conditional Generative
Adversarial Network [102.21368201494909]
Model inversion (MI) attacks have raised increasing concerns about privacy.
Recent MI attacks leverage a generative adversarial network (GAN) as an image prior to narrow the search space.
We propose the Pseudo Label-Guided MI (PLG-MI) attack via a conditional GAN (cGAN).
arXiv Detail & Related papers (2023-02-20T07:29:34Z) - l-Leaks: Membership Inference Attacks with Logits [5.663757165885866]
We present attacks based on black-box access to the target model. We name our attack l-Leaks.
We build the shadow model by learning the logits of the target model, making the shadow model more similar to the target. The shadow model then has sufficient confidence in the member samples of the target model (a minimal logit-matching sketch appears after this list).
arXiv Detail & Related papers (2022-05-13T06:59:09Z) - An Efficient Subpopulation-based Membership Inference Attack [11.172550334631921]
We introduce a fundamentally different MI attack approach which obviates the need to train hundreds of shadow models.
We achieve the state-of-the-art membership inference accuracy while significantly reducing the training cost.
arXiv Detail & Related papers (2022-03-04T00:52:06Z) - Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z) - Two Sides of the Same Coin: White-box and Black-box Attacks for Transfer
Learning [60.784641458579124]
We show that fine-tuning effectively enhances model robustness under white-box FGSM attacks.
We also propose a black-box attack method for transfer learning models which attacks the target model with the adversarial examples produced by its source model.
To systematically measure the effect of both white-box and black-box attacks, we propose a new metric to evaluate how transferable are the adversarial examples produced by a source model to a target model.
arXiv Detail & Related papers (2020-08-25T15:04:32Z) - How Does Data Augmentation Affect Privacy in Machine Learning? [94.52721115660626]
We propose new MI attacks to utilize the information of augmented data.
We establish the optimal membership inference when the model is trained with augmented data.
arXiv Detail & Related papers (2020-07-21T02:21:10Z) - On the Difficulty of Membership Inference Attacks [11.172550334631921]
Recent studies propose membership inference (MI) attacks on deep models.
Despite their apparent success, these studies only report accuracy, precision, and recall of the positive (member) class.
We show that the way MI attack performance has been reported is often misleading, because the attacks suffer from a high false positive rate, or false alarm rate (FAR), that has not been reported (see the metrics sketch after this list).
arXiv Detail & Related papers (2020-05-27T23:09:17Z)