Evaluating Membership Inference Through Adversarial Robustness
- URL: http://arxiv.org/abs/2205.06986v1
- Date: Sat, 14 May 2022 06:48:47 GMT
- Title: Evaluating Membership Inference Through Adversarial Robustness
- Authors: Zhaoxi Zhang, Leo Yu Zhang, Xufei Zheng, Bilal Hussain Abbasi, and Shengshan Hu
- Abstract summary: We propose an enhanced methodology for membership inference attacks based on adversarial robustness.
We evaluate our proposed method on three datasets: Fashion-MNIST, CIFAR-10, and CIFAR-100.
- Score: 6.983991370116041
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep learning is being adopted in an ever-growing range of
applications. Owing to its strong performance, it is now used in many
security- and privacy-sensitive domains alongside conventional ones. A key
ingredient of deep learning's efficacy is abundant training data, which often
includes highly sensitive, private information and has, in turn, made the
public wary of the technology. Membership inference attacks are considered
especially harmful because they reveal whether a given data point belongs to a
model's training set, thereby leaking information about the training data and
its characteristics. To highlight the significance of such attacks, we propose
an enhanced membership inference methodology based on adversarial robustness,
which adjusts the directions of adversarial perturbations through label
smoothing in a white-box setting. We evaluate the proposed method on three
datasets: Fashion-MNIST, CIFAR-10, and CIFAR-100. Our experiments show that it
outperforms the existing adversarial robustness-based method when attacking
normally trained models, and that it also outperforms state-of-the-art
metric-based membership inference methods when attacking adversarially trained
models. The code for reproducing these results is available at
\url{https://github.com/plll4zzx/Evaluating-Membership-Inference-Through-Adversarial-Robustness}.
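To make the approach concrete, here is a minimal sketch, assuming a PyTorch
classifier with inputs in [0, 1]: the membership score is the L2 size of the
perturbation an iterative attack accumulates before the prediction flips, with
the attack steered toward a label-smoothed target distribution. The function
names (`smoothed_target`, `mi_score`) and the exact attack loop are
illustrative assumptions, not the authors' implementation; see the repository
above for the real code.

```python
# Illustrative sketch (not the authors' code): use the size of the adversarial
# perturbation needed to flip a prediction as a membership score, steering the
# attack with a label-smoothed target distribution.
import torch
import torch.nn.functional as F

def smoothed_target(label: int, num_classes: int, smoothing: float = 0.1) -> torch.Tensor:
    """One-hot target for `label`, softened by label smoothing."""
    target = torch.full((num_classes,), smoothing / (num_classes - 1))
    target[label] = 1.0 - smoothing
    return target

def mi_score(model, x, label, num_classes, steps=50, step_size=1e-2):
    """L2 size of the perturbation accumulated while pushing `x` away from a
    label-smoothed target; training members tend to need larger perturbations."""
    model.eval()
    target = smoothed_target(label, num_classes).to(x.device).unsqueeze(0)
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        log_probs = F.log_softmax(model(x_adv.unsqueeze(0)), dim=1)
        # Ascend on the KL divergence from the smoothed target (untargeted).
        loss = F.kl_div(log_probs, target, reduction="batchmean")
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv += step_size * grad.sign()  # FGSM-style ascent step
            x_adv.clamp_(0.0, 1.0)            # stay in valid image range
            flipped = model(x_adv.unsqueeze(0)).argmax(dim=1).item() != label
        if flipped:
            break                             # stop once the prediction flips
    return (x_adv.detach() - x).norm(p=2).item()
```

Training members typically sit farther from the decision boundary, so a larger
score is (weak) evidence of membership.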
Related papers
- A practical approach to evaluating the adversarial distance for machine learning classifiers [2.2120851074630177]
This paper investigates the estimation of the more informative adversarial distance using iterative adversarial attacks and a certification approach.
We find that our adversarial attack approach is effective compared to related implementations, while the certification method falls short of expectations.
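As a hedged illustration of the iterative-attack side of that estimate (names
and loop are assumptions, not the paper's implementation), one can take the
adversarial distance to be the L2 norm of the first perturbation that flips a
PyTorch classifier's prediction:

```python
# Illustrative sketch: estimate a sample's adversarial distance as the L2 norm
# of the smallest perturbation (found by a simple iterative attack) that flips
# the model's prediction. Smaller distances mean a more fragile prediction.
import torch

def adversarial_distance(model, x, steps=100, step_size=1e-2):
    model.eval()
    with torch.no_grad():
        original = model(x.unsqueeze(0)).argmax(dim=1).item()
    x_adv = x.clone().detach().requires_grad_(True)
    for _ in range(steps):
        logits = model(x_adv.unsqueeze(0))
        loss = -logits[0, original]              # push the original class down
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv += step_size * grad / (grad.norm() + 1e-12)  # L2-normalized step
            if model(x_adv.unsqueeze(0)).argmax(dim=1).item() != original:
                return (x_adv - x).norm(p=2).item()
    return float("inf")                          # no flip within the budget
```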
arXiv Detail & Related papers (2024-09-05T14:57:01Z)
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been considered a difficult property to encode in neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- Class Incremental Learning for Adversarial Robustness [17.06592851567578]
Adversarial training integrates adversarial examples into model training to enhance robustness.
We observe that combining incremental learning with naive adversarial training easily leads to a loss of robustness.
We propose the Flatness Preserving Distillation (FPD) loss that leverages the output difference between adversarial and clean examples.
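The paper's exact FPD formulation is not given here; as one hedged reading of
"leveraging the output difference between adversarial and clean examples", the
sketch below penalizes the divergence between a model's clean and adversarial
outputs:

```python
# Hedged illustration, not the paper's FPD definition: a consistency term that
# penalizes the output difference between clean and adversarial views.
import torch.nn.functional as F

def output_consistency_loss(model, x_clean, x_adv):
    log_p_clean = F.log_softmax(model(x_clean), dim=1)
    p_adv = F.softmax(model(x_adv), dim=1)
    # KL(adv || clean): small when adversarial outputs track the clean ones.
    return F.kl_div(log_p_clean, p_adv, reduction="batchmean")
```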
arXiv Detail & Related papers (2023-12-06T04:38:02Z)
- Confidence Is All You Need for MI Attacks [7.743155804758186]
We propose a new method to gauge a data point's membership in a model's training set.
During training, the model is essentially fit to the training data and may therefore generalize less well to unseen data.
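A minimal sketch of this confidence-based test, assuming a PyTorch classifier
(the threshold value is hypothetical and would be calibrated in practice):

```python
# Minimal sketch of a confidence-based membership test: models tend to be more
# confident on the data they were trained on.
import torch
import torch.nn.functional as F

@torch.no_grad()
def confidence_score(model, x):
    model.eval()
    return F.softmax(model(x.unsqueeze(0)), dim=1).max().item()

def is_member(model, x, threshold=0.95):
    # `threshold` is a hypothetical value; it would be calibrated on shadow
    # models or a reference population in practice.
    return confidence_score(model, x) >= threshold
```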
arXiv Detail & Related papers (2023-11-26T18:09:24Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Enhanced Membership Inference Attacks against Machine Learning Models [9.26208227402571]
Membership inference attacks are used to quantify the private information that a model leaks about the individual data points in its training set.
We derive new attack algorithms that can achieve a high AUC score while also highlighting the different factors that affect their performance.
Our algorithms capture a very precise approximation of privacy loss in models, and can be used as a tool to perform an accurate and informed estimation of privacy risk in machine learning models.
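Attacks of this kind are typically scored by the ROC AUC over membership
scores of known members and non-members; a small self-contained example (with
made-up scores, for illustration only) follows:

```python
# Minimal sketch: evaluating a membership inference attack with ROC AUC.
# AUC = 0.5 means the score cannot distinguish members from non-members.
import numpy as np
from sklearn.metrics import roc_auc_score

def attack_auc(member_scores, nonmember_scores):
    labels = np.concatenate([np.ones(len(member_scores)),
                             np.zeros(len(nonmember_scores))])
    scores = np.concatenate([member_scores, nonmember_scores])
    return roc_auc_score(labels, scores)

# Toy scores for illustration only; perfect separation gives AUC = 1.0.
print(attack_auc([0.99, 0.97, 0.90], [0.85, 0.80, 0.60]))  # 1.0
```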
arXiv Detail & Related papers (2021-11-18T13:31:22Z)
- Delving into Data: Effectively Substitute Training for Black-box Attack [84.85798059317963]
We propose a novel perspective on substitute training that focuses on designing the distribution of data used in the knowledge-stealing process.
The combination of the two proposed modules further boosts the consistency between the substitute model and the target model, which greatly improves the effectiveness of the adversarial attack.
arXiv Detail & Related papers (2021-04-26T07:26:29Z)
- Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
arXiv Detail & Related papers (2020-07-29T08:38:10Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
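A hedged sketch of such an instance-level attack, assuming a PyTorch embedding
encoder (names and steps are illustrative, not the paper's code): perturb the
input so its embedding drifts away from that of its clean copy, thereby
confusing its instance identity without using any labels.

```python
# Hedged sketch: a label-free, instance-level adversarial perturbation for
# contrastive learning. The attack pushes the embedding of x + delta away from
# the clean embedding of x, confusing the instance's identity.
import torch
import torch.nn.functional as F

def instance_adversarial(encoder, x, eps=8 / 255, steps=5):
    encoder.eval()
    with torch.no_grad():
        anchor = F.normalize(encoder(x), dim=1)        # clean embeddings (B, D)
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        z = F.normalize(encoder(x + delta), dim=1)
        loss = -(z * anchor).sum(dim=1).mean()         # drive cosine similarity down
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += (eps / steps) * grad.sign()
            delta.clamp_(-eps, eps)                    # keep perturbation bounded
    return (x + delta).detach()
```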
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.