User-Level Membership Inference Attack against Metric Embedding Learning
- URL: http://arxiv.org/abs/2203.02077v1
- Date: Fri, 4 Mar 2022 00:49:42 GMT
- Title: User-Level Membership Inference Attack against Metric Embedding Learning
- Authors: Guoyao Li, Shahbaz Rezaei, and Xin Liu
- Abstract summary: Membership inference (MI) determines if a sample was part of a victim model's training set.
In this paper, we develop a user-level MI attack where the goal is to find if any sample from the target user has been used during training.
- Score: 8.414720636874106
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Membership inference (MI) determines if a sample was part of a victim model's
training set. Recent developments in MI attacks focus on record-level membership
inference, which limits their applicability in many real-world scenarios. For
example, in the person re-identification task, the attacker (or investigator)
is interested in determining if a user's images have been used during training
or not. However, the exact training images might not be accessible to the
attacker. In this paper, we develop a user-level MI attack whose goal is to
determine whether any sample from the target user was used during training, even
when no exact training sample is available to the attacker. We focus on metric
embedding learning due to its dominance in person re-identification, where
a user-level MI attack is more sensible. We conduct an extensive evaluation on
several datasets and show that our approach achieves high accuracy on the
user-level MI task.
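To make the attack setting concrete, below is a minimal sketch of one plausible user-level membership signal for a metric embedding model. It illustrates the intuition only and is not the attack proposed in this paper: the embedding function `embed`, the reference population of users assumed to be non-members, and the quantile-based threshold are all assumptions introduced here.

```python
# Illustrative only: a hypothetical user-level membership signal for a metric
# embedding model. `embed` maps a single image (e.g., a NumPy array) to an
# embedding vector; it stands in for the victim model.
import numpy as np

def intra_user_distance(embed, user_images):
    """Average pairwise L2 distance between embeddings of one user's images."""
    z = np.stack([embed(x) for x in user_images])    # shape (n, d)
    diffs = z[:, None, :] - z[None, :, :]            # shape (n, n, d)
    dists = np.linalg.norm(diffs, axis=-1)           # shape (n, n)
    n = len(user_images)
    return dists.sum() / (n * (n - 1))               # mean over off-diagonal pairs

def user_level_mi(embed, target_images, reference_users, quantile=0.1):
    """Guess 'member' if the target user's images embed unusually close together
    compared to a reference population of users assumed to be non-members."""
    target_score = intra_user_distance(embed, target_images)
    ref_scores = np.array([intra_user_distance(embed, imgs)
                           for imgs in reference_users])
    threshold = np.quantile(ref_scores, quantile)    # hypothetical calibration
    return target_score < threshold
```

The compactness statistic and the quantile calibration are placeholders; the paper's actual attack may rely on a different statistic or a trained attack model, and is evaluated on several datasets.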
Related papers
- Blind Baselines Beat Membership Inference Attacks for Foundation Models [24.010279957557252]
Membership inference (MI) attacks try to determine if a data sample was used to train a machine learning model.
For foundation models trained on unknown Web data, MI attacks can be used to detect copyrighted training materials, measure test set contamination, or audit machine unlearning.
We show that evaluations of MI attacks for foundation models are flawed, because they sample members and non-members from different distributions.
arXiv Detail & Related papers (2024-06-23T19:40:11Z) - Do Membership Inference Attacks Work on Large Language Models? [141.2019867466968]
Membership inference attacks (MIAs) attempt to predict whether a particular datapoint is a member of a target model's training data.
We perform a large-scale evaluation of MIAs over a suite of language models trained on the Pile, ranging from 160M to 12B parameters.
We find that MIAs barely outperform random guessing for most settings across varying LLM sizes and domains.
arXiv Detail & Related papers (2024-02-12T17:52:05Z) - PRAT: PRofiling Adversarial aTtacks [52.693011665938734]
We introduce the novel problem of PRofiling Adversarial aTtacks (PRAT).
Given an adversarial example, the objective of PRAT is to identify the attack used to generate it.
We use AID to devise a novel framework for the PRAT objective.
arXiv Detail & Related papers (2023-09-20T07:42:51Z) - Pseudo Label-Guided Model Inversion Attack via Conditional Generative
Adversarial Network [102.21368201494909]
Model inversion (MI) attacks have raised increasing concerns about privacy.
Recent MI attacks leverage a generative adversarial network (GAN) as an image prior to narrow the search space.
We propose the Pseudo Label-Guided MI (PLG-MI) attack via a conditional GAN (cGAN).
arXiv Detail & Related papers (2023-02-20T07:29:34Z) - On the Discredibility of Membership Inference Attacks [11.172550334631921]
Membership inference attacks are proposed to determine if a sample was part of the training set or not.
We show that MI models frequently misclassify neighboring nonmember samples of a member sample as members.
We argue that current membership inference attacks can identify memorized subpopulations, but they cannot reliably identify which exact sample in the subpopulation was used during the training.
arXiv Detail & Related papers (2022-12-06T01:48:27Z) - Membership Inference Attack Using Self Influence Functions [43.10140199124212]
Membership inference (MI) attacks aim to determine if a specific data sample was used to train a machine learning model.
We present a novel MI attack that employs influence functions, or more specifically the samples' self-influence scores, to perform the MI prediction.
Our attack method achieves new state-of-the-art results for both training with and without data augmentations.
arXiv Detail & Related papers (2022-05-26T23:52:26Z) - Identifying a Training-Set Attack's Target Using Renormalized Influence
Estimation [11.663072799764542]
This work proposes the task of target identification, which determines whether a specific test instance is the target of a training-set attack.
Rather than focusing on a single attack method or data modality, we build on influence estimation, which quantifies each training instance's contribution to a model's prediction.
arXiv Detail & Related papers (2022-01-25T02:36:34Z) - Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z) - Sampling Attacks: Amplification of Membership Inference Attacks by
Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes labels is still susceptible to sampling attacks, and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z) - How Does Data Augmentation Affect Privacy in Machine Learning? [94.52721115660626]
We propose new MI attacks that utilize the information of augmented data.
We establish the optimal membership inference when the model is trained with augmented data.
arXiv Detail & Related papers (2020-07-21T02:21:10Z) - On the Difficulty of Membership Inference Attacks [11.172550334631921]
Recent studies propose membership inference (MI) attacks on deep models.
Despite their apparent success, these studies only report accuracy, precision, and recall of the positive (member) class.
We show that the way MI attack performance has been reported is often misleading, because these attacks suffer from a high false positive rate, or false alarm rate (FAR), that has not been reported (see the sketch after this list).
arXiv Detail & Related papers (2020-05-27T23:09:17Z)