Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning
- URL: http://arxiv.org/abs/2310.03838v2
- Date: Tue, 16 Jan 2024 21:06:25 GMT
- Title: Chameleon: Increasing Label-Only Membership Leakage with Adaptive Poisoning
- Authors: Harsh Chaudhari, Giorgio Severi, Alina Oprea, Jonathan Ullman
- Abstract summary: Membership Inference (MI) attacks seek to determine whether a particular data sample was included in a model's training dataset.
We show that existing label-only MI attacks are ineffective at inferring membership in the low False Positive Rate regime.
We propose a new attack, Chameleon, that leverages a novel adaptive data poisoning strategy and an efficient query selection method.
- Score: 8.084254242380057
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The integration of machine learning (ML) in numerous critical applications
introduces a range of privacy concerns for individuals who provide their
datasets for model training. One such privacy risk is Membership Inference
(MI), in which an attacker seeks to determine whether a particular data sample
was included in the training dataset of a model. Current state-of-the-art MI
attacks capitalize on access to the model's predicted confidence scores to
successfully perform membership inference, and employ data poisoning to further
enhance their effectiveness. In this work, we focus on the less explored and
more realistic label-only setting, where the model provides only the predicted
label on a queried sample. We show that existing label-only MI attacks are
ineffective at inferring membership in the low False Positive Rate (FPR)
regime. To address this challenge, we propose a new attack, Chameleon, which
leverages a novel adaptive data poisoning strategy and an efficient query
selection method to achieve significantly more accurate membership inference
than existing label-only attacks, especially at low FPRs.
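To make the low-FPR criterion concrete, here is a minimal sketch, in Python, of how a membership inference attack is commonly scored in that regime: given per-sample attack scores for known members and non-members, report the true positive rate at a fixed, small false positive rate. This is a generic evaluation helper under assumed inputs, not part of the Chameleon attack itself; the function name and the example score distributions are illustrative.

```python
import numpy as np

def tpr_at_fixed_fpr(member_scores, nonmember_scores, target_fpr=0.001):
    """True positive rate of an MI attack at a fixed low false positive
    rate (e.g. 0.1%). Higher scores mean 'more likely a training member'."""
    member_scores = np.asarray(member_scores, dtype=float)
    nonmember_scores = np.asarray(nonmember_scores, dtype=float)

    best_tpr = 0.0
    # Sweep candidate thresholds and keep the best TPR whose FPR on
    # non-members stays at or below the target.
    for threshold in np.unique(np.concatenate([member_scores, nonmember_scores])):
        fpr = np.mean(nonmember_scores >= threshold)
        if fpr <= target_fpr:
            best_tpr = max(best_tpr, np.mean(member_scores >= threshold))
    return best_tpr

# Hypothetical usage: scores that separate members from non-members only
# on average can still yield a near-zero TPR at 0.1% FPR, which is the
# failure mode of existing label-only attacks described above.
rng = np.random.default_rng(0)
members = rng.normal(0.6, 0.2, size=2000)     # assumed attack scores
nonmembers = rng.normal(0.5, 0.2, size=2000)
print(tpr_at_fixed_fpr(members, nonmembers, target_fpr=0.001))
```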
Related papers
- CLMIA: Membership Inference Attacks via Unsupervised Contrastive Learning [19.163930810721027]
Membership Inference Attacks (MIAs) determine whether a data sample was used to train a machine learning model.
In this paper, we propose a new attack method called CLMIA, which uses unsupervised contrastive learning to train an attack model without using extra membership status information.
arXiv Detail & Related papers (2024-11-17T18:25:01Z)
- A Method to Facilitate Membership Inference Attacks in Deep Learning Models [5.724311218570013]
We demonstrate a new form of membership inference attack that is strictly more powerful than prior art.
Our attack empowers the adversary to reliably de-identify all the training samples.
We show that these models can effectively disguise the amplified membership leakage under common membership privacy auditing.
arXiv Detail & Related papers (2024-07-02T03:33:42Z)
- Can We Trust the Unlabeled Target Data? Towards Backdoor Attack and Defense on Model Adaptation [120.42853706967188]
We explore potential backdoor attacks on model adaptation launched through well-designed poisoned target data.
We propose a plug-and-play method named MixAdapt that can be combined with existing adaptation algorithms.
arXiv Detail & Related papers (2024-01-11T16:42:10Z)
- Confidence Is All You Need for MI Attacks [7.743155804758186]
We propose a new method to gauge a data point's membership in a model's training set.
During training, the model is essentially being 'fit' to the training data and might face particular difficulties in generalizing to unseen data.
arXiv Detail & Related papers (2023-11-26T18:09:24Z)
- Unstoppable Attack: Label-Only Model Inversion via Conditional Diffusion Model [14.834360664780709]
Model inversion attacks (MIAs) aim to recover private data from the inaccessible training sets of deep learning models.
This paper develops a novel MIA method, leveraging a conditional diffusion model (CDM) to recover representative samples under the target label.
Experimental results show that this method can generate samples that are similar to and accurate for the target label, outperforming the generators used in previous approaches.
arXiv Detail & Related papers (2023-07-17T12:14:24Z)
- Membership Inference Attacks against Synthetic Data through Overfitting Detection [84.02632160692995]
We argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model.
arXiv Detail & Related papers (2023-02-24T11:27:39Z)
- Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries [53.222218035435006]
We use adversarial tools to optimize for queries that are discriminative and diverse.
Our improvements achieve significantly more accurate membership inference than existing methods.
arXiv Detail & Related papers (2022-10-19T17:46:50Z)
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that publishes only labels is still susceptible to sampling attacks, and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z)
- Label-Only Membership Inference Attacks [67.46072950620247]
We introduce label-only membership inference attacks.
Our attacks evaluate the robustness of a model's predicted labels under perturbations; a minimal illustration of this idea appears in the sketch after this list.
We find that differential privacy and (strong) L2 regularization are the only known defense strategies.
arXiv Detail & Related papers (2020-07-28T15:44:31Z)
- How Does Data Augmentation Affect Privacy in Machine Learning? [94.52721115660626]
We propose new MI attacks that exploit the information in augmented data.
We establish the optimal membership inference when the model is trained with augmented data.
arXiv Detail & Related papers (2020-07-21T02:21:10Z)
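As referenced in the Label-Only Membership Inference Attacks entry above, here is a minimal sketch of the perturbation-robustness idea behind label-only attacks: score a sample by how often the target model keeps predicting the correct label on slightly perturbed copies, using only hard-label queries. The `query_label` oracle and the Gaussian perturbation are assumptions for illustration; this is not the Chameleon attack and omits its adaptive poisoning and query selection steps.

```python
import numpy as np

def label_only_robustness_score(query_label, x, true_label,
                                noise_scale=0.05, n_queries=64, seed=0):
    """Fraction of noisy copies of x that the target model still labels as
    true_label, using only hard-label queries. Training members tend to be
    classified more robustly, so a higher score suggests membership."""
    rng = np.random.default_rng(seed)
    correct = 0
    for _ in range(n_queries):
        x_noisy = x + rng.normal(scale=noise_scale, size=x.shape)
        if query_label(x_noisy) == true_label:  # label-only oracle
            correct += 1
    return correct / n_queries
```

The resulting scores can then be thresholded and evaluated with a helper like the `tpr_at_fixed_fpr` sketch earlier to measure attack performance in the low-FPR regime.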
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.