On the Vulnerability of Data Points under Multiple Membership Inference
Attacks and Target Models
- URL: http://arxiv.org/abs/2210.16258v1
- Date: Fri, 28 Oct 2022 16:50:21 GMT
- Title: On the Vulnerability of Data Points under Multiple Membership Inference
Attacks and Target Models
- Authors: Mauro Conti, Jiaxin Li, and Stjepan Picek
- Abstract summary: Membership Inference Attacks (MIAs) infer whether a data point is in the training data of a machine learning model.
This paper defines new metrics that reflect the actual vulnerability of data points under multiple MIAs and target models.
To support our analysis, we implement 54 MIAs, whose average attack accuracy ranges from 0.5 to 0.9, on our scalable and flexible platform.
- Score: 30.697733159196044
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Membership Inference Attacks (MIAs) infer whether a data point is in the
training data of a machine learning model. This is a privacy threat whenever a data
point's presence in the training data is itself sensitive information. An MIA correctly
infers some data points as members or non-members of the training data; intuitively,
the data points an MIA detects accurately are vulnerable. Since such data points may
appear in different target models and be exposed to multiple MIAs, their vulnerability
under multiple MIAs and target models is worth exploring.
This paper defines new metrics that reflect the actual vulnerability of data points
and capture vulnerable data points under multiple MIAs and target models. Our analysis
shows that an MIA tends to infer certain data points correctly even when its overall
inference performance is low. To support the analysis, we implement 54 MIAs, whose
average attack accuracy ranges from 0.5 to 0.9, on our scalable and flexible platform,
the Membership Inference Attacks Platform (VMIAP). Furthermore, previous methods are
unsuitable for finding vulnerable data points under multiple MIAs and different target
models. Finally, we observe that vulnerability is not an intrinsic characteristic of a
data point but depends on the MIA and the target model.
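To make the idea concrete, the sketch below scores each data point by the fraction of (MIA, target model) pairs that infer its membership correctly. The aggregation rule, the array shapes, and the 0.9 cutoff are illustrative assumptions for exposition only; the paper's actual metric definitions are given in the full text, not in the abstract.

```python
import numpy as np

def vulnerability_scores(predictions, membership):
    """Fraction of (attack, model) pairs that infer each point correctly.

    predictions: (n_attacks, n_models, n_points) array of 0/1 member guesses,
        where predictions[a, m, i] is MIA a's guess for point i against model m.
    membership: (n_models, n_points) array of true 0/1 membership labels.
    Returns a (n_points,) array; values near 1 mean a point is consistently
    vulnerable, values near 0.5 mean the attacks do no better than coin flips.
    """
    correct = predictions == membership[None, :, :]  # broadcast over attacks
    return correct.mean(axis=(0, 1))

# Toy usage: 54 attacks, 3 target models, 1000 candidate data points.
rng = np.random.default_rng(0)
membership = rng.integers(0, 2, size=(3, 1000))
predictions = rng.integers(0, 2, size=(54, 3, 1000))
scores = vulnerability_scores(predictions, membership)
print(np.where(scores > 0.9)[0])  # indices of the most consistently exposed points
```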
Related papers
- Subject Data Auditing via Source Inference Attack in Cross-Silo Federated Learning [23.205866835083455]
Source Inference Attack (SIA) in Federated Learning (FL) aims to identify which client used a target data point for local model training.
Subject Membership Inference Attack (SMIA) attempts to infer whether any client utilizes data points from a target subject in cross-silo FL.
We propose a Subject-Level Source Inference Attack (SLSIA) by removing the critical constraint in SIA that only one client can use a target data point.
arXiv Detail & Related papers (2024-09-28T17:27:34Z) - Range Membership Inference Attacks [17.28638946021444]
We introduce the class of range membership inference attacks (RaMIAs), testing if the model was trained on any data in a specified range.
We show that RaMIAs can capture privacy loss more accurately and comprehensively than MIAs on various types of data.
arXiv Detail & Related papers (2024-08-09T15:39:06Z) - Evaluating Membership Inference Attacks and Defenses in Federated
Learning [23.080346952364884]
Membership Inference Attacks (MIAs) pose a growing threat to privacy preservation in federated learning.
This paper conducts an evaluation of existing MIAs and corresponding defense strategies.
arXiv Detail & Related papers (2024-02-09T09:58:35Z) - MIA-BAD: An Approach for Enhancing Membership Inference Attack and its
Mitigation with Federated Learning [6.510488168434277]
The membership inference attack (MIA) is a popular paradigm for compromising the privacy of a machine learning (ML) model.
We propose an enhanced Membership Inference Attack with the Batch-wise generated Attack dataset (MIA-BAD).
We show that training an ML model through FL has some distinct advantages and investigate how the threat introduced by the proposed MIA-BAD approach can be mitigated with FL approaches.
arXiv Detail & Related papers (2023-11-28T06:51:26Z) - Assessing Privacy Risks in Language Models: A Case Study on
Summarization Tasks [65.21536453075275]
We focus on the summarization task and investigate the membership inference (MI) attack.
We exploit text similarity and the model's resistance to document modifications as potential MI signals.
We discuss several safeguards for training summarization models to protect against MI attacks and examine the inherent trade-off between privacy and utility.
arXiv Detail & Related papers (2023-10-20T05:44:39Z) - Practical Membership Inference Attacks Against Large-Scale Multi-Modal
Models: A Pilot Study [17.421886085918608]
Membership inference attacks (MIAs) aim to infer whether a data point has been used to train a machine learning model.
These attacks can be employed to identify potential privacy vulnerabilities and detect unauthorized use of personal data.
This paper takes a first step towards developing practical MIAs against large-scale multi-modal models.
arXiv Detail & Related papers (2023-09-29T19:38:40Z) - Avoid Adversarial Adaption in Federated Learning by Multi-Metric
Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Our proposed defense, MESAS, is the first that is robust against strong adaptive adversaries; it is effective in real-world data scenarios with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z) - Membership Inference Attacks against Synthetic Data through Overfitting
Detection [84.02632160692995]
We argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model (a minimal density-ratio sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-02-24T11:27:39Z) - Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z) - How Does Data Augmentation Affect Privacy in Machine Learning? [94.52721115660626]
We propose new MI attacks that utilize the information of augmented data.
We establish the optimal membership inference when the model is trained with augmented data.
arXiv Detail & Related papers (2020-07-21T02:21:10Z) - Stance Detection Benchmark: How Robust Is Your Stance Detection? [65.91772010586605]
Stance Detection (StD) aims to detect an author's stance towards a certain topic or claim.
We introduce a StD benchmark that learns from ten StD datasets of various domains in a multi-dataset learning setting.
Within this benchmark setup, we are able to present new state-of-the-art results on five of the datasets.
arXiv Detail & Related papers (2020-01-06T13:37:51Z)
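As referenced in the DOMIAS entry above, here is a minimal sketch of a density-ratio membership test against synthetic data, assuming the attacker holds generator samples and a reference population sample: a candidate point is flagged as a likely training member when it is markedly denser under the synthetic distribution than under the reference one. The KDE estimator, bandwidth, toy Gaussian data, and zero log-ratio threshold are illustrative assumptions, not the paper's exact procedure.

```python
# Illustrative density-ratio membership test in the spirit of DOMIAS.
# All data, bandwidths, and the decision threshold below are toy assumptions.
import numpy as np
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(0)
synthetic = rng.normal(0.0, 1.0, size=(2000, 2))   # stand-in for generator output
reference = rng.normal(0.0, 1.2, size=(2000, 2))   # stand-in for population data
candidates = rng.normal(0.0, 1.0, size=(10, 2))    # points whose membership is queried

kde_syn = KernelDensity(bandwidth=0.3).fit(synthetic)
kde_ref = KernelDensity(bandwidth=0.3).fit(reference)

# log p_synthetic(x) - log p_reference(x): large values suggest the generator is
# locally overfitted around x, i.e. x looks like a training member.
log_ratio = kde_syn.score_samples(candidates) - kde_ref.score_samples(candidates)
predicted_member = log_ratio > 0.0                  # illustrative threshold
print(predicted_member)
```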
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences.