Cascading and Proxy Membership Inference Attacks
- URL: http://arxiv.org/abs/2507.21412v1
- Date: Tue, 29 Jul 2025 00:46:09 GMT
- Title: Cascading and Proxy Membership Inference Attacks
- Authors: Yuntao Du, Jiacheng Li, Yuetian Chen, Kaiyuan Zhang, Zhizhen Yuan, Hanshen Xiao, Bruno Ribeiro, Ninghui Li
- Abstract summary: A Membership Inference Attack (MIA) assesses how much a trained machine learning model reveals about its training data. We classify existing MIAs as adaptive or non-adaptive, depending on whether the adversary is allowed to train shadow models on membership queries. In the adaptive setting, where the adversary can train shadow models after accessing query instances, we propose an attack-agnostic framework called Cascading Membership Inference Attack (CMIA). In the non-adaptive setting, where the adversary is restricted to training shadow models before obtaining membership queries, we introduce Proxy Membership Inference Attack (PMIA).
- Score: 17.7796850086875
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A Membership Inference Attack (MIA) assesses how much a trained machine learning model reveals about its training data by determining whether specific query instances were included in the dataset. We classify existing MIAs as adaptive or non-adaptive, depending on whether the adversary is allowed to train shadow models on membership queries. In the adaptive setting, where the adversary can train shadow models after accessing query instances, we highlight the importance of exploiting membership dependencies between instances and propose an attack-agnostic framework called Cascading Membership Inference Attack (CMIA), which incorporates membership dependencies via conditional shadow training to boost membership inference performance. In the non-adaptive setting, where the adversary is restricted to training shadow models before obtaining membership queries, we introduce Proxy Membership Inference Attack (PMIA). PMIA employs a proxy selection strategy that identifies samples with behaviors similar to those of the query instance and uses their behaviors in shadow models to perform a membership posterior odds test for membership inference. We provide theoretical analyses for both attacks, and extensive experimental results demonstrate that CMIA and PMIA substantially outperform existing MIAs in both settings, particularly in the low false-positive regime, which is crucial for evaluating privacy risks.
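The abstract describes PMIA's proxy-based posterior odds test only at a high level. The following minimal sketch illustrates the general idea under stated assumptions; the function names, the use of per-example losses as the "behavior" signal, the Gaussian modeling of loss distributions, the pooling of proxy losses, and the uniform prior are all illustrative choices, not the paper's exact procedure.

```python
# Minimal PMIA-style sketch (assumptions, not the paper's exact algorithm):
# "behavior" = per-example loss; proxy IN/OUT losses modeled as Gaussians.
import numpy as np
from scipy.stats import norm

def select_proxies(query_losses, candidate_out_losses, k=16):
    """Pick the k candidate proxies whose shadow-model loss profiles (computed
    with every sample held OUT of shadow training) are closest to the query's.
    query_losses: shape (n_shadow,); candidate_out_losses: (n_candidates, n_shadow)."""
    dists = np.linalg.norm(candidate_out_losses - query_losses, axis=1)
    return np.argsort(dists)[:k]

def posterior_odds(target_loss, in_losses, out_losses, prior_in=0.5):
    """Posterior odds that the query was a member of the target model's training
    set, comparing its target-model loss against the proxies' pooled IN vs OUT
    shadow-loss distributions (Gaussian approximation)."""
    p_in = norm.pdf(target_loss, loc=in_losses.mean(), scale=in_losses.std() + 1e-8)
    p_out = norm.pdf(target_loss, loc=out_losses.mean(), scale=out_losses.std() + 1e-8)
    return (p_in * prior_in) / (p_out * (1.0 - prior_in) + 1e-12)

def pmia_score(target_loss, query_losses, candidate_out_losses, candidate_in_losses, k=16):
    """End-to-end score: select proxies that behave like the query, pool their
    IN/OUT shadow losses, and run the odds test on the target model's loss.
    candidate_in_losses[i] holds candidate i's losses in shadow models that included it."""
    idx = select_proxies(query_losses, candidate_out_losses, k)
    return posterior_odds(target_loss,
                          candidate_in_losses[idx].ravel(),
                          candidate_out_losses[idx].ravel())
```

The adaptive CMIA attack is not sketched here: the abstract indicates only that shadow training is conditioned on membership hypotheses for other query instances, without procedural detail.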
Related papers
- Privacy Leaks by Adversaries: Adversarial Iterations for Membership Inference Attack [21.396030274654073]
We propose IMIA, a novel attack strategy that utilizes the process of generating adversarial samples to infer membership. We conduct experiments across multiple models and datasets, and our results demonstrate that the number of iterations for generating an adversarial sample is a reliable feature for membership inference.
arXiv Detail & Related papers (2025-06-03T10:09:24Z) - Con-ReCall: Detecting Pre-training Data in LLMs via Contrastive Decoding [118.75567341513897]
Existing methods typically analyze target text in isolation or solely with non-member contexts. We propose Con-ReCall, a novel approach that leverages the asymmetric distributional shifts induced by member and non-member contexts.
arXiv Detail & Related papers (2024-09-05T09:10:38Z) - ReCaLL: Membership Inference via Relative Conditional Log-Likelihoods [56.073335779595475]
We propose ReCaLL (Relative Conditional Log-Likelihood) to detect pretraining data by leveraging conditional language modeling capabilities. Our empirical findings show that conditioning member data on non-member prefixes induces a larger decrease in log-likelihood compared to non-member data. We conduct comprehensive experiments and show that ReCaLL achieves state-of-the-art performance on the WikiMIA dataset.
arXiv Detail & Related papers (2024-06-23T00:23:13Z) - Membership Information Leakage in Federated Contrastive Learning [7.822625013699216]
Federated Contrastive Learning (FCL) represents a burgeoning approach for learning from decentralized unlabeled data.
FCL is susceptible to privacy risks, such as membership information leakage, stemming from its distributed nature.
This study delves into the feasibility of executing a membership inference attack on FCL and proposes a robust attack methodology.
arXiv Detail & Related papers (2024-03-06T19:53:25Z) - Assessing Privacy Risks in Language Models: A Case Study on Summarization Tasks [65.21536453075275]
We focus on the summarization task and investigate the membership inference (MI) attack.
We exploit text similarity and the model's resistance to document modifications as potential MI signals.
We discuss several safeguards for training summarization models to protect against MI attacks and examine the inherent trade-off between privacy and utility.
arXiv Detail & Related papers (2023-10-20T05:44:39Z) - MIAShield: Defending Membership Inference Attacks via Preemptive Exclusion of Members [9.301268830193072]
In membership inference attacks, an adversary observes the predictions of a model to determine whether a sample is part of the model's training data.
We propose MIAShield, a new MIA defense based on preemptive exclusion of member samples instead of masking the presence of a member.
We show that MIAShield effectively mitigates membership inference for a wide range of MIAs, achieves far better privacy-utility trade-off compared with state-of-the-art defenses, and remains resilient against an adaptive adversary.
arXiv Detail & Related papers (2022-03-02T07:53:21Z) - Membership Inference Attacks on Machine Learning: A Survey [6.468846906231666]
A membership inference attack aims to identify whether a data sample was used to train a machine learning model.
It can raise severe privacy risks as the membership can reveal an individual's sensitive information.
We present the first comprehensive survey of membership inference attacks.
arXiv Detail & Related papers (2021-03-14T06:10:47Z) - Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries [74.59376038272661]
We introduce the sampling attack, a novel membership inference technique that, unlike other standard membership adversaries, works under the severe restriction of having no access to the victim model's scores.
We show that a victim model that only publishes the labels is still susceptible to sampling attacks and the adversary can recover up to 100% of its performance.
For defense, we choose differential privacy in the form of gradient perturbation during the training of the victim model as well as output perturbation at prediction time.
arXiv Detail & Related papers (2020-09-01T12:54:54Z) - Membership Leakage in Label-Only Exposures [10.875144776014533]
We propose decision-based membership inference attacks against machine learning models.
In particular, we develop two types of decision-based attacks, namely the transfer attack and the boundary attack.
We also present new insights on the success of membership inference based on quantitative and qualitative analysis.
arXiv Detail & Related papers (2020-07-30T15:27:55Z) - Adaptive Adversarial Logits Pairing [65.51670200266913]
The adversarial training solution Adversarial Logits Pairing (ALP) tends to rely on fewer high-contribution features compared with vulnerable ones.
Motivated by these observations, we design an Adaptive Adversarial Logits Pairing (AALP) solution by modifying the training process and training target of ALP.
AALP consists of an adaptive feature optimization module with Guided Dropout to systematically pursue fewer high-contribution features.
arXiv Detail & Related papers (2020-05-25T03:12:20Z) - Revisiting Membership Inference Under Realistic Assumptions [87.13552321332988]
We study membership inference in settings where some of the assumptions typically used in previous research are relaxed.
This setting is more realistic than the balanced prior setting typically considered by researchers.
We develop a new inference attack based on the intuition that inputs corresponding to training set members will be near a local minimum in the loss function (an illustrative sketch of this perturbation test appears after this list).
arXiv Detail & Related papers (2020-05-21T20:17:42Z)
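For the last related entry above ("Revisiting Membership Inference Under Realistic Assumptions"), the local-minimum intuition can be turned into a simple perturbation test. The sketch below is an illustrative reconstruction under stated assumptions, not that paper's exact algorithm; `loss_fn`, the noise scale, and the perturbation count are all assumed names and values.

```python
# Illustrative perturbation test for the "members sit near a loss local minimum"
# intuition: if most small random perturbations of x increase the target model's
# loss, x is more likely to have been a training member. Parameters are assumptions.
import numpy as np

def perturbation_member_score(loss_fn, x, y, n_perturb=64, sigma=0.01, rng=None):
    """Return the fraction of Gaussian perturbations of x that increase the loss.
    loss_fn(x, y) -> scalar loss of the target model on a single example."""
    rng = np.random.default_rng() if rng is None else rng
    base_loss = loss_fn(x, y)
    increases = 0
    for _ in range(n_perturb):
        noise = rng.normal(0.0, sigma, size=np.shape(x))
        if loss_fn(x + noise, y) > base_loss:
            increases += 1
    return increases / n_perturb  # closer to 1.0 => more member-like
```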
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.