SoK: Challenges in Tabular Membership Inference Attacks
- URL: http://arxiv.org/abs/2601.15874v1
- Date: Thu, 22 Jan 2026 11:30:11 GMT
- Title: SoK: Challenges in Tabular Membership Inference Attacks
- Authors: Cristina Pêra, Tânia Carvalho, Maxime Cordy, Luís Antunes
- Abstract summary: Membership Inference Attacks (MIAs) are a dominant approach for evaluating privacy in machine learning applications. In this paper, we provide an extensive review and analysis of MIAs considering two main learning paradigms: centralized and federated learning. We show that even attacks with limited performance can still successfully expose a large portion of single-outs.
- Score: 10.848042721721491
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Membership Inference Attacks (MIAs) are currently a dominant approach for evaluating privacy in machine learning applications. Despite their significance in identifying records belonging to the training dataset, several concerns remain unexplored, particularly with regard to tabular data. In this paper, we first provide an extensive review and analysis of MIAs considering two main learning paradigms: centralized and federated learning. We extend and refine the taxonomy for both. Second, we demonstrate the efficacy of MIAs on tabular data using several attack strategies, including defenses. Furthermore, in a federated learning scenario, we consider the threat posed by an outsider adversary, which is often neglected. Third, we demonstrate the high vulnerability of single-outs (records with a unique signature) to MIAs. Lastly, we explore how MIAs transfer across model architectures. Our results point towards generally poor performance of these attacks on tabular data, which contrasts with previous state-of-the-art results. Notably, even attacks with limited performance can still successfully expose a large portion of single-outs. Moreover, our findings suggest that using different surrogate models makes MIAs more effective.
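To make the attack model concrete, below is a minimal sketch of a loss-threshold MIA, the simplest strategy in the family evaluated here: records the model fits unusually well are guessed to be training members. The synthetic losses and mean-loss threshold are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def loss_threshold_mia(member_losses, nonmember_losses, threshold):
    """Predict 'member' whenever a record's loss falls below the threshold.

    Records the model fits well (low loss) are guessed to be training
    members -- the classic baseline loss-threshold attack.
    """
    tpr = (member_losses < threshold).mean()     # members correctly exposed
    fpr = (nonmember_losses < threshold).mean()  # non-members falsely accused
    return tpr, fpr

# Illustrative synthetic per-record losses: members tend to be lower.
rng = np.random.default_rng(0)
member_losses = rng.gamma(shape=2.0, scale=0.10, size=1000)
nonmember_losses = rng.gamma(shape=2.0, scale=0.15, size=1000)

# A common heuristic threshold: the mean loss over known members.
tpr, fpr = loss_threshold_mia(member_losses, nonmember_losses,
                              threshold=member_losses.mean())
print(f"TPR={tpr:.2f}, FPR={fpr:.2f}")
```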
Related papers
- Debiased Dual-Invariant Defense for Adversarially Robust Person Re-Identification [52.63017280231648]
Person re-identification (ReID) is a fundamental task in many real-world applications such as pedestrian trajectory tracking. Person ReID models are highly susceptible to adversarial attacks, where imperceptible perturbations to pedestrian images can cause entirely incorrect predictions. We propose a dual-invariant defense framework composed of two main phases.
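The perturbation mechanism this summary alludes to can be illustrated with a generic single-step attack (FGSM); this is a standard sketch, not the paper's defense, and `model` is a placeholder for any differentiable classifier.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=8 / 255):
    """One FGSM step: shift each pixel by +/-epsilon along the loss gradient.

    For small epsilon the result stays visually indistinguishable from
    `x`, yet can flip the model's prediction.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()   # step that maximally increases loss
    return x_adv.clamp(0.0, 1.0).detach()
```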
arXiv Detail & Related papers (2025-11-13T03:56:40Z)
- Empirical Comparison of Membership Inference Attacks in Deep Transfer Learning [4.877819365490361]
Membership inference attacks (MIAs) provide an empirical estimate of the privacy leakage of machine learning models. We compare the performance of diverse MIAs in transfer learning settings to help practitioners identify the most efficient attacks for privacy risk evaluation.
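A comparison of this kind reduces to scoring every record with each attack and ranking attacks by a common metric. A minimal sketch, with two toy loss-based attacks standing in for the diverse MIAs the paper compares:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def compare_attacks(attacks, member_losses, nonmember_losses):
    """Rank attacks by AUC; each attack maps per-record losses to a
    membership score (higher means 'more likely a member')."""
    y = np.r_[np.ones(len(member_losses)), np.zeros(len(nonmember_losses))]
    all_losses = np.r_[member_losses, nonmember_losses]
    aucs = {name: roc_auc_score(y, attack(all_losses))
            for name, attack in attacks.items()}
    return dict(sorted(aucs.items(), key=lambda kv: -kv[1]))

# Two toy loss-based attacks (illustrative, not the paper's):
attacks = {
    "neg_loss": lambda losses: -losses,                    # lower loss => member
    "near_typical": lambda losses: -np.abs(losses - 0.2),  # near a guessed member loss
}
rng = np.random.default_rng(1)
print(compare_attacks(attacks,
                      rng.gamma(2.0, 0.10, 500),   # member losses
                      rng.gamma(2.0, 0.15, 500)))  # non-member losses
```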
arXiv Detail & Related papers (2025-10-07T10:21:05Z)
- A Systematic Survey of Model Extraction Attacks and Defenses: State-of-the-Art and Perspectives [65.3369988566853]
Recent studies have demonstrated that adversaries can replicate a target model's functionality. Model Extraction Attacks (MEAs) pose threats to intellectual property, privacy, and system security. We propose a novel taxonomy that classifies MEAs according to attack mechanisms, defense approaches, and computing environments.
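The core extraction loop is simple to sketch: query the victim, collect its labels, and fit a surrogate. The victim, query distribution, and model choices below are illustrative stand-ins, not the survey's taxonomy:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# A stand-in victim model, playing the role of a remote prediction API.
X_priv = rng.normal(size=(2000, 8))
y_priv = (X_priv[:, :4].sum(axis=1) > 0).astype(int)
victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X_priv, y_priv)

# The attacker never sees X_priv: it queries the API on its own inputs
# and trains a surrogate on the returned labels.
X_query = rng.normal(size=(5000, 8))
surrogate = LogisticRegression(max_iter=1000).fit(X_query, victim.predict(X_query))

X_test = rng.normal(size=(1000, 8))
agreement = (surrogate.predict(X_test) == victim.predict(X_test)).mean()
print(f"surrogate/victim agreement: {agreement:.2%}")
```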
arXiv Detail & Related papers (2025-08-20T19:49:59Z)
- Membership Inference Attacks as Privacy Tools: Reliability, Disparity and Ensemble [12.451650669707167]
Membership inference attacks (MIAs) pose a significant threat to the privacy of machine learning models. Prior MIA research has primarily focused on aggregate performance metrics such as AUC, accuracy, and TPR@low FPR, which can mask disparities in which records different attacks expose. These disparities have crucial implications for the reliability and completeness of MIAs as privacy evaluation tools.
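TPR@low FPR deserves a concrete definition, since it drives much recent MIA evaluation: it measures how many members an attack exposes while keeping false accusations of non-members below a small budget. A minimal sketch with synthetic attack scores:

```python
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(y_true, scores, target_fpr=0.001):
    """TPR at a fixed low FPR: the fraction of members exposed while
    false accusations of non-members stay within the given budget."""
    fpr, tpr, _ = roc_curve(y_true, scores)
    return tpr[fpr <= target_fpr].max()  # best operating point in budget

# Synthetic attack scores: members (label 1) score higher on average.
rng = np.random.default_rng(0)
y = np.r_[np.ones(5000), np.zeros(5000)]
scores = np.r_[rng.normal(1.0, 1.0, 5000), rng.normal(0.0, 1.0, 5000)]
print(f"TPR@0.1%FPR = {tpr_at_fpr(y, scores):.4f}")
```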
arXiv Detail & Related papers (2025-06-16T20:22:07Z)
- Revisiting Model Inversion Evaluation: From Misleading Standards to Reliable Privacy Assessment [63.07424521895492]
Model Inversion (MI) attacks aim to reconstruct information from private training data by exploiting access to a machine learning model T. The standard evaluation framework for such attacks relies on an evaluation model E, trained under the same task design as T. This framework has become the de facto standard for assessing progress in MI research, used across nearly all recent MI attacks and defenses without question.
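A sketch of the evaluation protocol in question may help: the evaluation model E acts as an automated judge of whether a reconstruction reveals the right identity. The functions and arguments below are placeholders, not the paper's code:

```python
def mi_attack_accuracy(T, E, private_classes, invert):
    """Standard MI evaluation loop (sketch): the evaluation model E,
    trained on the same task as the target T, judges whether each
    reconstruction depicts the intended identity.

    T, E, `invert`, and `private_classes` are all placeholders here.
    """
    hits = 0
    for c in private_classes:
        x_rec = invert(T, target_class=c)    # attacker's reconstruction
        hits += int(E(x_rec).argmax() == c)  # E "verifies" the identity
    return hits / len(private_classes)
```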
arXiv Detail & Related papers (2025-05-06T13:32:12Z)
- Membership Inference Attacks on Large-Scale Models: A Survey [5.795582095405318]
Membership Inference Attacks (MIAs) are an important technique for exposing or assessing privacy risks. We provide the first comprehensive review of MIAs targeting Large Language Models (LLMs) and Large Multimodal Models (LMMs). Unlike prior surveys, we further examine MIAs across multiple stages of the model pipeline, including pre-training, fine-tuning, alignment, and Retrieval-Augmented Generation (RAG).
arXiv Detail & Related papers (2025-03-25T04:11:47Z)
- Model Inversion Attacks: A Survey of Approaches and Countermeasures [69.18069679327263]
Recently, a new type of privacy attack, the model inversion attack (MIA), has emerged; it aims to extract sensitive features of private training data. Despite its significance, there is a lack of systematic studies that provide a comprehensive overview and deeper insights into MIAs. This survey aims to summarize up-to-date MIA methods, covering both attacks and defenses.
arXiv Detail & Related papers (2024-11-15T08:09:28Z)
- FedMIA: An Effective Membership Inference Attack Exploiting "All for One" Principle in Federated Learning [17.141646895576145]
Federated Learning (FL) is a promising approach for training machine learning models on decentralized data. Membership Inference Attacks (MIAs) aim to determine whether a specific data point belongs to a target client's training set. We introduce a three-step Membership Inference Attack (MIA) method, called FedMIA, which follows the "all for one" principle, leveraging updates from all clients across multiple communication rounds to enhance MIA effectiveness.
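A heavily simplified sketch of the "all for one" idea: compare how well a candidate record's gradient aligns with the target client's updates, calibrated against the updates of all other clients across rounds. The cosine-based score is an assumption for illustration, not FedMIA's exact statistic:

```python
import numpy as np

def fedmia_style_score(grad_x, target_updates, other_updates):
    """Score a candidate record against the target client's updates,
    calibrated by the updates of all other clients across rounds.

    grad_x: flattened loss gradient at the candidate record
    target_updates / other_updates: lists of flattened model updates
    """
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    target_sim = np.mean([cos(grad_x, u) for u in target_updates])
    reference = np.array([cos(grad_x, u) for u in other_updates])
    # A member should align with its own client far more strongly than
    # with the cross-client reference distribution.
    return (target_sim - reference.mean()) / (reference.std() + 1e-12)
```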
arXiv Detail & Related papers (2024-02-09T09:58:35Z)
- Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study [17.421886085918608]
Membership inference attacks (MIAs) aim to infer whether a data point has been used to train a machine learning model.
These attacks can be employed to identify potential privacy vulnerabilities and detect unauthorized use of personal data.
This paper takes a first step towards developing practical MIAs against large-scale multi-modal models.
arXiv Detail & Related papers (2023-09-29T19:38:40Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Our defense, MESAS, is the first to be robust against strong adaptive adversaries and effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
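The multi-metric idea can be sketched as follows: score each client update on several statistics at once and drop outliers on any of them, so an adaptive adversary must evade every metric simultaneously. The metrics and threshold below are illustrative, not MESAS's actual set:

```python
import numpy as np

def multi_metric_filter(updates, z_thresh=2.5):
    """Flag client updates that are outliers on ANY of several simple
    statistics (illustrative metrics, not MESAS's actual set)."""
    metrics = np.array([
        [np.linalg.norm(u), u.mean(), u.var(), np.abs(u).max()]
        for u in updates
    ])
    z = np.abs(metrics - metrics.mean(axis=0)) / (metrics.std(axis=0) + 1e-12)
    return (z < z_thresh).all(axis=1)  # keep only clients within budget everywhere

rng = np.random.default_rng(0)
benign = [rng.normal(0, 0.1, 100) for _ in range(9)]
poisoned = [rng.normal(0.5, 0.5, 100)]          # crude poisoned update
print(multi_metric_filter(benign + poisoned))   # last client gets flagged
```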
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- RelaxLoss: Defending Membership Inference Attacks without Losing Utility [68.48117818874155]
We propose a novel training framework based on a relaxed loss with a more achievable learning target.
RelaxLoss is applicable to any classification model with added benefits of easy implementation and negligible overhead.
Our approach consistently outperforms state-of-the-art defense mechanisms in terms of resilience against MIAs.
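The core idea of a relaxed, more achievable learning target can be sketched as a training step that stops minimizing once the loss falls below a floor, so member losses never become telltale small. This is a simplified sketch of the intuition, not the authors' full method:

```python
import torch
import torch.nn.functional as F

def relaxed_loss_step(model, x, y, alpha=0.5):
    """One training step towards a relaxed target: descend while the
    batch loss is above alpha, ascend once it drops below.
    (Sketch of the intuition only; the caller applies the optimizer step.)
    """
    model.zero_grad()
    loss = F.cross_entropy(model(x), y)
    signed = loss if loss.item() > alpha else -loss
    signed.backward()
    return loss.item()
```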
arXiv Detail & Related papers (2022-07-12T19:34:47Z)
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on modular, re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
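The modular design can be sketched as a plug-in interface: every attack receives the same target-model handle and reports one risk score. The names below are illustrative, not ML-Doctor's actual API:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class RiskReport:
    attack: str
    score: float  # e.g., attack AUC or reconstruction quality

def assess(target_predict: Callable, attacks: Dict[str, Callable]) -> List[RiskReport]:
    """Run every registered attack against the same target-model handle."""
    return [RiskReport(name, run(target_predict)) for name, run in attacks.items()]

# Usage sketch (hypothetical attack functions):
#   assess(model.predict_proba, {"membership_inference": run_mia, ...})
```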
arXiv Detail & Related papers (2021-02-04T11:35:13Z)
- On the Effectiveness of Regularization Against Membership Inference Attacks [26.137849584503222]
Deep learning models often raise privacy concerns as they leak information about their training data.
This enables an adversary to determine whether a data point was in a model's training set by conducting a membership inference attack (MIA).
While many regularization mechanisms exist, their effectiveness against MIAs has not been studied systematically.
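A quick experiment in the paper's spirit: strengthen L2 regularization and watch the train/test loss gap, the signal loss-based MIAs exploit, shrink. The data and regularization strengths below are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 50))
y = (X[:, 0] + 0.5 * rng.normal(size=400) > 0).astype(int)
Xtr, ytr, Xte, yte = X[:200], y[:200], X[200:], y[200:]

for C in (100.0, 1.0, 0.01):             # smaller C = stronger L2 penalty
    clf = LogisticRegression(C=C, max_iter=1000).fit(Xtr, ytr)
    gap = (log_loss(yte, clf.predict_proba(Xte))
           - log_loss(ytr, clf.predict_proba(Xtr)))
    print(f"C={C:>6}: train/test loss gap = {gap:.3f}")
```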
arXiv Detail & Related papers (2020-06-09T15:17:21Z)