Winning the MIDST Challenge: New Membership Inference Attacks on Diffusion Models for Tabular Data Synthesis
- URL: http://arxiv.org/abs/2503.12008v1
- Date: Sat, 15 Mar 2025 06:13:27 GMT
- Title: Winning the MIDST Challenge: New Membership Inference Attacks on Diffusion Models for Tabular Data Synthesis
- Authors: Xiaoyu Wu, Yifei Pang, Terrance Liu, Steven Wu
- Abstract summary: Existing privacy evaluations often rely on heuristic metrics or weak membership inference attacks (MIA). In this work, we conduct a rigorous MIA study on diffusion-based tabular synthesis, revealing that state-of-the-art attacks designed for image models fail in this setting. Our method, implemented with a lightweight MLP, effectively learns membership signals, eliminating the need for manual optimization.
- Score: 10.682673935815547
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Tabular data synthesis using diffusion models has gained significant attention for its potential to balance data utility and privacy. However, existing privacy evaluations often rely on heuristic metrics or weak membership inference attacks (MIA), leaving privacy risks inadequately assessed. In this work, we conduct a rigorous MIA study on diffusion-based tabular synthesis, revealing that state-of-the-art attacks designed for image models fail in this setting. We identify noise initialization as a key factor influencing attack efficacy and propose a machine-learning-driven approach that leverages loss features across different noises and time steps. Our method, implemented with a lightweight MLP, effectively learns membership signals, eliminating the need for manual optimization. Experimental results from the MIDST Challenge @ SaTML 2025 demonstrate the effectiveness of our approach, securing first place across all tracks. Code is available at https://github.com/Nicholas0228/Tartan_Federer_MIDST.
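The abstract describes the attack as collecting diffusion loss features across different noise draws and time steps, then training a lightweight MLP to classify membership. A minimal sketch of that pipeline is below; it is not the authors' code (see the repository linked above for that). The diffusion model is mocked by a hypothetical `toy_diffusion_loss` stand-in that gives members slightly lower loss, and all names and parameters are illustrative assumptions.

```python
# Sketch of a loss-feature MIA: per record, gather denoising losses over
# several (noise seed, timestep) pairs, then fit a small MLP attack model.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def toy_diffusion_loss(record, t, noise):
    # Stand-in for the denoising loss ||eps - eps_theta(x_t, t)||^2.
    # Members get a slightly lower loss on average, mimicking memorization;
    # a real attack would query the victim diffusion model here.
    base = 1.0 / (1.0 + t)
    return base * (1.0 + 0.1 * float(noise @ noise) / noise.size) - 0.2 * record["member"]

def loss_features(record, timesteps, n_noises, dim=8):
    # Feature vector: one loss value per (noise seed, timestep) pair.
    feats = []
    for _ in range(n_noises):
        noise = rng.standard_normal(dim)
        for t in timesteps:
            feats.append(toy_diffusion_loss(record, t, noise))
    return np.array(feats)

timesteps = [10, 50, 100, 500]
records = [{"member": int(m)} for m in rng.integers(0, 2, size=400)]
X = np.stack([loss_features(r, timesteps, n_noises=4) for r in records])
y = np.array([r["member"] for r in records])

# The attack model itself is just a lightweight MLP over the loss features;
# no manual threshold tuning is needed.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
attack = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
attack.fit(X_tr, y_tr)
acc = attack.score(X_te, y_te)  # should clearly beat random guessing on this toy signal
```

The key design point mirrored here is that membership evidence is spread across noise draws and timesteps, so the classifier sees the whole loss profile rather than a single hand-picked statistic.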
Related papers
- How Breakable Is Privacy: Probing and Resisting Model Inversion Attacks in Collaborative Inference [9.092229145160763]
Collaborative inference improves computational efficiency for edge devices by transmitting intermediate features to cloud models. However, there is no established criterion for assessing model inversion attacks (MIAs). We propose SiftFunnel, a privacy-preserving framework that resists MIAs while maintaining usability.
arXiv Detail & Related papers (2025-01-01T13:00:01Z)
- Identify Backdoored Model in Federated Learning via Individual Unlearning [7.200910949076064]
Backdoor attacks present a significant threat to the robustness of Federated Learning (FL).
We propose MASA, a method that utilizes individual unlearning on local models to identify malicious models in FL.
To the best of our knowledge, this is the first work to leverage machine unlearning to identify malicious models in FL.
arXiv Detail & Related papers (2024-11-01T21:19:47Z)
- Transferable Adversarial Attacks on SAM and Its Downstream Models [87.23908485521439]
This paper explores the feasibility of adversarial attacks on various downstream models fine-tuned from the segment anything model (SAM).
To enhance the effectiveness of the adversarial attack towards models fine-tuned on unknown datasets, we propose a universal meta-initialization (UMI) algorithm.
arXiv Detail & Related papers (2024-10-26T15:04:04Z)
- Advancing the Robustness of Large Language Models through Self-Denoised Smoothing [50.54276872204319]
Large language models (LLMs) have achieved significant success, but their vulnerability to adversarial perturbations has raised considerable concerns.
We propose to leverage the multitasking nature of LLMs to first denoise the noisy inputs and then to make predictions based on these denoised versions.
Unlike previous denoised smoothing techniques in computer vision, which require training a separate model to enhance the robustness of LLMs, our method offers significantly better efficiency and flexibility.
arXiv Detail & Related papers (2024-04-18T15:47:00Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to the model stealing attack, a nefarious endeavor geared towards duplicating the target model via query permissions.
We introduce three model stealing attacks to adapt to different actual scenarios.
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- Privacy Threats in Stable Diffusion Models [0.7366405857677227]
This paper introduces a novel approach to membership inference attacks (MIA) targeting stable diffusion computer vision models.
MIAs aim to extract sensitive information about a model's training data, posing significant privacy concerns.
We devise a black-box MIA that only needs to query the victim model repeatedly.
arXiv Detail & Related papers (2023-11-15T20:31:40Z)
- A Discrepancy Aware Framework for Robust Anomaly Detection [51.710249807397695]
We present a Discrepancy Aware Framework (DAF), which demonstrates robust performance consistently with simple and cheap strategies.
Our method leverages an appearance-agnostic cue to guide the decoder in identifying defects, thereby alleviating its reliance on synthetic appearance.
Under the simple synthesis strategies, it outperforms existing methods by a large margin. Furthermore, it also achieves the state-of-the-art localization performance.
arXiv Detail & Related papers (2023-10-11T15:21:40Z)
- Feature Matching Data Synthesis for Non-IID Federated Learning [7.740333805796447]
Federated learning (FL) trains neural networks on edge devices without collecting data at a central server.
This paper proposes a hard feature matching data synthesis (HFMDS) method to share auxiliary data besides local models.
For better privacy preservation, we propose a hard feature augmentation method to transfer real features towards the decision boundary.
arXiv Detail & Related papers (2023-08-09T07:49:39Z)
- An Efficient Membership Inference Attack for the Diffusion Model by Proximal Initialization [58.88327181933151]
In this paper, we propose an efficient query-based membership inference attack (MIA).
Experimental results indicate that the proposed method can achieve competitive performance with only two queries on both discrete-time and continuous-time diffusion models.
To the best of our knowledge, this work is the first to study the robustness of diffusion models to MIA in the text-to-speech task.
arXiv Detail & Related papers (2023-05-26T16:38:48Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Concealing Sensitive Samples against Gradient Leakage in Federated Learning [41.43099791763444]
Federated Learning (FL) is a distributed learning paradigm that enhances users' privacy by eliminating the need for clients to share raw, private data with the server.
Recent studies expose the vulnerability of FL to model inversion attacks, where adversaries reconstruct users' private data by eavesdropping on the shared gradient information.
We present a simple, yet effective defense strategy that obfuscates the gradients of the sensitive data with concealed samples.
arXiv Detail & Related papers (2022-09-13T04:19:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.