Knowledge-Enriched Distributional Model Inversion Attacks
- URL: http://arxiv.org/abs/2010.04092v4
- Date: Thu, 19 Aug 2021 15:19:34 GMT
- Title: Knowledge-Enriched Distributional Model Inversion Attacks
- Authors: Si Chen, Mostafa Kahla, Ruoxi Jia and Guo-Jun Qi
- Abstract summary: Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model inversion (MI) attacks are aimed at reconstructing training data from
model parameters. Such attacks have triggered increasing concerns about
privacy, especially given a growing number of online model repositories.
However, existing MI attacks against deep neural networks (DNNs) leave large
room for performance improvement. We present a novel inversion-specific GAN
that can better distill knowledge useful for performing attacks on private
models from public data. In particular, we train the discriminator to
differentiate not only the real and fake samples but also the soft labels
provided by the target model. Moreover, unlike previous work that directly
searches for
a single data point to represent a target class, we propose to model a private
data distribution for each target class. Our experiments show that the
combination of these techniques can significantly boost the success rate of the
state-of-the-art MI attacks by 150%, and generalize better to a variety of
datasets and models. Our code is available at
https://github.com/SCccc21/Knowledge-Enriched-DMI.
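To make the two ideas concrete, here is a minimal, hypothetical PyTorch sketch, not the released implementation (see the repository above for that). The generator G, discriminator D, and target_model are assumed placeholders: D gets an extra classification head trained against the target model's soft labels, and each target class is recovered by optimizing a Gaussian distribution over latent codes rather than a single point.
```python
import torch
import torch.nn.functional as F

# Hypothetical components (placeholders, not the repository's API):
#   G: latent vector -> image
#   D: image -> (real/fake logit, class logits)
#   target_model: image -> logits over the private classes

def discriminator_loss(D, target_model, real_x, fake_x):
    """Train D to separate real from fake AND to reproduce the target
    model's soft labels on public data (knowledge distillation)."""
    real_logit, real_cls = D(real_x)
    fake_logit, _ = D(fake_x.detach())
    adv = F.softplus(-real_logit).mean() + F.softplus(fake_logit).mean()
    with torch.no_grad():
        soft = F.softmax(target_model(real_x), dim=1)   # soft labels
    distill = F.kl_div(F.log_softmax(real_cls, dim=1), soft,
                       reduction="batchmean")
    return adv + distill

def invert_class(G, target_model, cls, dim=100, steps=1000, lr=0.02):
    """Recover a *distribution* over latents for one target class:
    optimize the mean and log-std of a Gaussian instead of a single z."""
    mu = torch.zeros(dim, requires_grad=True)
    log_std = torch.zeros(dim, requires_grad=True)
    opt = torch.optim.Adam([mu, log_std], lr=lr)
    labels = torch.full((64,), cls, dtype=torch.long)
    for _ in range(steps):
        eps = torch.randn(64, dim)              # reparameterization trick
        z = mu + log_std.exp() * eps
        loss = F.cross_entropy(target_model(G(z)), labels)
        opt.zero_grad(); loss.backward(); opt.step()
    return mu.detach(), log_std.exp().detach()
```
Sampling several z from the returned Gaussian then yields a variety of reconstructions for the class, rather than one fixed image.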
Related papers
- Model Inversion Attacks Through Target-Specific Conditional Diffusion Models [54.69008212790426]
Model inversion attacks (MIAs) aim to reconstruct private images from a target classifier's training set, thereby raising privacy concerns in AI applications.
Previous GAN-based MIAs tend to suffer from inferior generative fidelity due to GAN's inherent flaws and biased optimization within latent space.
We propose Diffusion-based Model Inversion (Diff-MI) attacks to alleviate these issues.
arXiv Detail & Related papers (2024-07-16T06:38:49Z)
- Distributional Black-Box Model Inversion Attack with Multi-Agent Reinforcement Learning [19.200221582814518]
This paper proposes a novel Distributional Black-Box Model Inversion (DBB-MI) attack that constructs a probabilistic latent space in which to search for the target's private data.
Because the learned latent distribution closely aligns with the private data in latent space, samples recovered from it leak significant information about the target model's training data.
Experiments on diverse datasets and networks show that DBB-MI outperforms the state of the art in attack accuracy, K-nearest-neighbor feature distance, and peak signal-to-noise ratio.
arXiv Detail & Related papers (2024-04-22T04:18:38Z)
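Below is a rough, hypothetical sketch of the distributional search just described. It substitutes a simple cross-entropy-method (CEM) update for the paper's multi-agent reinforcement learning, keeping only the core idea: fit a latent probability distribution using nothing but black-box probability queries. G and query_prob are assumed placeholders.
```python
import numpy as np

def distributional_bbox_search(G, query_prob, target_cls, dim=100,
                               iters=200, pop=64, elite=8, seed=0):
    """Fit a Gaussian over GAN latents so generated images are
    confidently classified as `target_cls` by a black-box model.
    `query_prob(x)` returns the model's probability vector for x;
    the CEM update stands in for the paper's multi-agent RL."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        z = mu + sigma * rng.standard_normal((pop, dim))   # sample latents
        scores = np.array([query_prob(G(zi))[target_cls] for zi in z])
        best = z[np.argsort(scores)[-elite:]]              # elite samples
        mu, sigma = best.mean(axis=0), best.std(axis=0) + 1e-3
    return mu, sigma   # latent distribution aligned with the private class
```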
- Privacy Backdoors: Enhancing Membership Inference through Poisoning Pre-trained Models [112.48136829374741]
In this paper, we unveil a new vulnerability: the privacy backdoor attack.
When a victim fine-tunes a backdoored model, their training data will be leaked at a significantly higher rate than if they had fine-tuned a typical model.
Our findings highlight a critical privacy concern within the machine learning community and call for a reevaluation of safety protocols in the use of open-source pre-trained models.
arXiv Detail & Related papers (2024-04-01T16:50:54Z)
- Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study [17.421886085918608]
Membership inference attacks (MIAs) aim to infer whether a data point has been used to train a machine learning model.
These attacks can be employed to identify potential privacy vulnerabilities and detect unauthorized use of personal data.
This paper takes a first step towards developing practical MIAs against large-scale multi-modal models.
arXiv Detail & Related papers (2023-09-29T19:38:40Z)
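As a deliberately simplified illustration of the MIA setting, not the paper's calibrated attack, the sketch below guesses membership for a CLIP-style multi-modal model from image-text embedding similarity, on the intuition that training pairs are embedded unusually close together. The embeddings and threshold are assumptions.
```python
import numpy as np

def cosine_mia(image_emb, text_emb, threshold=0.35):
    """Toy membership test for a multi-modal (CLIP-style) model: a pair
    whose image/text embeddings are unusually similar is guessed to be a
    training member. Embedding vectors and threshold are illustrative
    placeholders, not the paper's attack."""
    a = image_emb / np.linalg.norm(image_emb)
    b = text_emb / np.linalg.norm(text_emb)
    return float(a @ b) > threshold   # True -> predicted member
```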
- Model Inversion Attack via Dynamic Memory Learning [41.742953947551364]
Model Inversion (MI) attacks aim to recover private training data from the target model.
Recent advances in generative adversarial networks (GANs) have made them particularly effective tools for MI attacks.
We propose a novel Dynamic Memory Model Inversion Attack (DMMIA) that leverages historically learned knowledge.
arXiv Detail & Related papers (2023-08-24T02:32:59Z)
- Membership Inference Attacks against Synthetic Data through Overfitting Detection [84.02632160692995]
We argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model.
arXiv Detail & Related papers (2023-02-24T11:27:39Z)
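The local-overfitting idea behind DOMIAS can be hedged into a short density-ratio sketch: compare a candidate point's density under the synthetic data with its density under a reference sample of the population. The plain Gaussian KDEs below are a simplification assumed for illustration, not the paper's density models.
```python
import numpy as np
from scipy.stats import gaussian_kde

def domias_style_score(x, synthetic_data, reference_data):
    """Density-ratio membership score in the spirit of DOMIAS. Data
    arrays are (n_samples, n_features); x is a single (n_features,)
    point. A high p_synthetic(x) / p_reference(x) suggests the
    generative model locally overfit x, i.e. x was likely a member."""
    p_syn = gaussian_kde(synthetic_data.T)(x)   # density under synthetic data
    p_ref = gaussian_kde(reference_data.T)(x)   # density under the population
    return float(p_syn / np.maximum(p_ref, 1e-12))
```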
- Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network [102.21368201494909]
Model inversion (MI) attacks have raised increasing concerns about privacy.
Recent MI attacks leverage a generative adversarial network (GAN) as an image prior to narrow the search space.
We propose the Pseudo Label-Guided MI (PLG-MI) attack via a conditional GAN (cGAN).
arXiv Detail & Related papers (2023-02-20T07:29:34Z)
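The shared mechanism here, a conditional GAN serving as an image prior whose latent space is searched toward a target class, can be sketched as follows. G(z, y) and target_model are placeholders, and PLG-MI's pseudo-label-guided cGAN training is omitted; this is a generic latent-space search under those assumptions.
```python
import torch
import torch.nn.functional as F

def cgan_prior_inversion(G, target_model, cls, dim=128, steps=500, lr=0.05):
    """Generic GAN-prior inversion: search the latent space of a
    pretrained conditional generator G(z, y) so the target model
    assigns class `cls` with high confidence. G and target_model are
    placeholders; PLG-MI additionally retrains the cGAN with pseudo
    labels, which is omitted here."""
    z = torch.randn(1, dim, requires_grad=True)
    y = torch.tensor([cls])                     # conditioning label
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(target_model(G(z, y)), y)
        opt.zero_grad(); loss.backward(); opt.step()
    return G(z, y).detach()                     # reconstruction for `cls`
```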
- Label-Only Model Inversion Attacks via Boundary Repulsion [12.374249336222906]
We introduce BREP-MI, an algorithm that inverts private training data using only the target model's predicted labels.
Using the example of face recognition, we show that the images reconstructed by BREP-MI successfully reproduce the semantics of the private training data.
arXiv Detail & Related papers (2022-03-03T18:57:57Z)
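A loose, hypothetical reading of one boundary-repulsion step: probe random directions with hard-label queries, keep the directions whose probes stay classified as the target class, and move along their average, pushing the estimate away from the decision boundary. All names and step sizes are illustrative.
```python
import numpy as np

def boundary_repulsion_step(x, target_cls, predict_label, step=0.5,
                            n_dirs=32, rng=None):
    """One label-only update in the spirit of BREP-MI. `x` is a flat
    latent (or image) vector already classified as `target_cls`;
    `predict_label(x)` returns only the model's hard label."""
    rng = rng or np.random.default_rng(0)
    dirs = rng.standard_normal((n_dirs, x.size))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # unit directions
    kept = [d for d in dirs if predict_label(x + step * d) == target_cls]
    # No direction keeps the label -> we sit near the boundary; stay put.
    return x if not kept else x + step * np.mean(kept, axis=0)
```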
- How Does Data Augmentation Affect Privacy in Machine Learning? [94.52721115660626]
We propose new MI attacks that exploit the information in augmented data.
We establish the optimal membership inference when the model is trained with augmented data.
arXiv Detail & Related papers (2020-07-21T02:21:10Z)
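A hedged sketch of how augmented data can strengthen membership inference: threshold the mean loss over augmented copies of an example rather than the loss of the single example, since a model trained with augmentation tends to fit all augmentations of its training members. `loss_fn` and `augmentations` are illustrative placeholders, not the paper's optimal test.
```python
import numpy as np

def augmented_mia(x, y, loss_fn, augmentations, threshold=0.5):
    """Membership test that uses augmented copies of x: the mean loss
    over the augmentations used during training separates members from
    non-members better than a single-sample loss. `loss_fn(x, y)` and
    `augmentations` (a list of callables) are placeholders."""
    losses = [loss_fn(aug(x), y) for aug in augmentations]
    return float(np.mean(losses)) < threshold   # True -> predicted member
```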
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.