Patch-MI: Enhancing Model Inversion Attacks via Patch-Based Reconstruction
- URL: http://arxiv.org/abs/2312.07040v1
- Date: Tue, 12 Dec 2023 07:52:35 GMT
- Title: Patch-MI: Enhancing Model Inversion Attacks via Patch-Based Reconstruction
- Authors: Jonggyu Jang, Hyeonsu Lyu, Hyun Jong Yang
- Abstract summary: We introduce a groundbreaking approach named Patch-MI, inspired by jigsaw puzzle assembly.
We build upon a new probabilistic interpretation of MI attacks, employing a generative adversarial network (GAN)-like framework with a patch-based discriminator.
Our numerical and graphical findings demonstrate that Patch-MI surpasses existing generative MI methods in terms of accuracy.
- Score: 8.164433158925593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model inversion (MI) attacks aim to reveal sensitive information in training
datasets by solely accessing model weights. Generative MI attacks, a prominent
strand in this field, utilize auxiliary datasets to recreate target data
attributes while constraining the images to remain photo-realistic, but their success
often depends on the similarity between auxiliary and target datasets. If the
distributions are dissimilar, existing MI attack attempts frequently fail,
yielding unrealistic or target-unrelated results. In response to these
challenges, we introduce a groundbreaking approach named Patch-MI, inspired by
jigsaw puzzle assembly. To this end, we build upon a new probabilistic
interpretation of MI attacks, employing a generative adversarial network
(GAN)-like framework with a patch-based discriminator. This approach allows the
synthesis of images that are similar to the target dataset distribution, even
in cases of a dissimilar auxiliary dataset distribution. Moreover, we employ a
random transformation block that crafts generalized images, thus enhancing the
efficacy of the attack on the target classifier. Our
numerical and graphical findings demonstrate that Patch-MI surpasses existing
generative MI methods in terms of accuracy, marking significant advancements
while preserving comparable statistical dataset quality. For reproducibility of
our results, we make our source code publicly available at
https://github.com/jonggyujang0123/Patch-Attack.
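The two mechanisms named in the abstract, a patch-based discriminator that judges realism per local patch (so realistic patches can be "reassembled" even when the auxiliary distribution differs globally) and a random transformation block that makes synthesized images fool the target classifier robustly, can be illustrated with a minimal PyTorch sketch. All shapes, layer choices, the toy generator, and the loss combination below are illustrative assumptions, not the authors' implementation; the actual code is in the linked repository.

```python
# Minimal sketch of a patch-based discriminator (PatchGAN-style) plus a
# random transformation block, the two components named in the abstract.
# Architectures and losses are assumptions for illustration only; see
# https://github.com/jonggyujang0123/Patch-Attack for the authors' code.
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """Fully convolutional discriminator: one real/fake logit per patch."""
    def __init__(self, in_ch: int = 3, base: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base, base * 2, 4, stride=2, padding=1),
            nn.BatchNorm2d(base * 2),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 2, 1, 4, stride=1, padding=1),  # logit map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (B, 1, H', W'): each cell scores one receptive-field patch
        return self.net(x)

class RandomTransformBlock(nn.Module):
    """Random flip/shift so generated images fool the target classifier
    robustly, not just at one pose (an assumed, simplified variant)."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if torch.rand(()) < 0.5:
            x = torch.flip(x, dims=[3])               # horizontal flip
        shift = int(torch.randint(-2, 3, ()).item())  # small translation
        return torch.roll(x, shifts=shift, dims=3)

# One generator step: patch realism + target-classifier confidence on
# randomly transformed samples (the discriminator's opposing update and
# real training data are omitted for brevity).
G = nn.Sequential(nn.Linear(128, 3 * 32 * 32), nn.Tanh())  # toy generator
D, T = PatchDiscriminator(), RandomTransformBlock()
target = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in

z = torch.randn(8, 128)
fake = G(z).view(8, 3, 32, 32)
adv_loss = nn.functional.softplus(-D(fake)).mean()  # fool every patch
cls_loss = nn.functional.cross_entropy(target(T(fake)),
                                       torch.full((8,), 3))  # target label 3
(adv_loss + cls_loss).backward()
```

Scoring realism per patch rather than per image is what lets the attack borrow locally realistic texture from a dissimilar auxiliary dataset, in the spirit of assembling a jigsaw puzzle.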
Related papers
- Model Inversion Attacks Through Target-Specific Conditional Diffusion Models [54.69008212790426]
Model inversion attacks (MIAs) aim to reconstruct private images from a target classifier's training set, thereby raising privacy concerns in AI applications.
Previous GAN-based MIAs tend to suffer from inferior generative fidelity due to GAN's inherent flaws and biased optimization within latent space.
We propose Diffusion-based Model Inversion (Diff-MI) attacks to alleviate these issues.
arXiv Detail & Related papers (2024-07-16T06:38:49Z)
- Prediction Exposes Your Face: Black-box Model Inversion via Prediction Alignment [24.049615035939237]
Model inversion (MI) attack reconstructs the private training data of a target model given its output.
We propose a novel Prediction-to-Image (P2I) method for black-box MI attacks.
Our method improves attack accuracy by 8.5% and reduces query numbers by 99% on the CelebA dataset.
arXiv Detail & Related papers (2024-07-11T01:58:35Z)
- Practical Membership Inference Attacks Against Large-Scale Multi-Modal Models: A Pilot Study [17.421886085918608]
Membership inference attacks (MIAs) aim to infer whether a data point has been used to train a machine learning model.
These attacks can be employed to identify potential privacy vulnerabilities and detect unauthorized use of personal data.
This paper takes a first step towards developing practical MIAs against large-scale multi-modal models.
arXiv Detail & Related papers (2023-09-29T19:38:40Z)
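The membership-inference setup summarized in the pilot-study entry above is often illustrated with a simple loss-threshold baseline: samples the model fits unusually well are guessed to be training members. A minimal sketch follows, with a stand-in classifier rather than the paper's large multi-modal targets; the threshold `tau` and all shapes are illustrative assumptions.

```python
# A minimal membership-inference baseline, for illustration only:
# threshold the per-sample loss; unusually confident (low-loss) samples
# are guessed to be training members. Not the pilot-study method, which
# targets large multi-modal models.
import torch
import torch.nn.functional as F

@torch.no_grad()
def loss_threshold_mia(model, x, y, tau: float) -> torch.Tensor:
    """Return a boolean member/non-member guess per sample."""
    losses = F.cross_entropy(model(x), y, reduction="none")
    return losses < tau  # low loss => likely seen during training

# Usage with a toy model; tau would be calibrated on known non-member
# data (e.g., a quantile of their losses).
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(784, 10))
x, y = torch.randn(16, 1, 28, 28), torch.randint(0, 10, (16,))
print(loss_threshold_mia(model, x, y, tau=2.0))
```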
- Unstoppable Attack: Label-Only Model Inversion via Conditional Diffusion Model [14.834360664780709]
Model inversion attacks (MIAs) aim to recover private data from inaccessible training sets of deep learning models.
This paper develops a novel MIA method, leveraging a conditional diffusion model (CDM) to recover representative samples under the target label.
Experimental results show that this method can generate similar and accurate samples to the target label, outperforming generators of previous approaches.
arXiv Detail & Related papers (2023-07-17T12:14:24Z)
- PixMIM: Rethinking Pixel Reconstruction in Masked Image Modeling [83.67628239775878]
Masked Image Modeling (MIM) has achieved promising progress with the advent of Masked Autoencoders (MAE) and BEiT.
This paper undertakes a fundamental analysis of MIM from the perspective of pixel reconstruction.
We propose a remarkably simple and effective method, PixMIM, that entails two strategies.
arXiv Detail & Related papers (2023-03-04T13:38:51Z)
- Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network [102.21368201494909]
Model inversion (MI) attacks have raised increasing concerns about privacy.
Recent MI attacks leverage a generative adversarial network (GAN) as an image prior to narrow the search space.
We propose a Pseudo Label-Guided MI (PLG-MI) attack via a conditional GAN (cGAN).
arXiv Detail & Related papers (2023-02-20T07:29:34Z)
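The "GAN as an image prior" recipe behind PLG-MI and several entries here is commonly realized by freezing a pretrained generator and optimizing only a latent code until the target classifier confidently predicts the wanted class. A generic, unconditional sketch with toy stand-in networks follows; PLG-MI itself uses a conditional GAN guided by pseudo labels.

```python
# Generic GAN-prior model inversion: freeze a pretrained generator G and
# search its latent space for an image the target classifier assigns to
# class y. Toy stand-in networks; not PLG-MI's conditional variant.
import torch
import torch.nn.functional as F

G = torch.nn.Sequential(torch.nn.Linear(64, 3 * 32 * 32), torch.nn.Tanh())
target = torch.nn.Sequential(torch.nn.Flatten(),
                             torch.nn.Linear(3 * 32 * 32, 10))
for p in list(G.parameters()) + list(target.parameters()):
    p.requires_grad_(False)  # only the latent code is optimized

y = torch.tensor([7])                        # class to invert
z = torch.randn(1, 64, requires_grad=True)   # latent code under the GAN prior
opt = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    opt.zero_grad()
    img = G(z).view(1, 3, 32, 32)
    loss = F.cross_entropy(target(img), y)   # make the target predict y
    loss.backward()
    opt.step()
# G(z) now approximates what the target model associates with class 7.
```

Restricting the search to the generator's range is what keeps reconstructions photo-realistic; this is also the dependency that Plug & Play Attacks, below, loosen so one GAN can attack many targets.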
- Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks [13.374754708543449]
Model inversion attacks (MIAs) aim to create synthetic images that reflect the class-wise characteristics of a target model's training data by exploiting the model's learned knowledge.
Previous research has developed generative MIAs using generative adversarial networks (GANs) as image priors tailored to a specific target model.
We present Plug & Play Attacks that loosen the dependency between the target model and image prior and enable the use of a single trained GAN to attack a broad range of targets.
arXiv Detail & Related papers (2022-01-28T15:25:50Z)
- Reconstructing Training Data from Diverse ML Models by Ensemble Inversion [8.414622657659168]
Model Inversion (MI), in which an adversary abuses access to a trained Machine Learning (ML) model, has attracted increasing research attention.
We propose an ensemble inversion technique that estimates the distribution of original training data by training a generator constrained by an ensemble of trained models.
We achieve high quality results without any dataset and show how utilizing an auxiliary dataset that's similar to the presumed training data improves the results.
arXiv Detail & Related papers (2021-11-05T18:59:01Z)
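Ensemble inversion as summarized above trains the generator itself, rather than a latent code, so that every fixed model in an ensemble confidently assigns its outputs to the target class. A minimal sketch with assumed toy architectures:

```python
# Sketch of ensemble inversion: train a generator so that every model in
# a fixed ensemble assigns its samples to the target class, using
# agreement across models as a stand-in for the unknown training
# distribution. Architectures and shapes are illustrative only.
import torch
import torch.nn.functional as F

ensemble = [torch.nn.Sequential(torch.nn.Flatten(),
                                torch.nn.Linear(784, 10))
            for _ in range(3)]
for m in ensemble:
    for p in m.parameters():
        p.requires_grad_(False)  # trained models are fixed; only G learns

G = torch.nn.Sequential(torch.nn.Linear(32, 784), torch.nn.Tanh())
opt = torch.optim.Adam(G.parameters(), lr=1e-3)

for step in range(100):
    opt.zero_grad()
    x = G(torch.randn(16, 32)).view(16, 1, 28, 28)
    y = torch.full((16,), 4)  # invert class 4
    loss = sum(F.cross_entropy(m(x), y) for m in ensemble) / len(ensemble)
    loss.backward()
    opt.step()
```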
- Delving into Data: Effectively Substitute Training for Black-box Attack [84.85798059317963]
We propose a novel perspective on substitute training that focuses on designing the distribution of data used in the knowledge-stealing process.
The combination of the proposed modules further boosts the consistency between the substitute model and the target model, which greatly improves the effectiveness of the adversarial attack.
arXiv Detail & Related papers (2021-04-26T07:26:29Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
- How Does Data Augmentation Affect Privacy in Machine Learning? [94.52721115660626]
We propose new membership inference (MI) attacks that utilize the information of augmented data.
We establish the optimal membership inference when the model is trained with augmented data.
arXiv Detail & Related papers (2020-07-21T02:21:10Z)
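The augmentation-aware attack in the last entry can be approximated by scoring a candidate sample with its average loss over the same augmentations used during training, then thresholding: members tend to be fit well across all of their views. A simplified sketch (flips only, toy model), not the paper's optimal attack:

```python
# Illustrative augmentation-aware membership inference: score a candidate
# by its average loss over augmented views (here, horizontal flips), then
# threshold. A simplified stand-in for the paper's optimal attack, with
# assumed toy shapes.
import torch
import torch.nn.functional as F

@torch.no_grad()
def augmented_mia_score(model, x, y) -> torch.Tensor:
    views = [x, torch.flip(x, dims=[3])]  # the sample plus its flipped view
    losses = [F.cross_entropy(model(v), y, reduction="none") for v in views]
    return torch.stack(losses).mean(0)    # low score => likely a member

model = torch.nn.Sequential(torch.nn.Flatten(),
                            torch.nn.Linear(3 * 32 * 32, 10))
x, y = torch.randn(4, 3, 32, 32), torch.randint(0, 10, (4,))
print(augmented_mia_score(model, x, y) < 2.0)  # member guesses
```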