Patch-MI: Enhancing Model Inversion Attacks via Patch-Based
Reconstruction
- URL: http://arxiv.org/abs/2312.07040v1
- Date: Tue, 12 Dec 2023 07:52:35 GMT
- Title: Patch-MI: Enhancing Model Inversion Attacks via Patch-Based
Reconstruction
- Authors: Jonggyu Jang, Hyeonsu Lyu, Hyun Jong Yang
- Abstract summary: We introduce a groundbreaking approach named Patch-MI, inspired by jigsaw puzzle assembly.
We build upon a new probabilistic interpretation of MI attacks, employing a generative adversarial network (GAN)-like framework with a patch-based discriminator.
Our numerical and graphical findings demonstrate that Patch-MI surpasses existing generative MI methods in terms of accuracy.
- Score: 8.164433158925593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Model inversion (MI) attacks aim to reveal sensitive information in training
datasets by solely accessing model weights. Generative MI attacks, a prominent
strand in this field, utilize auxiliary datasets to recreate target data
attributes, restricting the images to remain photo-realistic, but their success
often depends on the similarity between auxiliary and target datasets. If the
distributions are dissimilar, existing MI attack attempts frequently fail,
yielding unrealistic or target-unrelated results. In response to these
challenges, we introduce a groundbreaking approach named Patch-MI, inspired by
jigsaw puzzle assembly. To this end, we build upon a new probabilistic
interpretation of MI attacks, employing a generative adversarial network
(GAN)-like framework with a patch-based discriminator. This approach allows the
synthesis of images that are similar to the target dataset distribution, even
in cases of dissimilar auxiliary dataset distribution. Moreover, we artfully
employ a random transformation block, a sophisticated maneuver that crafts
generalized images, thus enhancing the efficacy of the target classifier. Our
numerical and graphical findings demonstrate that Patch-MI surpasses existing
generative MI methods in terms of accuracy, marking significant advancements
while preserving comparable statistical dataset quality. For reproducibility of
our results, we make our source code publicly available at
https://github.com/jonggyujang0123/Patch-Attack.
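
The core idea behind the patch-based discriminator is to score local image regions and aggregate those scores, rather than judging the whole image at once. A minimal NumPy sketch of this patch-aggregation idea follows; the function names, patch size, and stride are illustrative assumptions, not taken from the paper's implementation:

```python
import numpy as np

def extract_patches(img, patch=8, stride=4):
    """Slice an HxW image into overlapping patch x patch tiles."""
    h, w = img.shape
    return [img[i:i + patch, j:j + patch]
            for i in range(0, h - patch + 1, stride)
            for j in range(0, w - patch + 1, stride)]

def patch_discriminator_score(img, score_fn, patch=8, stride=4):
    """Aggregate per-patch realism scores instead of one whole-image score.

    score_fn stands in for a learned per-patch discriminator; here any
    callable mapping a patch to a scalar works.
    """
    patches = extract_patches(img, patch, stride)
    return float(np.mean([score_fn(p) for p in patches]))
```

Because each patch is judged independently, locally realistic regions from a dissimilar auxiliary dataset can still receive high scores, which is what lets the generator assemble target-like images "jigsaw-style" from auxiliary patches.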
Related papers
- Model Inversion Attacks Through Target-Specific Conditional Diffusion Models [54.69008212790426]
Model inversion attacks (MIAs) aim to reconstruct private images from a target classifier's training set, thereby raising privacy concerns in AI applications.
Previous GAN-based MIAs tend to suffer from inferior generative fidelity due to GAN's inherent flaws and biased optimization within latent space.
We propose Diffusion-based Model Inversion (Diff-MI) attacks to alleviate these issues.
arXiv Detail & Related papers (2024-07-16T06:38:49Z)
- Breaking the Black-Box: Confidence-Guided Model Inversion Attack for
Distribution Shift [0.46040036610482665]
Model inversion attacks (MIAs) seek to infer the private training data of a target classifier by generating synthetic images that reflect the characteristics of the target class.
Previous studies have relied on full access to the target model, which is not practical in real-world scenarios.
This paper proposes a Confidence-Guided Model Inversion attack method called CG-MI.
arXiv Detail & Related papers (2024-02-28T03:47:17Z)
- Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degenerate when training data are different from testing data.
We propose a novel adversarial information network (AIN) to address this degradation.
arXiv Detail & Related papers (2023-05-23T02:14:11Z)
- Pseudo Label-Guided Model Inversion Attack via Conditional Generative
Adversarial Network [102.21368201494909]
Model inversion (MI) attacks have raised increasing concerns about privacy.
Recent MI attacks leverage a generative adversarial network (GAN) as an image prior to narrow the search space.
We propose the Pseudo Label-Guided MI (PLG-MI) attack via a conditional GAN (cGAN).
arXiv Detail & Related papers (2023-02-20T07:29:34Z)
- MAPS: A Noise-Robust Progressive Learning Approach for Source-Free
Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z)
- Meta Generative Attack on Person Reidentification [0.0]
We propose a method with the goal of achieving better transferability against different models and across datasets.
We generate a mask to obtain better performance across models and use meta learning to boost the generalizability in the challenging cross-dataset cross-model setting.
arXiv Detail & Related papers (2023-01-16T07:08:51Z)
- Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks [13.374754708543449]
Model inversion attacks (MIAs) aim to create synthetic images that reflect the class-wise characteristics of a target model's training data by exploiting the model's learned knowledge.
Previous research has developed generative MIAs using generative adversarial networks (GANs) as image priors tailored to a specific target model.
We present Plug & Play Attacks that loosen the dependency between the target model and image prior and enable the use of a single trained GAN to attack a broad range of targets.
arXiv Detail & Related papers (2022-01-28T15:25:50Z)
- A Hierarchical Transformation-Discriminating Generative Model for Few
Shot Anomaly Detection [93.38607559281601]
We devise a hierarchical generative model that captures the multi-scale patch distribution of each training image.
The anomaly score is obtained by aggregating the patch-based votes of the correct transformation across scales and image regions.
arXiv Detail & Related papers (2021-04-29T17:49:48Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.