Generative Model-Based Attack on Learnable Image Encryption for
Privacy-Preserving Deep Learning
- URL: http://arxiv.org/abs/2303.05036v1
- Date: Thu, 9 Mar 2023 05:00:17 GMT
- Title: Generative Model-Based Attack on Learnable Image Encryption for
Privacy-Preserving Deep Learning
- Authors: AprilPyone MaungMaung and Hitoshi Kiya
- Abstract summary: We propose a novel generative model-based attack on learnable image encryption methods proposed for privacy-preserving deep learning.
We use two state-of-the-art generative models: a StyleGAN-based model and a latent diffusion-based one.
Results show that images reconstructed by the proposed method have perceptual similarities to plain images.
- Score: 14.505867475659276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel generative model-based attack on learnable
image encryption methods proposed for privacy-preserving deep learning. Various
learnable encryption methods have been studied to protect the sensitive visual
information of plain images, and some of them have been shown to be robust
against all existing attacks. However, previous attacks on image
encryption focus only on traditional cryptanalytic attacks or reverse
translation models, so these attacks cannot recover any visual information if a
block-scrambling encryption step, which effectively destroys global
information, is applied. Accordingly, in this paper, generative models are
explored to evaluate whether such models can restore sensitive visual
information from encrypted images for the first time. We first point out that
encrypted images have some similarity with plain images in the embedding space.
By taking advantage of leaked information from encrypted images, we propose a
guided generative model as an attack on learnable image encryption to recover
personally identifiable visual information. We implement the proposed attack in
two ways by utilizing two state-of-the-art generative models: a StyleGAN-based
model and a latent diffusion-based one. Experiments were carried out on the
CelebA-HQ and ImageNet datasets. Results show that images reconstructed by the
proposed method have perceptual similarities to plain images.
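The guidance described in the abstract can be pictured as latent optimization against the embedding leaked by an encrypted image. The following is a minimal sketch under assumed placeholders (`generator` for a pretrained StyleGAN-style generator, `embedder` for a feature extractor whose embedding space leaks similarity between encrypted and plain images); it is not the authors' exact StyleGAN- or latent-diffusion-based pipeline.

```python
# Hedged sketch of embedding-guided latent optimization; all modules are placeholders.
import torch
import torch.nn.functional as F

def guided_reconstruction(encrypted_img, generator, embedder,
                          latent_dim=512, steps=500, lr=0.05):
    """Optimize a latent code so that the generated image's embedding
    approaches the embedding leaked by the encrypted image."""
    target = embedder(encrypted_img).detach()           # "leaked" embedding of the encrypted image
    w = torch.randn(1, latent_dim, requires_grad=True)  # latent code to optimize
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        candidate = generator(w)                        # candidate plain-looking image
        loss = F.mse_loss(embedder(candidate), target)  # embedding-space guidance
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return generator(w)                             # perceptually similar reconstruction
```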
Related papers
- Unveiling Hidden Visual Information: A Reconstruction Attack Against Adversarial Visual Information Hiding [6.649753747542211]
A representative image encryption method is adversarial visual information hiding (AVIH).
In the AVIH method, the type-I adversarial example approach creates images that appear completely different but are still recognized by machines as the original ones.
We introduce a dual-strategy DR attack against the AVIH encryption method by incorporating (1) a generative-adversarial loss and (2) an augmented identity loss.
arXiv Detail & Related papers (2024-08-08T06:58:48Z)
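A rough sketch of the dual-strategy reconstruction (DR) attack described in the entry above, assuming PyTorch placeholders for the reconstruction network, discriminator, and identity embedder; the identity target here is simplified compared with the paper's augmented identity loss.

```python
# Hedged sketch: one training step combining (1) a generative-adversarial loss
# and (2) an identity loss; all modules and weights are illustrative placeholders.
import torch
import torch.nn.functional as F

def dr_training_step(encrypted, dr_net, discriminator, id_embedder, opt, lambda_id=1.0):
    recon = dr_net(encrypted)                    # candidate reconstruction of the hidden face
    # (1) adversarial loss: push reconstructions toward the real-image manifold
    logits = discriminator(recon)
    adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))
    # (2) identity loss: keep the reconstruction's identity embedding close to the
    #     embedding obtainable from the encrypted image (a simplification)
    id_loss = 1 - F.cosine_similarity(id_embedder(recon), id_embedder(encrypted)).mean()
    loss = adv_loss + lambda_id * id_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```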
- Attack GAN (AGAN): A new Security Evaluation Tool for Perceptual Encryption [1.6385815610837167]
Training state-of-the-art (SOTA) deep learning models requires a large amount of data.
Perceptual encryption converts images into an unrecognizable format to protect the sensitive visual information in the training data.
This comes at the cost of a significant reduction in the accuracy of the models.
Adversarial Visual Information Hiding (AVIH) overcomes this drawback to protect image privacy by attempting to create encrypted images that are unrecognizable to the human eye.
arXiv Detail & Related papers (2024-07-09T06:03:32Z)
- Recoverable Privacy-Preserving Image Classification through Noise-like Adversarial Examples [26.026171363346975]
Cloud-based image-related services such as classification have become crucial.
In this study, we propose a novel privacy-preserving image classification scheme.
Encrypted images can be decrypted back into their original form with high fidelity (recoverable) using a secret key.
arXiv Detail & Related papers (2023-10-19T13:01:58Z)
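The "recoverable with a secret key" property from the entry above can be illustrated with a deliberately simplified, key-seeded perturbation; this is not the paper's adversarial-example construction, only a sketch of why the same key allows exact removal of the added noise.

```python
# Simplified illustration of key-based recoverability (an assumption, not the
# paper's actual noise-like adversarial-example method).
import torch

def keyed_perturbation(shape, secret_key: int, eps: float = 0.3):
    g = torch.Generator().manual_seed(secret_key)          # the secret key seeds the noise
    return eps * (2 * torch.rand(shape, generator=g) - 1)

def encrypt(image, secret_key):
    # no clamping here, so the decryption below is exactly invertible
    return image + keyed_perturbation(image.shape, secret_key)

def decrypt(encrypted, secret_key):
    return (encrypted - keyed_perturbation(encrypted.shape, secret_key)).clamp(0, 1)
```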
- BAGM: A Backdoor Attack for Manipulating Text-to-Image Generative Models [54.19289900203071]
The rise in popularity of text-to-image generative artificial intelligence has attracted widespread public interest.
We demonstrate that this technology can be attacked to generate content that subtly manipulates its users.
We propose a Backdoor Attack on text-to-image Generative Models (BAGM)
Our attack is the first to target three popular text-to-image generative models across three stages of the generative process.
arXiv Detail & Related papers (2023-07-31T08:34:24Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form, and generate the privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
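The reversibility claimed in the entry above rests on the invertible network; below is a generic additive-coupling block (an assumption, not the PRO-Face S architecture) showing why such a flow can be run backwards to recover its input exactly.

```python
# Minimal additive-coupling layer: the same sub-network t is used in both
# directions, so the mapping is exactly invertible by subtraction.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """y1 = x1, y2 = x2 + t(x1); inverted by y2 - t(y1)."""
    def __init__(self, channels):
        super().__init__()
        self.t = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x1, x2):       # protection direction
        return x1, x2 + self.t(x1)

    def inverse(self, y1, y2):       # exact recovery direction
        return y1, y2 - self.t(y1)
```

Stacking such blocks, with one stream conditioned on the pre-obfuscated image, yields an exactly reversible obfuscation under the stated assumptions.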
- DIAGNOSIS: Detecting Unauthorized Data Usages in Text-to-image Diffusion Models [79.71665540122498]
We propose a method for detecting unauthorized data usage by planting injected content into the protected dataset.
Specifically, we modify the protected images by adding unique contents on these images using stealthy image warping functions.
By analyzing whether the model has memorized the injected content, we can detect models that had illegally utilized the unauthorized data.
arXiv Detail & Related papers (2023-07-06T16:27:39Z)
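A hedged sketch of planting a stealthy warping signature into protected images, as described in the entry above; the warp here is a simple sinusoidal displacement and only stands in for the paper's actual warping functions.

```python
# Illustrative stealthy warp: a model trained on these images may memorize the
# fixed warp, which can later be probed as evidence of unauthorized data usage.
import torch
import torch.nn.functional as F

def plant_warp_signature(images, amplitude=0.01, freq=8.0):
    n, _, h, w = images.shape
    gy, gx = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    # small periodic displacement acts as the hard-to-notice injected content
    dx = amplitude * torch.sin(freq * torch.pi * gy)
    dy = amplitude * torch.sin(freq * torch.pi * gx)
    grid = torch.stack((gx + dx, gy + dy), dim=-1).expand(n, h, w, 2)
    return F.grid_sample(images, grid, mode="bilinear",
                         padding_mode="border", align_corners=True)
```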
- Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict is exposed for software engineers between developing better AI systems and keeping their distance from the sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm, where images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm can ensure the encrypted images have become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z)
- StyleGAN Encoder-Based Attack for Block Scrambled Face Images [14.505867475659276]
We propose an attack method against block-scrambled face images, particularly images to which Encryption-then-Compression (EtC) has been applied.
Instead of reconstructing images identical to the plain ones from encrypted images, we focus on recovering styles that can reveal identifiable information from the encrypted images.
While state-of-the-art attack methods cannot recover any perceptual information from EtC images, the proposed method discloses personally identifiable information such as hair color, skin color, eyeglasses, gender, etc.
arXiv Detail & Related papers (2022-09-16T14:12:39Z)
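A minimal sketch of the encoder-based idea from the entry above, assuming the attacker trains an encoder on plain/encrypted pairs so that a frozen, pretrained face generator reproduces identifiable styles; all module names are placeholders rather than the paper's implementation.

```python
# Hedged sketch: train an encoder to map EtC-encrypted faces to style vectors
# of a frozen generator, supervised by the corresponding plain faces.
import torch.nn.functional as F

def encoder_training_step(encrypted, plain, encoder, generator, opt):
    styles = encoder(encrypted)          # styles predicted from the encrypted image
    recon = generator(styles)            # generator weights stay frozen
    loss = F.mse_loss(recon, plain)      # match the plain training faces
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```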
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
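A basic iterative masking loop in the spirit of the TIP-IM entry above, assuming `fr_model` is a face-embedding network and `target_face` a decoy identity; the actual method imposes additional constraints (e.g., on naturalness) beyond this simple loop.

```python
# Hedged sketch: iteratively craft a bounded identity mask that moves the
# protected face's embedding toward a target identity.
import torch
import torch.nn.functional as F

def identity_mask(face, target_face, fr_model, eps=8/255, alpha=1/255, steps=20):
    x = face.clone()
    target_emb = fr_model(target_face).detach()
    for _ in range(steps):
        x.requires_grad_(True)
        loss = 1 - F.cosine_similarity(fr_model(x), target_emb).mean()
        grad = torch.autograd.grad(loss, x)[0]
        with torch.no_grad():
            x = x - alpha * grad.sign()                  # targeted step
            x = face + (x - face).clamp(-eps, eps)       # bound the mask strength
            x = x.clamp(0, 1)
    return (x - face).detach()                           # the adversarial identity mask
```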
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)