Unveiling Hidden Visual Information: A Reconstruction Attack Against Adversarial Visual Information Hiding
- URL: http://arxiv.org/abs/2408.04261v1
- Date: Thu, 8 Aug 2024 06:58:48 GMT
- Title: Unveiling Hidden Visual Information: A Reconstruction Attack Against Adversarial Visual Information Hiding
- Authors: Jonggyu Jang, Hyeonsu Lyu, Seongjin Hwang, Hyun Jong Yang
- Abstract summary: A representative image encryption method is the adversarial visual information hiding (AVIH).
In the AVIH method, the type-I adversarial example approach creates images that appear completely different but are still recognized by machines as the original ones.
We introduce a dual-strategy DR attack against the AVIH encryption method that incorporates (1) a generative-adversarial loss and (2) an augmented identity loss.
- Score: 6.649753747542211
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the security vulnerabilities of adversarial-example-based image encryption by executing data reconstruction (DR) attacks on encrypted images. A representative image encryption method is the adversarial visual information hiding (AVIH), which uses type-I adversarial example training to protect gallery datasets used in image recognition tasks. In the AVIH method, the type-I adversarial example approach creates images that appear completely different but are still recognized by machines as the original ones. Additionally, the AVIH method can restore encrypted images to their original forms using a predefined private key generative model. For the best security, assigning a unique key to each image is recommended; however, storage limitations may necessitate some images sharing the same key model. This raises a crucial security question for AVIH: How many images can safely share the same key model without being compromised by a DR attack? To address this question, we introduce a dual-strategy DR attack against the AVIH encryption method by incorporating (1) generative-adversarial loss and (2) augmented identity loss, which prevent DR from overfitting -- an issue akin to that in machine learning. Our numerical results validate this approach through image recognition and re-identification benchmarks, demonstrating that our strategy can significantly enhance the quality of reconstructed images, thereby requiring fewer key-sharing encrypted images. Our source code to reproduce our results will be available soon.
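The abstract names the two ingredients of the attack but not their exact form. Below is a minimal PyTorch-style sketch of how such a dual loss could be combined; the `reconstructor`, `discriminator`, `feature_extractor`, and `augment` callables and the loss weights are hypothetical, not the authors' implementation.
```python
# Hypothetical sketch of a dual-strategy DR objective: an adversarial (GAN)
# loss plus an identity loss on augmented views. Illustrative only; the
# module names and weights are assumptions, not the paper's implementation.
import torch
import torch.nn.functional as F

def dr_attack_loss(reconstructor, discriminator, feature_extractor,
                   encrypted, augment, lambda_adv=0.1, lambda_id=1.0):
    """Loss for one training step of a data-reconstruction (DR) network."""
    recon = reconstructor(encrypted)  # candidate reconstruction of the plaintext

    # (1) Generative-adversarial loss: push reconstructions toward the
    # natural-image manifold as judged by a discriminator.
    logits = discriminator(recon)
    adv_loss = F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

    # (2) Augmented identity loss: under AVIH the encrypted image carries the
    # original's recognition features, so the reconstruction should match the
    # encrypted input in feature space across random augmentations, which
    # discourages overfitting to a single key model.
    feat_recon = feature_extractor(augment(recon))
    feat_enc = feature_extractor(augment(encrypted))
    id_loss = 1.0 - F.cosine_similarity(feat_recon, feat_enc, dim=-1).mean()

    return lambda_adv * adv_loss + lambda_id * id_loss
```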
Related papers
- Attack GAN (AGAN): A new Security Evaluation Tool for Perceptual Encryption [1.6385815610837167]
Training state-of-the-art (SOTA) deep learning models requires a large amount of data.
Perceptual encryption converts images into an unrecognizable format to protect the sensitive visual information in the training data.
This comes at the cost of a significant reduction in the accuracy of the models.
Adversarial Visual Information Hiding (AVIH) overcomes this drawback to protect image privacy by attempting to create encrypted images that are unrecognizable to the human eye.
arXiv Detail & Related papers (2024-07-09T06:03:32Z)
- Recoverable Privacy-Preserving Image Classification through Noise-like Adversarial Examples [26.026171363346975]
Cloud-based image-related services such as classification have become crucial.
In this study, we propose a novel privacy-preserving image classification scheme.
Encrypted images can be decrypted back into their original form with high fidelity (recoverable) using a secret key.
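As a toy illustration of what key-based recoverability can mean (the paper's actual scheme is more elaborate), a secret key can seed a pseudo-random, noise-like perturbation that only the key holder can regenerate and remove exactly:
```python
# Toy illustration of key-based recoverable "noise-like" encryption; not the
# paper's scheme. The key seeds a PRNG, so the exact perturbation can be
# regenerated and subtracted by anyone holding the key.
import numpy as np

def encrypt(image: np.ndarray, key: int, strength: float = 0.5) -> np.ndarray:
    noise = np.random.default_rng(key).standard_normal(image.shape)
    return image + strength * noise  # looks noise-like without the key

def decrypt(cipher: np.ndarray, key: int, strength: float = 0.5) -> np.ndarray:
    noise = np.random.default_rng(key).standard_normal(cipher.shape)
    return cipher - strength * noise  # same key -> same noise -> exact removal

img = np.random.rand(8, 8)
assert np.allclose(decrypt(encrypt(img, key=42), key=42), img)
```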
arXiv Detail & Related papers (2023-10-19T13:01:58Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form, and generate the privacy-protected image that visually approximates the pre-obfuscated one.
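For readers unfamiliar with INNs, their standard building block is the coupling layer, which is invertible by construction. A minimal additive-coupling sketch follows; the dimensions and small conditioning net are illustrative assumptions, not PRO-Face S's architecture.
```python
# Minimal additive coupling layer, the standard building block of invertible
# neural networks (INNs). Sizes and the conditioning net are illustrative.
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        half = dim // 2
        self.net = nn.Sequential(nn.Linear(half, half), nn.ReLU(),
                                 nn.Linear(half, half))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = x.chunk(2, dim=-1)
        return torch.cat([x1, x2 + self.net(x1)], dim=-1)  # invertible map

    def inverse(self, y: torch.Tensor) -> torch.Tensor:
        y1, y2 = y.chunk(2, dim=-1)
        return torch.cat([y1, y2 - self.net(y1)], dim=-1)  # exact inverse

layer = AdditiveCoupling(dim=8)
x = torch.randn(4, 8)
assert torch.allclose(layer.inverse(layer(x)), x, atol=1e-6)
```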
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
- Human-imperceptible, Machine-recognizable Images [76.01951148048603]
This exposes a major conflict for software engineers between building better AI systems and keeping their distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm, where images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm ensures that encrypted images become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z)
- DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection [64.77548539959501]
DiffProtect produces more natural-looking encrypted images than state-of-the-art methods.
It achieves significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets.
arXiv Detail & Related papers (2023-05-23T02:45:49Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- RiDDLE: Reversible and Diversified De-identification with Latent Encryptor [57.66174700276893]
This work presents RiDDLE, short for Reversible and Diversified De-identification with Latent Encryptor.
Built upon a pre-learned StyleGAN2 generator, RiDDLE manages to encrypt and decrypt the facial identity within the latent space.
arXiv Detail & Related papers (2023-03-09T11:03:52Z)
- Generative Model-Based Attack on Learnable Image Encryption for Privacy-Preserving Deep Learning [14.505867475659276]
We propose a novel generative model-based attack on learnable image encryption methods proposed for privacy-preserving deep learning.
We use two state-of-the-art generative models: a StyleGAN-based model and a latent-diffusion-based one.
Results show that images reconstructed by the proposed method have perceptual similarities to plain images.
arXiv Detail & Related papers (2023-03-09T05:00:17Z)
- StyleGAN Encoder-Based Attack for Block Scrambled Face Images [14.505867475659276]
We propose an attack method against block-scrambled face images, particularly images protected by Encryption-then-Compression (EtC).
Instead of reconstructing identical images as plain ones from encrypted images, we focus on recovering styles that can reveal identifiable information from the encrypted images.
While state-of-the-art attack methods cannot recover any perceptual information from EtC images, the proposed method discloses personally identifiable information such as hair color, skin color, eyeglasses, gender, etc.
arXiv Detail & Related papers (2022-09-16T14:12:39Z)
- Privacy Safe Representation Learning via Frequency Filtering Encoder [7.792424517008007]
Adversarial Representation Learning (ARL) is a common approach to training an encoder that runs on the client side and obfuscates an image.
It is assumed that the obfuscated image can safely be transmitted and used for the task on the server without privacy concerns.
We introduce a novel ARL method enhanced by low-pass filtering, limiting the amount of information that can be encoded in the frequency domain.
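A minimal sketch of the frequency-domain idea, assuming a plain NumPy FFT pipeline and an illustrative cutoff (not the paper's encoder): only the low-frequency band is kept, so less information survives to be encoded.
```python
# Illustrative low-pass filtering in the frequency domain; the cutoff and
# the NumPy pipeline are assumptions, not the paper's ARL encoder.
import numpy as np

def low_pass(image: np.ndarray, cutoff: float = 0.1) -> np.ndarray:
    spectrum = np.fft.fftshift(np.fft.fft2(image))   # center the spectrum
    h, w = image.shape
    yy, xx = np.ogrid[:h, :w]
    radius = np.hypot(yy - h / 2, xx - w / 2)
    mask = radius <= cutoff * min(h, w)              # keep low frequencies only
    return np.fft.ifft2(np.fft.ifftshift(spectrum * mask)).real

filtered = low_pass(np.random.rand(64, 64), cutoff=0.1)
```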
arXiv Detail & Related papers (2022-08-04T06:16:13Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM achieves a 95%+ protection success rate against various state-of-the-art face recognition models; a generic sketch of the iterative idea is given below.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
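TIP-IM's exact objective is not given in this summary; as intuition, the sketch below shows a generic PGD-style targeted identity mask, with an assumed face `encoder`, a target embedding, and illustrative step sizes. It is not TIP-IM itself.
```python
# Generic PGD-style sketch of a targeted identity mask: iteratively perturb a
# face so a recognition model's embedding moves toward a target identity,
# under an L-infinity budget. The encoder and hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def identity_mask(face, target_feat, encoder, eps=8/255, alpha=1/255, steps=40):
    delta = torch.zeros_like(face, requires_grad=True)
    for _ in range(steps):
        feat = encoder(face + delta)
        loss = 1.0 - F.cosine_similarity(feat, target_feat, dim=-1).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()  # step toward the target identity
            delta.clamp_(-eps, eps)             # keep the mask imperceptible
            delta.grad.zero_()
    return (face + delta).detach()
```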
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.