Recoverable Privacy-Preserving Image Classification through Noise-like
Adversarial Examples
- URL: http://arxiv.org/abs/2310.12707v1
- Date: Thu, 19 Oct 2023 13:01:58 GMT
- Title: Recoverable Privacy-Preserving Image Classification through Noise-like
Adversarial Examples
- Authors: Jun Liu, Jiantao Zhou, Jinyu Tian, Weiwei Sun
- Abstract summary: Ensuring data privacy in cloud-based image services such as classification has become crucial.
In this study, we propose a novel privacy-preserving image classification scheme.
Encrypted images can be decrypted back into their original form with high fidelity (recoverable) using a secret key.
- Score: 26.026171363346975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the increasing prevalence of cloud computing platforms, ensuring data
privacy during cloud-based image-related services such as classification has
become crucial. In this study, we propose a novel privacy-preserving image
classification scheme that enables classifiers trained in the plaintext domain
to be applied directly to encrypted images, without retraining a dedicated
classifier. Moreover, encrypted images can be decrypted back into their
original form with high fidelity (recoverable) using a secret key.
Specifically, our proposed scheme uses a feature extractor and an encoder to
mask the plaintext image with a newly designed Noise-like Adversarial Example
(NAE). Such an NAE not only gives the encrypted image a noise-like visual
appearance but also compels the target classifier to assign the ciphertext the
same label as the original plaintext image. In the decoding phase, we adopt a
Symmetric Residual Learning (SRL) framework to restore the plaintext image
with minimal degradation. Extensive experiments demonstrate that 1) the
classification accuracy of the classifier trained in the plaintext domain
remains the same in both the ciphertext and plaintext domains; 2) the
encrypted images can be recovered into their original form with an average
PSNR of 51+ dB on the SVHN dataset and 48+ dB on the VGGFace2 dataset; 3) our
system generalizes well on the encryption, decryption, and classification
tasks across datasets different from the training one; and 4) a high level of
security is achieved against three potential threat models. The code is
available at https://github.com/csjunjun/RIC.git.
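The authors' full pipeline (feature extractor, encoder, and SRL decoder) is in the linked repository. The snippet below is only a minimal sketch of the core NAE idea, assuming a pretrained PyTorch classifier and illustrative hyperparameters (`steps`, `lr`): starting from pure noise, the image is optimized until the plaintext-domain classifier assigns it the plaintext image's label.

```python
import torch
import torch.nn.functional as F

def make_nae(classifier, target_label, shape=(1, 3, 32, 32),
             steps=500, lr=0.01):
    """Optimize a pure-noise image until `classifier` labels it `target_label`."""
    classifier.eval()
    nae = torch.randn(shape, requires_grad=True)   # noise-like starting point
    opt = torch.optim.Adam([nae], lr=lr)
    target = torch.tensor([target_label])
    for _ in range(steps):
        opt.zero_grad()
        # Targeted adversarial loss: push the noise toward the plaintext label.
        loss = F.cross_entropy(classifier(nae), target)
        loss.backward()
        opt.step()
    return nae.detach()
```

Note that in the actual scheme the NAE additionally carries encoded features of the plaintext image so that the SRL decoder can recover it with the secret key; this sketch shows only the label-preserving, noise-like property.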
Related papers
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via
Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via a Secure flow-based model.
In the framework, an Invertible Neural Network (INN) processes the input image along with its pre-obfuscated form and generates a privacy-protected image that visually approximates the pre-obfuscated one (see the invertibility sketch after this entry).
arXiv Detail & Related papers (2023-07-18T10:55:54Z)
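Exact reversibility is what distinguishes an INN-based obfuscator: the obfuscation can be undone without loss given the network. Below is a minimal additive coupling block in the RealNVP style, a common INN building block; this is a generic illustration, not PRO-Face S's architecture.

```python
import torch
import torch.nn as nn

class AdditiveCoupling(nn.Module):
    """One additive coupling block: invertible by construction."""
    def __init__(self, channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels // 2, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels // 2, 3, padding=1))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=1)
        return torch.cat([x1, x2 + self.net(x1)], dim=1)   # forward transform

    def inverse(self, y):
        y1, y2 = y.chunk(2, dim=1)
        return torch.cat([y1, y2 - self.net(y1)], dim=1)   # exact inverse

block = AdditiveCoupling(6)
x = torch.randn(1, 6, 32, 32)
assert torch.allclose(block.inverse(block(x)), x, atol=1e-5)  # lossless round-trip
```

Stacking such blocks yields a network whose inverse is exact by construction, which is why the protected image can be mapped back to the original.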
- Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict facing software engineers is exposed: between developing better AI systems and keeping their distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm in which images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm ensures encrypted images are human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z)
- Discriminative Class Tokens for Text-to-Image Diffusion Models [107.98436819341592]
We propose a non-invasive fine-tuning technique that capitalizes on the expressive potential of free-form text.
Our method is fast compared to prior fine-tuning methods and does not require a collection of in-class images.
We evaluate our method extensively, showing that the generated images (i) are more accurate and of higher quality than those of standard diffusion models, (ii) can be used to augment training data in a low-resource setting, and (iii) reveal information about the data used to train the guiding classifier.
arXiv Detail & Related papers (2023-03-30T05:25:20Z)
- Generative Model-Based Attack on Learnable Image Encryption for Privacy-Preserving Deep Learning [14.505867475659276]
We propose a novel generative model-based attack on learnable image encryption methods proposed for privacy-preserving deep learning.
We use two state-of-the-art generative models: a StyleGAN-based model and a latent diffusion-based one.
Results show that images reconstructed by the proposed method have perceptual similarities to plain images (a simplified inversion-style sketch follows this entry).
arXiv Detail & Related papers (2023-03-09T05:00:17Z)
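As a rough illustration of how a generative prior can break a known encryption transform, here is an inversion-style attack sketch: the attacker searches the generator's latent space for an image whose encryption matches the observed ciphertext. `generator`, `enc`, and `z_dim` are placeholder assumptions; the paper itself trains StyleGAN- and latent-diffusion-based models rather than using this plain optimization loop.

```python
import torch

def inversion_attack(generator, enc, ciphertext, z_dim=512,
                     steps=1000, lr=0.05):
    """Search the latent space for an image whose encryption matches the
    ciphertext (assumes `enc` is known to the attacker and differentiable)."""
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        guess = generator(z)                         # candidate plain image
        loss = (enc(guess) - ciphertext).pow(2).mean()
        loss.backward()
        opt.step()
    return generator(z).detach()                     # reconstructed plain image
```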
- ConfounderGAN: Protecting Image Data Privacy with Causal Confounder [85.6757153033139]
We propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners.
Experiments are conducted in six image classification datasets, consisting of three natural object datasets and three medical datasets.
arXiv Detail & Related papers (2022-12-04T08:49:14Z)
- Hiding Images in Deep Probabilistic Models [58.23127414572098]
We describe a different computational framework to hide images in deep probabilistic models.
Specifically, we use a DNN to model the probability density of cover images, and hide a secret image in one particular location of the learned distribution.
We demonstrate the feasibility of this approach, instantiated with SinGAN, in terms of extraction accuracy and model security (a toy sketch follows this entry).
arXiv Detail & Related papers (2022-10-05T13:33:25Z)
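A toy sketch of the hiding idea, under heavy simplification: a small generator is fine-tuned so that one private latent "key" decodes to the secret image, while random latents are nudged toward cover-like statistics. The mean-matching cover term below is a crude stand-in for properly modeling the cover density (the paper uses SinGAN); all sizes and names are illustrative.

```python
import torch
import torch.nn as nn

# Stand-ins: a tiny generator, a random "secret" image, and random cover data.
gen = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 3 * 32 * 32))
z_key = torch.randn(1, 64)               # the private latent key
secret = torch.rand(1, 3 * 32 * 32)      # stand-in for the secret image
covers = torch.rand(256, 3 * 32 * 32)    # stand-in cover-image samples

opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    hide = (gen(z_key) - secret).pow(2).mean()       # key must decode the secret
    z = torch.randn(64, 64)
    # Crude stand-in for matching the cover distribution (the paper models
    # the density properly); here we only match first moments.
    cover_like = (gen(z).mean(0) - covers.mean(0)).pow(2).mean()
    (hide + cover_like).backward()
    opt.step()

recovered = gen(z_key).view(3, 32, 32)   # extraction with the secret key
```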
- EViT: Privacy-Preserving Image Retrieval via Encrypted Vision Transformer in Cloud Computing [9.41257807502252]
We propose a novel paradigm named Encrypted Vision Transformer (EViT), which improves the discriminative representation capability of cipher-images.
EViT achieves both excellent encryption and retrieval performance, outperforming current schemes in terms of retrieval accuracy by large margins while protecting image privacy effectively.
arXiv Detail & Related papers (2022-08-31T07:07:21Z)
- Privacy Safe Representation Learning via Frequency Filtering Encoder [7.792424517008007]
Adversarial Representation Learning (ARL) is a common approach to training an encoder that runs on the client side and obfuscates an image.
It is assumed that the obfuscated image can safely be transmitted and used for the task on the server without privacy concerns.
We introduce a novel ARL method enhanced through low-pass filtering, limiting the amount of information that can be encoded in the frequency domain (a minimal filtering sketch follows this entry).
arXiv Detail & Related papers (2022-08-04T06:16:13Z)
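For intuition, low-pass filtering in the frequency domain takes only a few lines; this is a generic sketch (the `keep_ratio` parameter is an illustrative assumption, not the paper's setting):

```python
import torch

def low_pass(images, keep_ratio=0.25):
    """Zero all but the central `keep_ratio` band of the 2-D spectrum."""
    b, c, h, w = images.shape
    freq = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))
    mask = torch.zeros(h, w)
    kh, kw = int(h * keep_ratio / 2), int(w * keep_ratio / 2)
    mask[h // 2 - kh:h // 2 + kh, w // 2 - kw:w // 2 + kw] = 1.0  # low-freq box
    out = torch.fft.ifft2(torch.fft.ifftshift(freq * mask, dim=(-2, -1)))
    return out.real
```

Only the retained low-frequency band can carry information to the server, which is the knob constraining the ARL encoder.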
- Privacy-Preserving Image Classification Using Isotropic Network [14.505867475659276]
We propose a privacy-preserving image classification method that uses encrypted images and an isotropic network such as the vision transformer.
The proposed method allows us not only to apply images without visual information to deep neural networks (DNNs) for both training and testing, but also to maintain high classification accuracy (an illustrative block-scrambling sketch follows this entry).
arXiv Detail & Related papers (2022-04-16T03:15:54Z)
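Learnable image encryption in this line of work is often block-based, e.g. a secret block permutation aligned with the ViT patch grid; the sketch below shows that flavor of encryption as an assumption for illustration, not the paper's exact transform.

```python
import torch

def block_scramble(img, key, block=16):
    """Encrypt by secretly permuting non-overlapping blocks (key = seed)."""
    c, h, w = img.shape
    assert h % block == 0 and w % block == 0
    # Split into non-overlapping blocks: (c, nh, nw, block, block).
    patches = img.unfold(1, block, block).unfold(2, block, block)
    nh, nw = patches.shape[1], patches.shape[2]
    patches = patches.reshape(c, nh * nw, block, block)
    g = torch.Generator().manual_seed(key)
    perm = torch.randperm(nh * nw, generator=g)   # secret block permutation
    patches = patches[:, perm]                    # torch.argsort(perm) decrypts
    patches = patches.reshape(c, nh, nw, block, block).permute(0, 1, 3, 2, 4)
    return patches.reshape(c, h, w)

encrypted = block_scramble(torch.rand(3, 224, 224), key=42)
```

Because an isotropic network treats the image as a set of patches rather than a spatial hierarchy, such a permutation leaves its accuracy largely intact while destroying visual content.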
- No Token Left Behind: Explainability-Aided Image Classification and Generation [79.4957965474334]
We present a novel explainability-based approach, which adds a loss term to ensure that CLIP focuses on all relevant semantic parts of the input.
Our method yields an improvement in the recognition rate, without additional training or fine-tuning.
arXiv Detail & Related papers (2022-04-11T07:16:39Z)