Image Transformation Network for Privacy-Preserving Deep Neural Networks
and Its Security Evaluation
- URL: http://arxiv.org/abs/2008.03143v1
- Date: Fri, 7 Aug 2020 12:58:45 GMT
- Title: Image Transformation Network for Privacy-Preserving Deep Neural Networks
and Its Security Evaluation
- Authors: Hiroki Ito, Yuma Kinoshita, Hitoshi Kiya
- Abstract summary: We propose a transformation network for generating visually-protected images for privacy-preserving DNNs.
The proposed network not only strongly protects visual information but also maintains the image classification accuracy achieved with plain images.
- Score: 17.134566958534634
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a transformation network for generating visually-protected images
for privacy-preserving DNNs. The proposed transformation network is trained by
using a plain image dataset so that plain images are transformed into visually
protected ones. Conventional perceptual encryption methods offer weak
visual protection and cause some accuracy degradation in image
classification. In contrast, the proposed network not only strongly
protects visual information but also maintains the classification
accuracy achieved with plain images. In an image
classification experiment using the CIFAR datasets, the proposed network
is demonstrated to strongly protect visual information on plain images
without any performance degradation. In addition, an experiment shows that
the visually protected images are robust against a DNN-based attack called
the inverse transformation network attack (ITN-Attack).
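The core idea, an information-preserving transformation that scrambles visual content while leaving class information learnable, can be illustrated with a minimal sketch. Here a fixed random orthogonal map stands in for the paper's learned transformation network, and logistic regression stands in for the classifier DNN; both are simplifications, not the actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image dataset: two classes of flattened 8x8
# "images" (64-dim vectors) whose class means differ slightly.
n, d = 400, 64
X = np.vstack([rng.normal(0.3, 1.0, (n // 2, d)),
               rng.normal(-0.3, 1.0, (n // 2, d))])
y = np.array([1] * (n // 2) + [0] * (n // 2))

# Stand-in for the learned transformation network: a fixed random
# orthogonal map that mixes every pixel into every output pixel.
# It is invertible (information-preserving) but visually scrambling;
# the paper learns such a transform with a DNN instead.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))
X_protected = X @ Q

def train_logreg(X, y, lr=0.1, epochs=200):
    """Minimal logistic-regression classifier (stand-in for the DNN)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        g = p - y                               # gradient of the log-loss
        w -= lr * X.T @ g / len(y)
        b -= lr * g.mean()
    return w, b

def accuracy(X, y, w, b):
    return float((((X @ w + b) > 0).astype(int) == y).mean())

acc_plain = accuracy(X, y, *train_logreg(X, y))
acc_protected = accuracy(X_protected, y, *train_logreg(X_protected, y))
# Gradient descent from w = 0 is equivariant under orthogonal maps,
# so the protected-domain classifier matches the plain one almost exactly.
```

Because the transform is invertible, no class information is destroyed, which is why accuracy on protected images can match accuracy on plain images; the paper's contribution is learning such a transform that is also hard to invert without the network.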
Related papers
- Attack GAN (AGAN): A new Security Evaluation Tool for Perceptual Encryption [1.6385815610837167]
Training state-of-the-art (SOTA) deep learning models requires a large amount of data.
Perceptual encryption converts images into an unrecognizable format to protect the sensitive visual information in the training data.
This comes at the cost of a significant reduction in the accuracy of the models.
Adversarial Visual Information Hiding (AVIH) overcomes this drawback to protect image privacy by creating encrypted images that are unrecognizable to the human eye.
arXiv Detail & Related papers (2024-07-09T06:03:32Z) - Efficient Fine-Tuning with Domain Adaptation for Privacy-Preserving
Vision Transformer [6.476298483207895]
We propose a novel method for privacy-preserving deep neural networks (DNNs) with the Vision Transformer (ViT).
The method allows us not only to train and test models with visually protected images but also to avoid the performance degradation caused by the use of encrypted images.
A domain adaptation method is used to efficiently fine-tune ViT with encrypted images.
arXiv Detail & Related papers (2024-01-10T12:46:31Z) - IMPRESS: Evaluating the Resilience of Imperceptible Perturbations
Against Unauthorized Data Usage in Diffusion-Based Generative AI [52.90082445349903]
Diffusion-based image generation models can create artistic images that mimic the style of an artist or maliciously edit the original images for fake content.
Several attempts have been made to protect the original images from such unauthorized data usage by adding imperceptible perturbations.
In this work, we introduce a purification perturbation platform, named IMPRESS, to evaluate the effectiveness of imperceptible perturbations as a protective measure.
arXiv Detail & Related papers (2023-10-30T03:33:41Z) - PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via
Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form, generating a privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z) - Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict is exposed for software engineers between developing better AI systems and keeping their distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm in which images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm ensures the encrypted images are human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z) - Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z) - Privacy-Preserving Image Classification Using Vision Transformer [16.679394807198]
We propose a privacy-preserving image classification method based on the combined use of encrypted images and the vision transformer (ViT).
Because ViT utilizes patch embedding and position embedding for image patches, this architecture is shown to reduce the influence of block-wise image transformation.
In an experiment, the proposed method for privacy-preserving image classification is demonstrated to outperform state-of-the-art methods in terms of classification accuracy and robustness against various attacks.
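The block-wise image transformations mentioned above can be sketched as a minimal permutation-based perceptual encryption: scramble the order of blocks, then shuffle pixels within each block under a shared key. This is a representative instance of such schemes, not necessarily the exact one the paper evaluates:

```python
import numpy as np

def blockwise_encrypt(img, block=4, key=0):
    """Sketch of a block-wise perceptual encryption: scramble the order
    of non-overlapping blocks, then apply one shared pixel permutation
    inside every block. When the block size equals the ViT patch size,
    patch embedding and position embedding can absorb both permutations.
    """
    rng = np.random.default_rng(key)
    h, w = img.shape
    nb_h, nb_w = h // block, w // block
    # Split the image into (nb_h * nb_w) flattened blocks.
    blocks = (img.reshape(nb_h, block, nb_w, block)
                 .transpose(0, 2, 1, 3)
                 .reshape(nb_h * nb_w, block * block))
    blocks = blocks[rng.permutation(len(blocks))]       # scramble block order
    blocks = blocks[:, rng.permutation(block * block)]  # shuffle pixels per block
    # Reassemble the scrambled blocks into an image of the same shape.
    return (blocks.reshape(nb_h, nb_w, block, block)
                  .transpose(0, 2, 1, 3)
                  .reshape(h, w))

img = np.arange(64, dtype=float).reshape(8, 8)  # toy 8x8 grayscale image
enc = blockwise_encrypt(img, block=4, key=42)
```

Since the transform is a pure pixel permutation, every pixel value survives; only positions change, which is exactly the structure a ViT's patch and position embeddings can compensate for.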
arXiv Detail & Related papers (2022-05-24T12:51:48Z) - Privacy-Preserving Image Classification Using Isotropic Network [14.505867475659276]
We propose a privacy-preserving image classification method that uses encrypted images and an isotropic network such as the vision transformer.
The proposed method allows us not only to apply images without visual information to deep neural networks (DNNs) for both training and testing but also to maintain a high classification accuracy.
arXiv Detail & Related papers (2022-04-16T03:15:54Z) - Semantic-Aware Generation for Self-Supervised Visual Representation
Learning [116.5814634936371]
We advocate for Semantic-aware Generation (SaGe) to facilitate richer semantics rather than details to be preserved in the generated image.
SaGe complements the target network with view-specific features and thus alleviates the semantic degradation brought by intensive data augmentations.
We execute SaGe on ImageNet-1K and evaluate the pre-trained models on five downstream tasks including nearest neighbor test, linear classification, and fine-scaled image recognition.
arXiv Detail & Related papers (2021-11-25T16:46:13Z) - Towards Unsupervised Deep Image Enhancement with Generative Adversarial
Network [92.01145655155374]
We present an unsupervised image enhancement generative adversarial network (UEGAN).
It learns the corresponding image-to-image mapping from a set of images with desired characteristics in an unsupervised manner.
Results show that the proposed model effectively improves the aesthetic quality of images.
arXiv Detail & Related papers (2020-12-30T03:22:46Z) - Defending Adversarial Examples via DNN Bottleneck Reinforcement [20.08619981108837]
This paper presents a reinforcement scheme to alleviate the vulnerability of Deep Neural Networks (DNN) against adversarial attacks.
By reinforcing feature compression while maintaining prediction-relevant information, any redundant information, be it adversarial or not, should be removed from the latent representation.
In order to reinforce the information bottleneck, we introduce the multi-scale low-pass objective and multi-scale high-frequency communication for better frequency steering in the network.
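As a rough single-scale illustration of the low-pass idea (not the paper's multi-scale objective), suppressing high spatial frequencies removes most of a noise-like perturbation while largely preserving a smooth image; the cutoff fraction below is an arbitrary assumption:

```python
import numpy as np

def lowpass(img, keep=0.25):
    """Keep only the lowest `keep` fraction of spatial frequencies
    (illustrative stand-in for a learned low-pass objective)."""
    F = np.fft.fftshift(np.fft.fft2(img))     # center the spectrum
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)      # radial frequency distance
    mask = r <= keep * min(h, w) / 2          # circular low-pass mask
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))

rng = np.random.default_rng(0)
# Smooth "image" plus a high-frequency, adversarial-like perturbation.
clean = np.outer(np.sin(np.linspace(0, np.pi, 32)),
                 np.sin(np.linspace(0, np.pi, 32)))
noise = 0.3 * rng.standard_normal((32, 32))
filtered = lowpass(clean + noise)
```

White noise spreads its energy uniformly over all frequencies, so a low-pass mask discards most of it, while the smooth image concentrates its energy at low frequencies and passes through almost intact.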
arXiv Detail & Related papers (2020-08-12T11:02:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.