Attacks on Image Encryption Schemes for Privacy-Preserving Deep Neural
Networks
- URL: http://arxiv.org/abs/2004.13263v2
- Date: Wed, 29 Apr 2020 06:22:13 GMT
- Title: Attacks on Image Encryption Schemes for Privacy-Preserving Deep Neural
Networks
- Authors: Alex Habeen Chang, Benjamin M. Case
- Abstract summary: Recent novel encryption techniques for performing machine learning using deep neural nets on images have been proposed by Tanaka and by Sirichotedumrong, Kinoshita, and Kiya.
We present new chosen-plaintext and ciphertext-only attacks against both of these proposed image encryption schemes.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Privacy preserving machine learning is an active area of research usually
relying on techniques such as homomorphic encryption or secure multiparty
computation. Novel encryption techniques for performing machine learning
using deep neural nets on images have recently been proposed by Tanaka and by
Sirichotedumrong, Kinoshita, and Kiya. We present new chosen-plaintext and
ciphertext-only attacks against both of these proposed image encryption schemes
and demonstrate the attacks' effectiveness on several examples.
Related papers
- Cryptanalysis and improvement of multimodal data encryption by
machine-learning-based system [0.0]
New encryption algorithms are needed to accommodate the varied requirements of this field.
The best approach to analyzing an encryption algorithm is to identify a practical and efficient technique to break it.
arXiv Detail & Related papers (2024-02-24T10:02:21Z)
- PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via
Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form and to generate a privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z) - Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict is exposed for software engineers between developing better AI systems and keeping their distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm, where images are encrypted to become "human-imperceptible, machine-recognizable."
We show that the proposed paradigm can ensure the encrypted images have become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- Generative Model-Based Attack on Learnable Image Encryption for
Privacy-Preserving Deep Learning [14.505867475659276]
We propose a novel generative model-based attack on learnable image encryption methods proposed for privacy-preserving deep learning.
We use two state-of-the-art generative models: a StyleGAN-based model and a latent-diffusion-based one.
Results show that images reconstructed by the proposed method have perceptual similarities to plain images.
arXiv Detail & Related papers (2023-03-09T05:00:17Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate the face privacy protection from a technology standpoint based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- Controlled Caption Generation for Images Through Adversarial Attacks [85.66266989600572]
We study adversarial examples for vision-and-language models, which typically adopt a Convolutional Neural Network (CNN) for image feature extraction and a Recurrent Neural Network (RNN) for caption generation.
In particular, we investigate attacks on the visual encoder's hidden layer that is fed to the subsequent recurrent network.
We propose a GAN-based algorithm for crafting adversarial examples for neural image captioning that mimics the internal representation of the CNN.
arXiv Detail & Related papers (2021-07-07T07:22:41Z)
- TenSEAL: A Library for Encrypted Tensor Operations Using Homomorphic
Encryption [0.0]
We present TenSEAL, an open-source library for Privacy-Preserving Machine Learning using Homomorphic Encryption.
We show that an encrypted convolutional neural network can be evaluated in less than a second, using less than half a megabyte of communication.
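The idea of computing on ciphertexts can be sketched with a toy additively homomorphic scheme (a one-time-pad-style construction for illustration only; TenSEAL actually uses the lattice-based CKKS and BFV schemes): adding two ciphertexts yields a ciphertext of the sum of the plaintexts, so a server can compute on data it cannot read. All names below (`P`, `keygen`, `enc`, `dec`) are hypothetical.

```python
# Toy additively homomorphic encryption: Enc(m, k) = (m + k) mod P.
import random

P = 2**31 - 1  # toy modulus; plaintexts must stay well below P

def keygen():
    return random.randrange(P)

def enc(m, k):
    return (m + k) % P

def dec(c, k):
    return (c - k) % P

# Client encrypts two values under independent keys; the server adds the
# ciphertexts without seeing m1 or m2; the client decrypts with the key sum.
m1, m2 = 20, 22
k1, k2 = keygen(), keygen()
c_sum = (enc(m1, k1) + enc(m2, k2)) % P
total = dec(c_sum, (k1 + k2) % P)  # equals m1 + m2
```

Real homomorphic schemes additionally support multiplication and vectorized operations, which is what makes evaluating an encrypted convolutional network feasible.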
arXiv Detail & Related papers (2021-04-07T14:32:38Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We propose a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- AdvFoolGen: Creating Persistent Troubles for Deep Classifiers [17.709146615433458]
We present a new black-box attack termed AdvFoolGen, which can generate attacking images from the same feature space as that of the natural images.
We demonstrate the effectiveness and robustness of our attack in the face of state-of-the-art defense techniques.
arXiv Detail & Related papers (2020-07-20T21:27:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.