A Fusion-Denoising Attack on InstaHide with Data Augmentation
- URL: http://arxiv.org/abs/2105.07754v1
- Date: Mon, 17 May 2021 11:58:16 GMT
- Title: A Fusion-Denoising Attack on InstaHide with Data Augmentation
- Authors: Xinjian Luo, Xiaokui Xiao, Yuncheng Wu, Juncheng Liu, Beng Chin Ooi
- Abstract summary: InstaHide is a mechanism for protecting private training images in collaborative learning.
In recent work, Carlini et al. show that it is possible to reconstruct private images from the encrypted dataset generated by InstaHide.
This paper presents an attack for recovering private images from the outputs of InstaHide even when data augmentation is present.
- Score: 22.841904122807488
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: InstaHide is a state-of-the-art mechanism for protecting private training
images in collaborative learning. It works by mixing multiple private images
and modifying them in such a way that their visual features are no longer
distinguishable to the naked eye, without significantly degrading the accuracy
of training. In recent work, however, Carlini et al. show that it is possible
to reconstruct private images from the encrypted dataset generated by
InstaHide, by exploiting the correlations among the encrypted images.
Nevertheless, Carlini et al.'s attack relies on the assumption that each
private image is used without modification when mixed with other private
images. As a consequence, it could be easily defeated by incorporating data
augmentation into InstaHide. This leads to a natural question: is InstaHide
with data augmentation secure?
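For concreteness, the following is a minimal NumPy sketch of InstaHide-style mixing as described above (k-image mixup followed by random pixel-wise sign flips). The function name, the Dirichlet draw for the mixing weights, and the [-1, 1] pixel scaling are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def instahide_encrypt(private_img, other_imgs, rng=np.random.default_rng()):
    """Minimal sketch of InstaHide-style mixing (not the authors' code).

    `private_img` and every array in `other_imgs` are float images with the
    same shape, scaled to [-1, 1]. The image is mixed with the others using
    random coefficients that sum to 1, then each pixel's sign is flipped
    independently at random, which hides the visual features.
    """
    imgs = [private_img] + list(other_imgs)
    lam = rng.dirichlet(np.ones(len(imgs)))            # random mixing weights, sum to 1
    mixed = sum(l * x for l, x in zip(lam, imgs))      # mixup of k images
    signs = rng.choice([-1.0, 1.0], size=mixed.shape)  # random pixel-wise sign mask
    return signs * mixed

# With data augmentation, a random crop/flip would be applied to
# `private_img` before each mixing, which is what defeats attacks that
# assume the same unmodified image appears in every mixture.
```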
This paper provides a negative answer to the above question, by presenting an
attack for recovering private images from the outputs of InstaHide even when
data augmentation is present. The basic idea of our attack is to use a
comparative network to identify encrypted images that are likely to correspond
to the same private image, and then employ a fusion-denoising network for
restoring the private image from the encrypted ones, taking into account the
effects of data augmentation. Extensive experiments demonstrate the
effectiveness of the proposed attack in comparison to Carlini et al.'s attack.
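To make the two-stage pipeline concrete, the PyTorch sketch below shows one plausible shape for the two components named in the abstract: a comparative network that scores whether two encrypted images share a private image, and a fusion-denoising network that maps a group of matched encrypted images back to an estimate of the private image. The layer choices, fixed group size, and channel-stacking fusion are assumptions for illustration, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

class ComparativeNet(nn.Module):
    """Toy Siamese-style scorer: outputs a value near 1 if two encrypted
    images are likely to contain the same underlying private image."""
    def __init__(self, channels=3):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Conv2d(channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.score = nn.Linear(128, 1)

    def forward(self, enc_a, enc_b):
        feats = torch.cat([self.embed(enc_a), self.embed(enc_b)], dim=1)
        return torch.sigmoid(self.score(feats))

class FusionDenoisingNet(nn.Module):
    """Toy fusion-denoising net: fuses a fixed-size group of encrypted
    images (stacked along the channel axis) and regresses the private image."""
    def __init__(self, group_size=4, channels=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(group_size * channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels, 3, padding=1), nn.Tanh(),  # output in [-1, 1]
        )

    def forward(self, enc_group):  # enc_group: (B, group_size*channels, H, W)
        return self.net(enc_group)
```

In the attack as described, the comparative network would first group encrypted images that appear to derive from the same private image, and the fusion-denoising network would then reconstruct that image from each group while compensating for the augmentations applied before mixing.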
Related papers
- Clean Image May be Dangerous: Data Poisoning Attacks Against Deep Hashing [71.30876587855867]
We show that even clean query images can be dangerous, inducing malicious target retrieval results, like undesired or illegal images.
Specifically, we first train a surrogate model to simulate the behavior of the target deep hashing model.
Then, a strict gradient matching strategy is proposed to generate the poisoned images.
arXiv Detail & Related papers (2025-03-27T07:54:27Z) - Federated Learning Nodes Can Reconstruct Peers' Image Data [27.92271597111756]
Federated learning (FL) is a privacy-preserving machine learning framework that enables multiple nodes to train models on their local data.
Prior work has shown that the gradient-sharing steps in FL can be vulnerable to data reconstruction attacks from an honest-but-curious central server.
We show that an honest-but-curious node/client can also launch attacks to reconstruct peers' image data in a centralized system, presenting a severe privacy risk (a generic gradient-matching sketch of this idea appears after this list).
arXiv Detail & Related papers (2024-10-07T00:18:35Z) - PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via
Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form, and generate the privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z) - Human-imperceptible, Machine-recognizable Images [76.01951148048603]
A major conflict faced by software engineers is exposed: between developing better AI systems and keeping their distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm, where images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm can ensure the encrypted images have become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z) - Generative Model-Based Attack on Learnable Image Encryption for
Privacy-Preserving Deep Learning [14.505867475659276]
We propose a novel generative model-based attack on learnable image encryption methods proposed for privacy-preserving deep learning.
We use two state-of-the-art generative models: a StyleGAN-based model and a latent diffusion-based one.
Results show that images reconstructed by the proposed method have perceptual similarities to plain images.
arXiv Detail & Related papers (2023-03-09T05:00:17Z) - ConfounderGAN: Protecting Image Data Privacy with Causal Confounder [85.6757153033139]
We propose ConfounderGAN, a generative adversarial network (GAN) that can make personal image data unlearnable to protect the data privacy of its owners.
Experiments are conducted in six image classification datasets, consisting of three natural object datasets and three medical datasets.
arXiv Detail & Related papers (2022-12-04T08:49:14Z) - Syfer: Neural Obfuscation for Private Data Release [58.490998583666276]
We develop Syfer, a neural obfuscation method to protect against re-identification attacks.
Syfer composes trained layers with random neural networks to encode the original data.
It maintains the ability to predict diagnoses from the encoded data.
arXiv Detail & Related papers (2022-01-28T20:32:04Z) - InstaHide's Sample Complexity When Mixing Two Private Images [14.861717977097417]
InstaHide is a scheme to protect training data privacy with only minor effects on test accuracy.
We study recent attacks on InstaHide and present a unified framework to understand and analyze these attacks.
Our results demonstrate that InstaHide is not information-theoretically secure but computationally secure in the worst case.
arXiv Detail & Related papers (2020-11-24T03:41:03Z) - InstaHide: Instance-hiding Schemes for Private Distributed Learning [45.26955355159282]
InstaHide is a simple encryption of training images, which can be plugged into existing distributed deep learning pipelines.
InstaHide encrypts each training image with a "one-time secret key" which consists of mixing a number of randomly chosen images.
arXiv Detail & Related papers (2020-10-06T14:43:23Z) - InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2× (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)
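As flagged in the federated-learning entry above, reconstruction from shared gradients is the underlying threat in that line of work. The following is a minimal sketch of the generic gradient-matching idea behind such attacks (in the spirit of "deep leakage from gradients"); the function name, optimizer settings, and loss choice are illustrative, and this is not the cited paper's specific method.

```python
import torch
import torch.nn.functional as F

def gradient_matching_reconstruction(model, shared_grads, labels, img_shape,
                                     steps=300, lr=0.1):
    """Generic gradient-matching reconstruction sketch.

    An honest-but-curious party that observes the gradients a peer shared
    for one batch optimizes dummy images until the gradients they induce
    match the observed ones, recovering an approximation of the batch.
    """
    dummy = torch.randn(img_shape, requires_grad=True)   # candidate images
    opt = torch.optim.Adam([dummy], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(model(dummy), labels)
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        # Distance between induced gradients and the gradients the peer shared.
        match = sum(((g - s) ** 2).sum() for g, s in zip(grads, shared_grads))
        match.backward()
        opt.step()
    return dummy.detach()
```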