InstaHide's Sample Complexity When Mixing Two Private Images
- URL: http://arxiv.org/abs/2011.11877v2
- Date: Tue, 6 Feb 2024 03:14:09 GMT
- Title: InstaHide's Sample Complexity When Mixing Two Private Images
- Authors: Baihe Huang, Zhao Song, Runzhou Tao, Junze Yin, Ruizhe Zhang, Danyang
Zhuo
- Abstract summary: InstaHide is a scheme to protect training data privacy with only minor effects on test accuracy.
We study recent attacks on InstaHide and present a unified framework to understand and analyze these attacks.
Our results demonstrate that InstaHide is not information-theoretically secure but computationally secure in the worst case.
- Score: 14.861717977097417
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Training neural networks usually requires large amounts of sensitive training
data, and how to protect the privacy of training data has thus become a
critical topic in deep learning research. InstaHide is a state-of-the-art
scheme to protect training data privacy with only minor effects on test
accuracy, and its security has become a salient question. In this paper, we
systematically study recent attacks on InstaHide and present a unified
framework to understand and analyze these attacks. We find that existing
attacks either do not have a provable guarantee or can only recover a single
private image. On the current InstaHide challenge setup, where each InstaHide
image is a mixture of two private images, we present a new algorithm to recover
all the private images with a provable guarantee and optimal sample complexity.
In addition, we also provide a computational hardness result on retrieving all
InstaHide images. Our results demonstrate that InstaHide is not
information-theoretically secure but computationally secure in the worst case,
even when mixing two private images.
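To make the challenge setup concrete, the sketch below illustrates the kind of mixing the abstract refers to: in the two-private-image setting, each InstaHide image is a random convex combination of two private images followed by a random per-pixel sign flip (the "one-time secret key" mentioned in the InstaHide entry below). This is a minimal NumPy sketch under assumed parameters; the coefficient range, image shapes, and the function name instahide_mix_two are illustrative, not the authors' implementation.

```python
import numpy as np

def instahide_mix_two(private_images, rng=None):
    """Illustrative sketch (not the authors' code) of InstaHide-style
    encryption when each output mixes exactly two private images."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(private_images)
    # Pick two distinct private images at random.
    i, j = rng.choice(n, size=2, replace=False)
    # Random mixing coefficients that sum to 1 (mixup-style);
    # the (0.25, 0.75) range is an assumption for the sketch.
    lam = rng.uniform(0.25, 0.75)
    mixed = lam * private_images[i] + (1.0 - lam) * private_images[j]
    # Random pixel-wise sign flip acts as the one-time secret key.
    sign_mask = rng.choice([-1.0, 1.0], size=mixed.shape)
    return sign_mask * mixed

# Usage: produce one encrypted image from a synthetic batch of
# ten 32x32 RGB private images scaled to [-1, 1].
images = np.random.uniform(-1, 1, size=(10, 32, 32, 3))
encrypted = instahide_mix_two(images)
print(encrypted.shape)  # (32, 32, 3)
```

The recovery question studied in the paper is the inverse problem: given many such encrypted outputs, reconstruct the underlying private images, which the authors show is achievable with optimal sample complexity yet computationally hard in the worst case.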
Related papers
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to address this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z) - Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z) - PRO-Face S: Privacy-preserving Reversible Obfuscation of Face Images via
Secure Flow [69.78820726573935]
We name it PRO-Face S, short for Privacy-preserving Reversible Obfuscation of Face images via Secure flow-based model.
In the framework, an Invertible Neural Network (INN) is utilized to process the input image along with its pre-obfuscated form, and generate the privacy-protected image that visually approximates the pre-obfuscated one.
arXiv Detail & Related papers (2023-07-18T10:55:54Z) - Vision Through the Veil: Differential Privacy in Federated Learning for
Medical Image Classification [15.382184404673389]
The proliferation of deep learning applications in healthcare calls for data aggregation across various institutions.
Privacy-preserving mechanisms are paramount in medical image analysis, where the data is sensitive in nature.
This study addresses the need by integrating differential privacy, a leading privacy-preserving technique, into a federated learning framework for medical image classification.
arXiv Detail & Related papers (2023-06-30T16:48:58Z) - Federated Test-Time Adaptive Face Presentation Attack Detection with
Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z) - A Fusion-Denoising Attack on InstaHide with Data Augmentation [22.841904122807488]
InstaHide is a mechanism for protecting private training images in collaborative learning.
In recent work, Carlini et al. show that it is possible to reconstruct private images from the encrypted dataset generated by InstaHide.
This paper presents an attack for recovering private images from the outputs of InstaHide even when data augmentation is present.
arXiv Detail & Related papers (2021-05-17T11:58:16Z) - InstaHide: Instance-hiding Schemes for Private Distributed Learning [45.26955355159282]
InstaHide is a simple encryption of training images, which can be plugged into existing distributed deep learning pipelines.
InstaHide encrypts each training image with a "one-time secret key" which consists of mixing a number of randomly chosen images.
arXiv Detail & Related papers (2020-10-06T14:43:23Z) - Toward Privacy and Utility Preserving Image Representation [26.768476643200664]
We study the novel problem of creating privacy-preserving image representations with respect to a given utility task.
We propose a principled framework called the Adversarial Image Anonymizer (AIA).
AIA first creates an image representation using a generative model, then enhances the learned image representations using adversarial learning to preserve privacy and utility for a given task.
arXiv Detail & Related papers (2020-09-30T01:25:00Z) - InfoScrub: Towards Attribute Privacy by Targeted Obfuscation [77.49428268918703]
We study techniques that allow individuals to limit the private information leaked in visual data.
We tackle this problem in a novel image obfuscation framework.
We find our approach generates obfuscated images faithful to the original input images, and additionally increases uncertainty by 6.2$\times$ (or up to 0.85 bits) over the non-obfuscated counterparts.
arXiv Detail & Related papers (2020-05-20T19:48:04Z)