Oriole: Thwarting Privacy against Trustworthy Deep Learning Models
- URL: http://arxiv.org/abs/2102.11502v1
- Date: Tue, 23 Feb 2021 05:33:55 GMT
- Title: Oriole: Thwarting Privacy against Trustworthy Deep Learning Models
- Authors: Liuqiao Chen, Hu Wang, Benjamin Zi Hao Zhao, Minhui Xue and Haifeng
Qian
- Abstract summary: We present Oriole, a system that combines the advantages of data poisoning attacks and evasion attacks.
The proposed Oriole system effectively interferes with the protection offered by Fawkes, achieving strong attack results.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have achieved unprecedented success in face
recognition, to the point that anyone can crawl others' photos from the
Internet without explicit permission and train high-precision face recognition
models on them, a serious violation of privacy. Recently, a well-known system
named Fawkes (published in USENIX Security 2020) claimed that this privacy
threat can be neutralized by uploading cloaked user images instead of the
originals. In this paper, we present Oriole, a system that combines the
advantages of data poisoning attacks and evasion attacks to thwart the
protection offered by Fawkes, by training the attacker's face recognition
model on multi-cloaked images generated by Oriole. Consequently, the face
recognition accuracy of the attack model is maintained while the weaknesses of
Fawkes are exposed. Experimental results show that the proposed Oriole system
effectively interferes with the performance of the Fawkes system and achieves
strong attack results. Our ablation study highlights the principal factors
that affect the performance of the Oriole system: the DSSIM perturbation
budget, the ratio of leaked clean user images, and the number of multi-cloaks
per uncloaked image. We also identify and discuss at length the
vulnerabilities of Fawkes. We hope that the methodology presented in this
paper will alert the security community to the need for more robust
privacy-preserving deep learning models.
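The ablation factors above map onto concrete quantities. Below is a minimal, hedged Python sketch (not the authors' released code) of how an attacker in this threat model might assemble a training set: each cloak is checked against a DSSIM budget, where DSSIM = (1 - SSIM) / 2 and 0.007 is the default budget reported for Fawkes, and a fraction of leaked clean images is mixed with several multi-cloaked variants of every image. The `cloak_fn` callable and all parameter defaults are assumptions of this sketch.

```python
# Hedged sketch of the quantities Oriole's ablation varies: the DSSIM
# perturbation budget, the leaked-clean-image ratio, and the number of
# multi-cloaks per image. Names and defaults are illustrative.
import numpy as np
from skimage.metrics import structural_similarity

def dssim(original: np.ndarray, cloaked: np.ndarray) -> float:
    """DSSIM = (1 - SSIM) / 2 for two HxWxC float images in [0, 1]."""
    ssim = structural_similarity(original, cloaked,
                                 channel_axis=-1, data_range=1.0)
    return (1.0 - ssim) / 2.0

def build_attack_set(user_images, cloak_fn, n_cloaks=5,
                     leak_ratio=0.1, budget=0.007, seed=0):
    """Mix leaked clean images with multi-cloaked variants.

    `cloak_fn` is a hypothetical stand-in for a cloaking routine such
    as Fawkes; it should return a differently cloaked image per call.
    """
    rng = np.random.default_rng(seed)
    n_leaked = int(leak_ratio * len(user_images))
    leaked_idx = rng.choice(len(user_images), n_leaked, replace=False)
    leaked = [user_images[i] for i in leaked_idx]
    cloaked = []
    for img in user_images:
        for _ in range(n_cloaks):
            c = cloak_fn(img)
            # Each multi-cloak must stay within the perceptual budget.
            assert dssim(img, c) <= budget, "cloak exceeds DSSIM budget"
            cloaked.append(c)
    return leaked + cloaked
```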
Related papers
- Anonymization Prompt Learning for Facial Privacy-Preserving Text-to-Image Generation [56.46932751058042]
We train a learnable prompt prefix for text-to-image diffusion models, which forces the model to generate anonymized facial identities.
Experiments demonstrate the successful anonymization performance of APL, which anonymizes any specific individuals without compromising the quality of non-identity-specific image generation.
arXiv Detail & Related papers (2024-05-27T07:38:26Z)
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technology standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- Face Anonymization by Manipulating Decoupled Identity Representation [5.26916168336451]
We propose a novel approach that protects the identity information in facial images from leakage with only slight modification.
Specifically, we disentangle identity representation from other facial attributes leveraging the power of generative adversarial networks.
We evaluate the disentanglement ability of our model and propose an effective method for identity anonymization, namely Anonymous Identity Generation (AIG).
arXiv Detail & Related papers (2021-05-24T07:39:54Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks; a hedged sketch of this family of iterative masks appears after this list.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
- Fawkes: Protecting Privacy against Unauthorized Deep Learning Models [34.04323550970413]
Fawkes is a system that helps individuals inoculate their images against unauthorized facial recognition models.
We experimentally demonstrate that Fawkes provides 95%+ protection against user recognition.
We achieve 100% success in experiments against today's state-of-the-art facial recognition services.
arXiv Detail & Related papers (2020-02-19T18:00:22Z)
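Several entries above (TIP-IM, OPOM, and Fawkes itself) share one primitive: an iterative, gradient-based perturbation that moves a face embedding toward a wrong identity while an L-infinity ball keeps the change imperceptible. The following is a generic PGD-style sketch of that primitive, not a reproduction of any single paper's method; `embed` stands in for an arbitrary face embedding network, and the step size, budget, and iteration count are illustrative.

```python
# Generic PGD-style identity mask (illustrative; see the hedges above).
import torch
import torch.nn.functional as F

def identity_mask(image, target_emb, embed,
                  eps=8 / 255, alpha=1 / 255, steps=40):
    """Perturb `image` (1xCxHxW, values in [0, 1]) so that `embed`
    maps it close to `target_emb`, with the perturbation bounded in
    L-infinity norm by `eps`."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        emb = embed(torch.clamp(image + delta, 0.0, 1.0))
        # Descend on (1 - cosine similarity) to pull the embedding
        # toward the decoy identity.
        loss = 1.0 - F.cosine_similarity(emb, target_emb).mean()
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
            delta.grad.zero_()
    return torch.clamp(image + delta, 0.0, 1.0).detach()
```

In this framing, a multi-cloak amounts to running such a procedure several times on the same source image with different decoy targets.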