Fawkes: Protecting Privacy against Unauthorized Deep Learning Models
- URL: http://arxiv.org/abs/2002.08327v2
- Date: Tue, 23 Jun 2020 03:54:20 GMT
- Title: Fawkes: Protecting Privacy against Unauthorized Deep Learning Models
- Authors: Shawn Shan, Emily Wenger, Jiayun Zhang, Huiying Li, Haitao Zheng, Ben Y. Zhao
- Abstract summary: Fawkes is a system that helps individuals inoculate their images against unauthorized facial recognition models.
We experimentally demonstrate that Fawkes provides 95+% protection against user recognition.
We achieve 100% success in experiments against today's state-of-the-art facial recognition services.
- Score: 34.04323550970413
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Today's proliferation of powerful facial recognition systems poses a real
threat to personal privacy. As Clearview.ai demonstrated, anyone can canvass the
Internet for data and train highly accurate facial recognition models of
individuals without their knowledge. We need tools to protect ourselves from
potential misuses of unauthorized facial recognition systems. Unfortunately, no
practical or effective solutions exist.
In this paper, we propose Fawkes, a system that helps individuals inoculate
their images against unauthorized facial recognition models. Fawkes achieves
this by helping users add imperceptible pixel-level changes (we call them
"cloaks") to their own photos before releasing them. When used to train facial
recognition models, these "cloaked" images produce functional models that
consistently cause normal images of the user to be misidentified. We
experimentally demonstrate that Fawkes provides 95+% protection against user
recognition regardless of how trackers train their models. Even when clean,
uncloaked images are "leaked" to the tracker and used for training, Fawkes can
still maintain an 80+% protection success rate. We achieve 100% success in
experiments against today's state-of-the-art facial recognition services.
Finally, we show that Fawkes is robust against a variety of countermeasures
that try to detect or disrupt image cloaks.
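As a rough illustration of the cloaking idea described above, the following is a minimal, hypothetical sketch of optimizing a cloak in feature space: a small perturbation is pushed toward a different identity's embedding while the pixel change stays within a budget. The feature extractor `phi`, the Adam optimizer, and the simple L-infinity budget are illustrative assumptions; the paper itself bounds the perturbation with a DSSIM perceptual metric and selects a dissimilar target identity.

```python
# Hypothetical sketch of feature-space cloaking in the spirit of Fawkes.
# `phi` stands in for a pre-trained face-embedding model; `x` and `x_target`
# are image tensors in [0, 1] (the target comes from a different identity).
import torch

def compute_cloak(x, x_target, phi, budget=0.05, steps=200, lr=0.01):
    """Optimize a small perturbation so phi(x + delta) moves toward
    phi(x_target) while the pixel change stays within `budget`."""
    delta = torch.zeros_like(x, requires_grad=True)
    target_feat = phi(x_target).detach()          # fixed target embedding
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        cloaked = torch.clamp(x + delta, 0.0, 1.0)
        loss = torch.nn.functional.mse_loss(phi(cloaked), target_feat)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation imperceptible (simple L-inf budget here;
        # the paper uses a DSSIM perceptual bound instead).
        with torch.no_grad():
            delta.clamp_(-budget, budget)
    return torch.clamp(x + delta.detach(), 0.0, 1.0)
```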
Related papers
- PuFace: Defending against Facial Cloaking Attacks for Facial Recognition Models [1.455585466338228]
Recently proposed facial cloaking attacks add invisible perturbations (cloaks) to facial images to protect users from being recognized by unauthorized facial recognition models.
This paper introduces PuFace, an image purification system leveraging the generalization ability of neural networks to diminish the impact of cloaks.
Our empirical experiment shows that PuFace can effectively defend against two state-of-the-art facial cloaking attacks, reducing the average attack success rate from 69.84% to 7.61%.
arXiv Detail & Related papers (2024-06-04T12:19:09Z)
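As a rough sketch of the purification idea summarized in the PuFace entry above (not the paper's actual pipeline), one could run every possibly-cloaked training face through a pre-trained purifier network before training the recognition model; the `purifier` module below is a hypothetical placeholder.

```python
# Hypothetical sketch of input purification before model training,
# in the spirit of PuFace. `purifier` stands in for any pre-trained
# network that maps (possibly cloaked) faces back toward natural images.
import torch

@torch.no_grad()
def purify_dataset(images, purifier, batch_size=64):
    """Run every training face through the purifier to weaken cloaks."""
    purifier.eval()
    cleaned = []
    for i in range(0, images.shape[0], batch_size):
        batch = images[i:i + batch_size]
        cleaned.append(torch.clamp(purifier(batch), 0.0, 1.0))
    return torch.cat(cleaned, dim=0)

# The purified images would then be used to train the face recognition
# model exactly as the original (cloaked) images would have been.
```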
- Privacy-preserving Optics for Enhancing Protection in Face De-identification [60.110274007388135]
We propose a hardware-level face de-identification method to solve this vulnerability.
We also propose an anonymization framework that generates a new face using the privacy-preserving image, face heatmap, and a reference face image from a public dataset as input.
arXiv Detail & Related papers (2024-03-31T19:28:04Z)
- Diff-Privacy: Diffusion-based Face Privacy Protection [58.1021066224765]
In this paper, we propose a novel face privacy protection method based on diffusion models, dubbed Diff-Privacy.
Specifically, we train our proposed multi-scale image inversion module (MSI) to obtain a set of SDM format conditional embeddings of the original image.
Based on the conditional embeddings, we design corresponding embedding scheduling strategies and construct different energy functions during the denoising process to achieve anonymization and visual identity information hiding.
arXiv Detail & Related papers (2023-09-11T09:26:07Z)
- DiffProtect: Generate Adversarial Examples with Diffusion Models for Facial Privacy Protection [64.77548539959501]
DiffProtect produces more natural-looking encrypted images than state-of-the-art methods.
It achieves significantly higher attack success rates, e.g., 24.5% and 25.1% absolute improvements on the CelebA-HQ and FFHQ datasets.
arXiv Detail & Related papers (2023-05-23T02:45:49Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technology standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
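A simplified, hypothetical sketch of the person-specific (class-wise) universal mask idea from the OPOM entry above: one shared perturbation is optimized over several photos of the same person so that their embeddings move away from the person's original features. The away-from-self MSE loss, the `phi` embedding model, and the L-infinity budget are simplifications of the paper's feature-subspace formulation.

```python
# Hypothetical sketch of a person-specific (class-wise) universal mask
# in the spirit of OPOM. `photos` is a batch of images of one person;
# a single mask is shared across all of them.
import torch

def universal_mask(photos, phi, budget=0.03, steps=300, lr=0.005):
    mask = torch.zeros_like(photos[:1], requires_grad=True)  # one shared mask
    originals = phi(photos).detach()                         # clean embeddings
    opt = torch.optim.Adam([mask], lr=lr)
    for _ in range(steps):
        protected = torch.clamp(photos + mask, 0.0, 1.0)
        # Maximize distance from the person's own embeddings,
        # i.e. minimize the negative mean squared distance.
        loss = -torch.nn.functional.mse_loss(phi(protected), originals)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            mask.clamp_(-budget, budget)   # keep the mask imperceptible
    return mask.detach()
```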
- Does a Face Mask Protect my Privacy?: Deep Learning to Predict Protected Attributes from Masked Face Images [0.6562256987706128]
We train and apply a CNN based on the ResNet-50 architecture with 20,003 synthetic masked images.
We show that there is no significant difference in privacy invasiveness when a mask is worn.
Our proposed approach can serve as a baseline utility to evaluate the privacy-invasiveness of artificial intelligence systems.
arXiv Detail & Related papers (2021-12-15T04:46:19Z)
- Data Poisoning Won't Save You From Facial Recognition [1.14219428942199]
Data poisoning has been proposed as a compelling defense against facial recognition models trained on Web-scraped pictures.
We demonstrate that this strategy provides a false sense of security.
We evaluate two systems for poisoning attacks against large-scale facial recognition.
arXiv Detail & Related papers (2021-06-28T17:06:19Z)
- Oriole: Thwarting Privacy against Trustworthy Deep Learning Models [16.224149190291048]
We present Oriole, a system that combines the advantages of data poisoning attacks and evasion attacks.
Our proposed Oriole system effectively interferes with the performance of the Fawkes system, achieving promising attack results.
arXiv Detail & Related papers (2021-02-23T05:33:55Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)