Ulixes: Facial Recognition Privacy with Adversarial Machine Learning
- URL: http://arxiv.org/abs/2010.10242v2
- Date: Tue, 1 Feb 2022 18:10:16 GMT
- Title: Ulixes: Facial Recognition Privacy with Adversarial Machine Learning
- Authors: Thomas Cilloni, Wei Wang, Charles Walter, Charles Fleming
- Abstract summary: We propose Ulixes, a strategy to generate visually non-invasive facial noise masks that yield adversarial examples.
This is applicable even when a user is unmasked and labeled images are available online.
- Score: 5.665130648960062
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial recognition tools are becoming exceptionally accurate in identifying
people from images. However, this comes at the cost of privacy for users of
online services with photo management (e.g. social media platforms).
Particularly troubling is the ability to leverage unsupervised learning to
recognize faces even when the user has not labeled their images. In this paper
we propose Ulixes, a strategy to generate visually non-invasive facial noise
masks that yield adversarial examples, preventing the formation of identifiable
user clusters in the embedding space of facial encoders. This is applicable
even when a user is unmasked and labeled images are available online. We
demonstrate the effectiveness of Ulixes by showing that various classification
and clustering methods cannot reliably label the adversarial examples we
generate. We also study the effects of Ulixes in various black-box settings and
compare it to the current state of the art in adversarial machine learning.
Finally, we challenge the effectiveness of Ulixes against adversarially trained
models and show that it is robust to countermeasures.
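The paper's own implementation is not shown here; as a rough sketch of the general idea, the PyTorch fragment below (a hypothetical `evasion_mask` helper against a generic `encoder`) searches for a small PGD-style additive mask that pushes an image's embedding away from its identity cluster:

```python
# Hypothetical sketch, not the Ulixes implementation: find a small
# additive mask (||delta||_inf <= eps) that pushes an image's embedding
# away from the user's identity centroid, so clustering in the encoder's
# embedding space no longer groups the user's photos together.
import torch

def evasion_mask(encoder, image, centroid, eps=0.03, alpha=0.005, steps=40):
    """encoder: any face encoder mapping (1, 3, H, W) images to embeddings.
    image: tensor with values in [0, 1]; centroid: the identity's mean embedding."""
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = -torch.norm(encoder(image + delta) - centroid, p=2)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()          # maximize embedding distance
            delta.clamp_(-eps, eps)                      # visually non-invasive budget
            delta.add_(image).clamp_(0, 1).sub_(image)   # keep pixels in [0, 1]
            delta.grad.zero_()
    return delta.detach()
```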
Related papers
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent such fraud at its source, proactive defense techniques have been proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- FACE-AUDITOR: Data Auditing in Facial Recognition Systems [24.082527732931677]
Few-shot-based facial recognition systems have gained increasing attention due to their scalability and ability to work with a few face images.
To prevent the face images from being misused, one straightforward approach is to modify the raw face images before sharing them.
We propose FACE-AUDITOR, a complete toolkit that can query a few-shot-based facial recognition model and determine whether any of a user's face images were used to train the model.
arXiv Detail & Related papers (2023-04-05T23:03:54Z)
- OPOM: Customized Invisible Cloak towards Face Privacy Protection [58.07786010689529]
We investigate face privacy protection from a technical standpoint, based on a new type of customized cloak.
We propose a new method, named one person one mask (OPOM), to generate person-specific (class-wise) universal masks.
The effectiveness of the proposed method is evaluated on both common and celebrity datasets.
arXiv Detail & Related papers (2022-05-24T11:29:37Z)
- An Ensemble Model for Face Liveness Detection [2.322052136673525]
We present a passive method to detect face presentation attacks using an ensemble deep learning technique.
We propose an ensemble method where multiple features of the face and background regions are learned to predict whether the user is bona fide or an attacker.
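As a minimal illustration of such an ensemble (the paper's actual branches and features are not specified here; `LivenessEnsemble` and both branch modules are hypothetical stand-ins):

```python
# Generic two-branch ensemble sketch, not the paper's architecture: one
# branch scores the face crop, one scores the background region, and
# the averaged score gives a single bonafide-vs-attack prediction.
import torch
import torch.nn as nn

class LivenessEnsemble(nn.Module):
    def __init__(self, face_branch: nn.Module, background_branch: nn.Module):
        super().__init__()
        self.face_branch = face_branch              # any CNN emitting one logit
        self.background_branch = background_branch  # likewise, for the background

    def forward(self, face_crop, background):
        scores = torch.stack([self.face_branch(face_crop),
                              self.background_branch(background)])
        return torch.sigmoid(scores.mean(dim=0))    # averaged liveness score
```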
arXiv Detail & Related papers (2022-01-19T12:43:39Z)
- Using a GAN to Generate Adversarial Examples to Facial Image Recognition [2.18624447693809]
Adversarial examples can be created for recognition systems based on deep neural networks.
In this work we use a Generative Adversarial Network (GAN) to create adversarial examples to deceive facial recognition.
Our results show that knowledge distillation can be employed to drastically reduce the size of the resulting model.
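For context, the standard soft-target distillation loss looks like the following sketch (a generic formulation, not necessarily this paper's exact setup):

```python
# Generic knowledge-distillation loss: the small student matches the
# teacher's softened output distribution at temperature T, blended with
# the ordinary cross-entropy on the true labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    soft_student = F.log_softmax(student_logits / T, dim=1)
    # T*T rescales the soft term so its gradients stay comparable across temperatures.
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```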
arXiv Detail & Related papers (2021-11-30T08:50:11Z)
- Anti-Neuron Watermarking: Protecting Personal Data Against Unauthorized Neural Model Training [50.308254937851814]
Personal data (e.g. images) could be exploited inappropriately to train deep neural network models without authorization.
By embedding a watermarking signature into user images via a specialized linear color transformation, any neural model trained on them is imprinted with that signature.
This is the first work to protect users' personal data from unauthorized usage in neural network training.
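A toy sketch of a linear color-transformation watermark conveys the idea (hypothetical `color_watermark` helper and `strength` parameter; not the paper's exact scheme):

```python
# Illustrative sketch: each user applies a fixed, near-identity 3x3
# color matrix (derived from a user-specific key) to their images
# before sharing; a model trained on them inherits that color bias.
import numpy as np

def color_watermark(image, user_key, strength=0.02):
    """image: float array of shape (H, W, 3) with values in [0, 1];
    user_key: integer seed acting as the user's signature."""
    rng = np.random.default_rng(user_key)
    matrix = np.eye(3) + strength * rng.uniform(-1, 1, size=(3, 3))
    watermarked = image @ matrix.T        # apply the transform to every pixel's RGB
    return np.clip(watermarked, 0.0, 1.0)
```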
arXiv Detail & Related papers (2021-09-18T22:10:37Z)
- A Study of Face Obfuscation in ImageNet [94.2949777826947]
In this paper, we explore image obfuscation in the ImageNet challenge.
Most categories in the ImageNet challenge are not people categories; nevertheless, many of the images incidentally contain people.
We benchmark various deep neural networks on face-blurred images and observe a disparate impact on different categories.
Results show that features learned on face-blurred images are equally transferable.
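As a simplified stand-in for the study's annotation-driven pipeline, blurring annotated face boxes might look like this (hypothetical PIL-based helper):

```python
# Illustrative face obfuscation: Gaussian-blur each annotated face box.
# The ImageNet study uses its own annotations and pipeline; this only
# shows the mechanical step.
from PIL import Image, ImageFilter

def blur_faces(image: Image.Image, boxes, radius=10):
    """boxes: iterable of (left, top, right, bottom) face bounding boxes."""
    result = image.copy()
    for box in boxes:
        face = result.crop(box)
        result.paste(face.filter(ImageFilter.GaussianBlur(radius)), box)
    return result
```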
arXiv Detail & Related papers (2021-03-10T17:11:34Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM achieves a protection success rate above 95% against various state-of-the-art face recognition models.
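In spirit, such a mask can be sketched as a targeted embedding-space attack toward a decoy identity (hypothetical `targeted_identity_mask` helper; not the TIP-IM code):

```python
# Hypothetical sketch in the spirit of TIP-IM: iteratively nudge the
# image so its embedding approaches a decoy ("target") identity,
# leading recognizers to match the wrong person.
import torch

def targeted_identity_mask(encoder, image, target_emb, eps=0.03, alpha=0.004, steps=50):
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = torch.norm(encoder(image + delta) - target_emb, p=2)
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend toward the target identity
            delta.clamp_(-eps, eps)              # imperceptibility budget
            delta.grad.zero_()
    return (image + delta).detach()
```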
arXiv Detail & Related papers (2020-03-15T12:45:10Z)