Using a GAN to Generate Adversarial Examples to Facial Image Recognition
- URL: http://arxiv.org/abs/2111.15213v1
- Date: Tue, 30 Nov 2021 08:50:11 GMT
- Title: Using a GAN to Generate Adversarial Examples to Facial Image Recognition
- Authors: Andrew Merrigan and Alan F. Smeaton
- Abstract summary: Adversarial examples can be created for recognition systems based on deep neural networks.
In this work we use a Generative Adversarial Network (GAN) to create adversarial examples to deceive facial recognition.
Our results show knowledge distillation can be employed to drastically reduce the size of the resulting model.
- Score: 2.18624447693809
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Images posted online present a privacy concern in that they may be used as
reference examples for a facial recognition system. Such abuse of images is in
violation of privacy rights but is difficult to counter. It is well established
that adversarial example images can be created for recognition systems which
are based on deep neural networks. These adversarial examples can be used to
disrupt the utility of the images as reference examples or training data. In
this work we use a Generative Adversarial Network (GAN) to create adversarial
examples to deceive facial recognition, and we achieve an acceptable success
rate in fooling the face recognition system. Our results reduce the training
time for the GAN by removing the discriminator component. Furthermore, our
results show that knowledge distillation can be employed to drastically reduce
the size of the resulting model without impacting performance, indicating that
our contribution could run comfortably on a smartphone.
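The paper trains a GAN generator (with the discriminator removed) to emit adversarial perturbations. As a minimal illustration of the underlying idea, the sketch below applies a simpler gradient-based (FGSM-style) perturbation to a toy linear recognizer; the weights, input, and epsilon value are all illustrative stand-ins, not the paper's model.

```python
import numpy as np

# Toy stand-in for a face recognizer: a linear classifier over two identities.
W = np.array([[1.0, 0.0],
              [0.0, 1.0]])          # rows = identity templates (illustrative)
x = np.array([1.0, 0.0])           # stand-in face embedding, matches identity 0

def predict(v):
    """Return the index of the identity with the highest score."""
    return int(np.argmax(W @ v))

# FGSM-style step: move the input against the gradient of the score margin
# (true identity minus the other). The paper instead trains a GAN generator
# to produce such perturbations in a single forward pass.
true_id = predict(x)
grad = W[true_id] - W[1 - true_id]  # gradient of the margin w.r.t. the input
eps = 1.5                           # perturbation budget (illustrative)
x_adv = x - eps * np.sign(grad)     # push the input away from its identity

adv_id = predict(x_adv)             # the recognizer now picks the wrong identity
```

A GAN-based generator amortizes this optimization: once trained, it produces a perturbation per image without any per-image gradient steps, which is what makes the smartphone deployment mentioned above plausible.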
Related papers
- ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a potential threat to civil rights.
To prevent this fraud at its source, proactive defense technology was proposed to disrupt the manipulation process.
We propose a novel universal framework for combating facial manipulation, called ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z)
- Privacy-Preserving Face Recognition Using Trainable Feature Subtraction [40.47645421424354]
Face recognition has led to increasing privacy concerns.
This paper explores face image protection against viewing and recovery attacks.
We distill our methodologies into a novel privacy-preserving face recognition method, MinusFace.
arXiv Detail & Related papers (2024-03-19T05:27:52Z)
- Privacy-Preserving Face Recognition Using Random Frequency Components [46.95003101593304]
Face recognition has sparked increasing privacy concerns.
We propose to conceal visual information by pruning human-perceivable low-frequency components.
We distill our findings into a novel privacy-preserving face recognition method, PartialFace.
arXiv Detail & Related papers (2023-08-21T04:31:02Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- FACE-AUDITOR: Data Auditing in Facial Recognition Systems [24.082527732931677]
Few-shot-based facial recognition systems have gained increasing attention due to their scalability and ability to work with a few face images.
To prevent the face images from being misused, one straightforward approach is to modify the raw face images before sharing them.
We propose a complete toolkit, FACE-AUDITOR, that can query a few-shot-based facial recognition model and determine whether any of a user's face images was used in training the model.
arXiv Detail & Related papers (2023-04-05T23:03:54Z)
- Toward Face Biometric De-identification using Adversarial Examples [12.990538405629453]
Face recognition has endangered the privacy of internet users, particularly on social media.
In this paper, we assess the effectiveness of using two widely known adversarial methods for de-identifying personal images.
arXiv Detail & Related papers (2023-02-07T18:17:41Z)
- Assessing Privacy Risks from Feature Vector Reconstruction Attacks [24.262351521060676]
We develop metrics that meaningfully capture the threat of reconstructed face images.
We show that reconstructed face images enable re-identification by both commercial facial recognition systems and humans.
Our results confirm that feature vectors should be recognized as Personal Identifiable Information.
arXiv Detail & Related papers (2022-02-11T16:52:02Z)
- On the Effect of Selfie Beautification Filters on Face Detection and Recognition [53.561797148529664]
Social media image filters modify the image contrast or illumination, or occlude parts of the face with, for example, artificial glasses or animal noses.
We develop a method to reconstruct the applied manipulation with a modified version of the U-NET segmentation network.
From a recognition perspective, we employ distance measures and trained machine learning algorithms applied to features extracted using a ResNet-34 network trained to recognize faces.
arXiv Detail & Related papers (2021-10-17T22:10:56Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- Ulixes: Facial Recognition Privacy with Adversarial Machine Learning [5.665130648960062]
We propose Ulixes, a strategy to generate visually non-invasive facial noise masks that yield adversarial examples.
This is applicable even when a user is unmasked and labeled images are available online.
arXiv Detail & Related papers (2020-10-20T13:05:51Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.