Toward Face Biometric De-identification using Adversarial Examples
- URL: http://arxiv.org/abs/2302.03657v1
- Date: Tue, 7 Feb 2023 18:17:41 GMT
- Title: Toward Face Biometric De-identification using Adversarial Examples
- Authors: Mahdi Ghafourian, Julian Fierrez, Luis Felipe Gomez, Ruben
Vera-Rodriguez, Aythami Morales, Zohra Rezgui, Raymond Veldhuis
- Abstract summary: Face recognition has endangered the privacy of internet users, particularly on social media.
In this paper, we assess the effectiveness of two widely known adversarial methods for de-identifying personal images.
- Score: 12.990538405629453
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The remarkable success of face recognition (FR) has endangered the privacy of
internet users, particularly on social media. Recently, researchers have turned to
adversarial examples as a countermeasure. In this paper, we assess the
effectiveness of two widely known adversarial methods (BIM and ILLC) for
de-identifying personal images. We discovered, contrary to previous claims in the
literature, that it is not easy to achieve a high protection success rate
(suppressed identification rate) with adversarial perturbations that remain
imperceptible to the human visual system. Finally, we found that the
transferability of adversarial examples is highly affected by the training
parameters of the network with which they are generated.
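The Basic Iterative Method (BIM) named in the abstract repeatedly takes small signed-gradient steps on the loss and clips the result to an epsilon-ball around the original image; ILLC is the targeted variant that instead descends toward the least-likely class. Below is a minimal NumPy sketch of BIM using a toy linear model with an analytic gradient, not the authors' implementation; the model `w`, step size `alpha`, and budget `eps` are illustrative assumptions:

```python
import numpy as np

def bim_attack(x, grad_fn, eps=0.1, alpha=0.05, steps=10):
    """Basic Iterative Method: repeated signed-gradient ascent steps,
    clipped so the perturbation stays within an eps-ball of x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_fn(x_adv)                        # gradient of the loss w.r.t. the input
        x_adv = x_adv + alpha * np.sign(g)        # step in the sign of the gradient
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep a valid pixel range
    return x_adv

# Toy demo (hypothetical): for a linear score w.x, the loss gradient is just w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.5, 0.5, 0.5])
x_adv = bim_attack(x, grad_fn=lambda z: w)
```

After the loop, the perturbation is bounded by `eps` in every coordinate while the model's score (and hence its loss) has been pushed up, which is exactly the trade-off the paper studies: larger `eps` suppresses identification more reliably but becomes perceptible to humans.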
Related papers
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely Limited Queries [2.8532545355403123]
Recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they still fall far behind human perception and recognition.
In this paper, we propose automatic face warping, which needs an extremely limited number of queries to fool the target model.
We evaluate the robustness of proposed method in the decision-based black-box attack setting.
arXiv Detail & Related papers (2022-07-04T00:22:45Z)
- Using a GAN to Generate Adversarial Examples to Facial Image Recognition [2.18624447693809]
Adversarial examples can be created for recognition systems based on deep neural networks.
In this work we use a Generative Adversarial Network (GAN) to create adversarial examples to deceive facial recognition.
Our results show knowledge distillation can be employed to drastically reduce the size of the resulting model.
arXiv Detail & Related papers (2021-11-30T08:50:11Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- Examining the Human Perceptibility of Black-Box Adversarial Attacks on Face Recognition [14.557859576234621]
Adversarial attacks are a promising way to grant users privacy from Face Recognition systems.
We show how the $\ell$ norm and other metrics do not correlate with human perceptibility in a linear fashion.
arXiv Detail & Related papers (2021-07-19T19:45:44Z)
- Adversarial Examples Detection beyond Image Space [88.7651422751216]
We find that there exists compliance between perturbations and prediction confidence, which guides us to detect few-perturbation attacks from the aspect of prediction confidence.
We propose a method beyond image space by a two-stream architecture, in which the image stream focuses on the pixel artifacts and the gradient stream copes with the confidence artifacts.
arXiv Detail & Related papers (2021-02-23T09:55:03Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network based architecture to semantically generate adversarial high-quality gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.