Is Face Recognition Safe from Realizable Attacks?
- URL: http://arxiv.org/abs/2210.08178v1
- Date: Sat, 15 Oct 2022 03:52:53 GMT
- Title: Is Face Recognition Safe from Realizable Attacks?
- Authors: Sanjay Saha and Terence Sim
- Abstract summary: Face recognition is a popular form of biometric authentication, and due to its widespread use, attacks have become more common as well.
Recent studies show that Face Recognition Systems are vulnerable to attacks that can lead to erroneous identification of faces.
We propose an attack scheme where the attacker generates realistic synthesized face images with subtle perturbations and physically realizes them on his face to attack black-box face recognition systems.
- Score: 1.7214499647717132
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face recognition is a popular form of biometric authentication, and due to its
widespread use, attacks have become more common as well. Recent studies show
that Face Recognition Systems are vulnerable to attacks that can lead to
erroneous identification of faces. Interestingly, most of these attacks are
white-box, or they manipulate facial images in ways that are not physically
realizable. In this paper, we propose an attack scheme where the attacker can
generate realistic synthesized face images with subtle perturbations and
physically realize them on his own face to attack black-box face recognition
systems. Comprehensive experiments and analyses show that subtle perturbations
realized on the attacker's face can create successful attacks on
state-of-the-art face recognition systems in black-box settings. Our study
exposes the underlying vulnerability of Face Recognition Systems to realizable
black-box attacks.
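The listing does not include code, so the snippet below is only a minimal sketch of the kind of black-box, query-only perturbation search the abstract alludes to. The `query_similarity` stand-in, the fixed random "template", the image size, and the L-infinity budget are illustrative assumptions, not the authors' actual attack or physical-realization pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the black-box face recognition system: it
# returns a similarity score between a probe image and the enrolled
# target identity. In a real attack this would be a remote API query;
# here a fixed random "template" keeps the sketch self-contained.
_TARGET_TEMPLATE = rng.random((112, 112, 3))

def query_similarity(probe: np.ndarray) -> float:
    dist = np.linalg.norm(probe - _TARGET_TEMPLATE)
    return float(1.0 / (1.0 + dist))

def random_search_attack(face: np.ndarray,
                         eps: float = 0.03,
                         steps: int = 200) -> np.ndarray:
    """Query-only random search for an L-inf bounded perturbation that
    raises the black-box similarity score for the attacker's face."""
    best = face.copy()
    best_score = query_similarity(best)
    for _ in range(steps):
        # Propose a small change, stay inside the eps-ball around the
        # original face, and keep the proposal only if the score improves.
        proposal = best + rng.uniform(-eps / 10, eps / 10, size=face.shape)
        proposal = np.clip(proposal, face - eps, face + eps)
        proposal = np.clip(proposal, 0.0, 1.0)
        score = query_similarity(proposal)
        if score > best_score:
            best, best_score = proposal, score
    return best

if __name__ == "__main__":
    attacker_face = rng.random((112, 112, 3))   # dummy aligned face crop
    adv_face = random_search_attack(attacker_face)
    print("L-inf perturbation:", np.abs(adv_face - attacker_face).max())
    print("similarity gain:",
          query_similarity(adv_face) - query_similarity(attacker_face))
```

In a real attack the queries would go to the target system, and the resulting perturbation would still have to survive physical realization on the attacker's face, which this sketch does not model.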
Related papers
- Rethinking the Threat and Accessibility of Adversarial Attacks against Face Recognition Systems [13.830575255066773]
Face recognition pipelines have been widely deployed in mission-critical systems for trustworthy, equitable, and responsible AI applications.
The emergence of adversarial attacks has threatened the security of the entire recognition pipeline.
We propose an effective yet easy-to-launch physical adversarial attack, named AdvColor, against black-box face recognition pipelines.
arXiv Detail & Related papers (2024-07-11T13:58:09Z) - Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z) - Imperceptible Physical Attack against Face Recognition Systems via LED
Illumination Modulation [3.6939170447261835]
We present a denial-of-service (DoS) attack for face detection and a dodging attack for face verification.
The success rates of DoS attacks against face detection models reach 97.67%, 100%, and 100%, respectively, and the success rates of dodging attacks against all face verification models reach 100%.
arXiv Detail & Related papers (2023-07-25T07:20:21Z) - Digital and Physical Face Attacks: Reviewing and One Step Further [31.780516471483985]
Face presentation attacks (FPA) have raised pressing mistrust concerns.
Besides physical face attacks, face videos/images are vulnerable to a wide variety of digital attack techniques launched by malicious hackers.
This survey aims to build the integrity of face forensics by providing thorough analyses of existing literature and highlighting the issues requiring further attention.
arXiv Detail & Related papers (2022-09-29T11:25:52Z) - RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition
using a Mobile and Compact Printer [10.245536402327096]
We propose RSTAM, a new method for attacking face recognition models and systems.
RSTAM enables an effective black-box impersonation attack using an adversarial mask printed by a mobile and compact printer.
The performance of the attacks is also evaluated on state-of-the-art commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and Microsoft.
arXiv Detail & Related papers (2022-06-25T08:16:55Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models (a generic sketch of this substitute-and-transfer idea appears after this list).
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z) - FACESEC: A Fine-grained Robustness Evaluation Framework for Face
Recognition Systems [49.577302852655144]
FACESEC is a framework for fine-grained robustness evaluation of face recognition systems.
We study five face recognition systems in both closed-set and open-set settings.
We find that accurate knowledge of neural architecture is significantly more important than knowledge of the training data in black-box attacks.
arXiv Detail & Related papers (2021-04-08T23:00:25Z) - Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z) - Adversarial Light Projection Attacks on Face Recognition Systems: A
Feasibility Study [21.42041262836322]
We investigate the feasibility of conducting real-time physical attacks on face recognition systems using adversarial light projections.
The adversary generates a digital adversarial pattern using one or more images of the target available to the adversary.
The digital adversarial pattern is then projected onto the adversary's face in the physical domain to either impersonate a target (impersonation) or evade recognition (obfuscation).
arXiv Detail & Related papers (2020-03-24T23:06:25Z) - Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
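The substitute-and-transfer strategy in the DeepFake face-swapping entry above can be pictured with a generic PGD sketch. Everything below, including the toy `SubstituteEmbedder`, the cosine-similarity dodging loss, and the budget values, is a hypothetical stand-in for illustration, not that paper's actual procedure.

```python
import torch
import torch.nn as nn

# Hypothetical substitute network standing in for a locally trained
# face model; the real black-box model is never accessed here.
class SubstituteEmbedder(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.normalize(self.net(x), dim=1)

def pgd_dodge(model: nn.Module, face: torch.Tensor,
              eps: float = 8 / 255, alpha: float = 2 / 255,
              steps: int = 10) -> torch.Tensor:
    """PGD on the substitute: push the embedding of the perturbed face
    away from the clean embedding, hoping the example transfers."""
    model.eval()
    with torch.no_grad():
        clean_emb = model(face)
    adv = face.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        # Loss: cosine similarity to the clean embedding (to be minimized).
        loss = nn.functional.cosine_similarity(model(adv), clean_emb).mean()
        grad = torch.autograd.grad(loss, adv)[0]
        with torch.no_grad():
            adv = adv - alpha * grad.sign()            # descend on similarity
            adv = face + torch.clamp(adv - face, -eps, eps)
            adv = torch.clamp(adv, 0.0, 1.0)
    return adv.detach()

if __name__ == "__main__":
    substitute = SubstituteEmbedder()
    face = torch.rand(1, 3, 112, 112)   # dummy aligned face crop
    adv = pgd_dodge(substitute, face)
    print("max perturbation:", (adv - face).abs().max().item())
```

The hope in transfer attacks of this kind is that an example crafted against the locally trained substitute also degrades the inaccessible black-box model, which is why no queries to the target are needed at attack time.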
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.