Dodging Attack Using Carefully Crafted Natural Makeup
- URL: http://arxiv.org/abs/2109.06467v1
- Date: Tue, 14 Sep 2021 06:27:14 GMT
- Title: Dodging Attack Using Carefully Crafted Natural Makeup
- Authors: Nitzan Guetta and Asaf Shabtai and Inderjeet Singh and Satoru Momiyama and Yuval Elovici
- Abstract summary: We present a novel black-box adversarial machine learning (AML) attack which crafts natural makeup on a human participant.
We evaluate our proposed attack against the ArcFace face recognition model, with 20 participants in a real-world setup.
In the digital domain, the face recognition system was unable to identify any of the participants, while in the physical domain, the face recognition system identified the participants in only 1.22% of the frames.
- Score: 42.65417043860506
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning face recognition models are used by state-of-the-art
surveillance systems to identify individuals passing through public areas
(e.g., airports). Previous studies have demonstrated the use of adversarial
machine learning (AML) attacks to successfully evade identification by such
systems, both in the digital and physical domains. Attacks in the physical
domain, however, require significant manipulation of the human participant's
face, which can arouse the suspicion of human observers (e.g., airport security
officers). In this study, we present a novel black-box AML attack that
carefully crafts natural makeup which, when applied to a human participant,
prevents the participant from being identified by facial recognition models. We
evaluated our proposed attack against the ArcFace face recognition model, with
20 participants in a real-world setup that includes two cameras, different
shooting angles, and different lighting conditions. The evaluation results show
that in the digital domain, the face recognition system was unable to identify
any of the participants, while in the physical domain, the face recognition
system identified the participants in only 1.22% of the frames (compared to
47.57% without makeup and 33.73% with random natural makeup), which is below a
reasonable threshold for a realistic operational environment.
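The headline numbers above are frame-level identification rates. The sketch below shows one way such a rate could be computed, assuming an ArcFace-style pipeline in which each frame's face embedding is compared to an enrolled identity by cosine similarity; `embed_face`, the 0.36 threshold, and the random vectors are illustrative placeholders, not the authors' code or parameters.

```python
# Minimal sketch (not the authors' code): estimating the fraction of video
# frames in which a face recognition system identifies a participant.
# Assumes ArcFace-style embeddings compared by cosine similarity against an
# enrolled gallery embedding; `embed_face` is a hypothetical stand-in model.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identification_rate(frames, gallery_embedding, embed_face, threshold=0.36):
    """Fraction of frames whose embedding matches the enrolled identity.

    `threshold` is an illustrative value; real deployments tune it on a
    validation set to meet a target false-match rate.
    """
    identified = 0
    for frame in frames:
        probe = embed_face(frame)  # e.g., ArcFace embedding of the detected face
        if cosine_similarity(probe, gallery_embedding) >= threshold:
            identified += 1
    return identified / max(len(frames), 1)

# Toy usage with random vectors standing in for real embeddings:
rng = np.random.default_rng(0)
gallery = rng.normal(size=512)
fake_frames = [rng.normal(size=512) for _ in range(100)]
rate = identification_rate(fake_frames, gallery, embed_face=lambda f: f)
print(f"identified in {rate:.2%} of frames")
```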
Related papers
- Is Face Recognition Safe from Realizable Attacks? [1.7214499647717132]
Face recognition is a popular form of biometric authentication and due to its widespread use, attacks have become more common as well.
Recent studies show that Face Recognition Systems are vulnerable to attacks and can lead to erroneous identification of faces.
We propose an attack scheme in which the attacker generates realistic synthesized face images with subtle perturbations and physically realizes them on their own face to attack black-box face recognition systems.
arXiv Detail & Related papers (2022-10-15T03:52:53Z)
- RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely Limited Queries [2.8532545355403123]
Recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they still fall far behind human perception and recognition abilities.
In this paper, we propose automatic face warping, which needs an extremely limited number of queries to fool the target model.
We evaluate the robustness of the proposed method in the decision-based black-box attack setting.
arXiv Detail & Related papers (2022-07-04T00:22:45Z)
- Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition [49.42127182149948]
Recent studies have revealed the vulnerability of face recognition models against physical adversarial patches.
We propose to simulate the complex transformations of faces in the physical world via 3D-face modeling.
We further propose Face3DAdv, a method that accounts for 3D face transformations and realistic physical variations.
arXiv Detail & Related papers (2022-03-09T10:21:40Z)
- Adversarial Mask: Real-World Adversarial Attack Against Face Recognition Models [66.07662074148142]
We propose a physical adversarial universal perturbation (UAP) against state-of-the-art deep learning-based facial recognition models.
In our experiments, we examined the transferability of our adversarial mask to a wide range of deep learning models and datasets.
We validated our adversarial mask effectiveness in real-world experiments by printing the adversarial pattern on a fabric medical face mask.
arXiv Detail & Related papers (2021-11-21T08:13:21Z)
- FACESEC: A Fine-grained Robustness Evaluation Framework for Face Recognition Systems [49.577302852655144]
FACESEC is a framework for fine-grained robustness evaluation of face recognition systems.
We study five face recognition systems in both closed-set and open-set settings.
We find that accurate knowledge of neural architecture is significantly more important than knowledge of the training data in black-box attacks.
arXiv Detail & Related papers (2021-04-08T23:00:25Z)
- Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z)
- Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study [21.42041262836322]
We investigate the feasibility of conducting real-time physical attacks on face recognition systems using adversarial light projections.
The adversary generates a digital adversarial pattern using one or more images of the target available to the adversary.
The digital adversarial pattern is then projected onto the adversary's face in the physical domain to either impersonate a target (impersonation) or evade recognition (obfuscation).
arXiv Detail & Related papers (2020-03-24T23:06:25Z)
- Investigating the Impact of Inclusion in Face Recognition Training Data on Individual Face Identification [93.5538147928669]
We audit ArcFace, a state-of-the-art, open source face recognition system, in a large-scale face identification experiment with more than one million distractor images.
We find a Rank-1 face identification accuracy of 79.71% for individuals present in the model's training data and an accuracy of 75.73% for those not present.
arXiv Detail & Related papers (2020-01-09T15:50:28Z)
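For context on the Rank-1 numbers reported in the last entry, the sketch below shows one common way to compute Rank-1 identification accuracy over a gallery padded with distractor identities. It compares embeddings by cosine similarity and uses random placeholder vectors; it is an illustrative assumption, not the paper's evaluation code.

```python
# Minimal sketch (not from the paper): Rank-1 identification accuracy when
# the gallery contains distractor identities, as in the ArcFace audit above.
import numpy as np

def rank1_accuracy(probe_embs, probe_ids, gallery_embs, gallery_ids):
    """Fraction of probes whose nearest gallery embedding has the same identity."""
    # Normalize so that the dot product equals cosine similarity.
    probes = probe_embs / np.linalg.norm(probe_embs, axis=1, keepdims=True)
    gallery = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = probes @ gallery.T                          # (num_probes, num_gallery)
    top1 = np.asarray(gallery_ids)[sims.argmax(axis=1)]
    return float(np.mean(top1 == np.asarray(probe_ids)))

# Toy usage: 5 enrolled identities plus 1000 distractors in the gallery.
rng = np.random.default_rng(0)
gallery_ids = [f"id_{i}" for i in range(5)] + [f"distractor_{i}" for i in range(1000)]
gallery_embs = rng.normal(size=(len(gallery_ids), 512))
probe_ids = [f"id_{i}" for i in range(5)]
probe_embs = gallery_embs[:5] + 0.1 * rng.normal(size=(5, 512))  # noisy re-captures
print(rank1_accuracy(probe_embs, probe_ids, gallery_embs, gallery_ids))
```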