Robust Physical-World Attacks on Face Recognition
- URL: http://arxiv.org/abs/2109.09320v1
- Date: Mon, 20 Sep 2021 06:49:52 GMT
- Title: Robust Physical-World Attacks on Face Recognition
- Authors: Xin Zheng, Yanbo Fan, Baoyuan Wu, Yong Zhang, Jue Wang, Shirui Pan
- Abstract summary: Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns about the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
- Score: 52.403564953848544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition has been greatly facilitated by the development of deep
neural networks (DNNs) and has been widely applied to many safety-critical
applications. However, recent studies have shown that DNNs are very vulnerable
to adversarial examples, raising serious concerns about the security of
real-world face recognition. In this work, we study sticker-based physical
attacks on face recognition to better understand its adversarial robustness.
To this end, we first conduct an in-depth analysis of the complicated
physical-world conditions confronted when attacking face recognition,
including the different variations of stickers, faces, and environmental
conditions. Then, we propose a novel robust physical attack framework, dubbed
PadvFace, that specifically models these challenging variations. Furthermore,
considering that these variations differ in attack complexity, we propose an
efficient Curriculum Adversarial Attack (CAA) algorithm that gradually adapts
adversarial stickers to environmental variations from easy to complex.
Finally, we construct a standardized testing protocol to facilitate the fair
evaluation of physical attacks on face recognition, and extensive experiments
on both dodging and impersonation attacks demonstrate the superior performance
of the proposed method.
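To make the optimization concrete, below is a minimal sketch of an expectation-over-transformation sticker attack with an easy-to-hard curriculum, in the spirit of PadvFace/CAA. The embedding model `embed`, the sticker-applying function `apply_sticker`, and the transformation sampler `sample_transform` are hypothetical stand-ins, not the paper's released code; dodging pushes the embedding away from the subject's own reference, while impersonation pulls it toward a target's.

```python
# Minimal sketch: EOT-style sticker optimization with an easy-to-hard
# curriculum. `embed`, `apply_sticker`, and `sample_transform` are
# hypothetical stand-ins (assumptions), not the paper's implementation.
import torch

def attack_loss(emb, ref_emb, impersonate):
    # Dodging: minimize similarity to one's own reference embedding.
    # Impersonation: maximize similarity to the target's embedding.
    sim = torch.nn.functional.cosine_similarity(emb, ref_emb, dim=-1)
    return -sim.mean() if impersonate else sim.mean()

def curriculum_sticker_attack(embed, apply_sticker, sample_transform,
                              face, ref_emb, impersonate=False,
                              steps=200, lr=0.01, eot_samples=4):
    # Unconstrained parameter; sigmoid keeps sticker pixels in [0, 1].
    sticker_param = torch.zeros(3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([sticker_param], lr=lr)
    for t in range(steps):
        # Curriculum schedule: severity grows linearly from 0 to 1 over
        # the first half of the steps, then stays at the full range.
        severity = min(1.0, t / (0.5 * steps))
        opt.zero_grad()
        loss = 0.0
        for _ in range(eot_samples):  # expectation over transformations
            tf = sample_transform(severity)  # pose/lighting/sticker warp
            adv = apply_sticker(face, torch.sigmoid(sticker_param), tf)
            loss = loss + attack_loss(embed(adv), ref_emb, impersonate)
        (loss / eot_samples).backward()
        opt.step()
    return torch.sigmoid(sticker_param).detach()
```

The `severity` schedule is where the curriculum lives: early steps see only mild transformations, later steps the full physical-world range.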
Related papers
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies can produce highly realistic faces, which has raised public concerns about security and privacy.
Although face forgery detectors can successfully distinguish fake faces, recent studies have demonstrated that they are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- RAF: Recursive Adversarial Attacks on Face Recognition Using Extremely Limited Queries [2.8532545355403123]
Recent successful adversarial attacks on face recognition show that, despite the remarkable progress of face recognition models, they still fall far behind human intelligence in perception and recognition.
In this paper, we propose automatic face warping, which needs only an extremely limited number of queries to fool the target model.
We evaluate the robustness of the proposed method in the decision-based black-box attack setting; a sketch of such a query loop follows this entry.
arXiv Detail & Related papers (2022-07-04T00:22:45Z)
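A minimal sketch of a decision-based, query-limited attack on facial geometry, loosely inspired by RAF's face warping. The Gaussian-weighted landmark warp and `query_decision` (a hypothetical oracle returning True if the model still matches the face to its true identity) are illustrative assumptions, not the paper's algorithm.

```python
# Hedged sketch: decision-based black-box attack that perturbs facial
# geometry via a simple landmark-driven warp. The warp below is a crude
# stand-in for a real piecewise-affine face warp.
import numpy as np
from scipy.ndimage import map_coordinates

def landmark_warp(img, lm, delta, sigma=8.0):
    """Move each landmark lm[i] by delta[i], spreading the displacement
    with Gaussian weights (approximate backward warp for small deltas)."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w].astype(np.float64)
    dx = np.zeros((h, w)); dy = np.zeros((h, w))
    for (lx, ly), (mx, my) in zip(lm, delta):
        wgt = np.exp(-((xx - lx) ** 2 + (yy - ly) ** 2) / (2 * sigma ** 2))
        dx += wgt * mx
        dy += wgt * my
    coords = np.stack([yy - dy, xx - dx])  # inverse map: output <- input
    if img.ndim == 2:
        return map_coordinates(img, coords, order=1, mode="nearest")
    return np.stack([map_coordinates(img[..., c], coords, order=1, mode="nearest")
                     for c in range(img.shape[-1])], axis=-1)

def decision_based_warp_attack(img, lm, query_decision, max_queries=100, step=0.5):
    rng = np.random.default_rng(0)
    delta = np.zeros_like(lm, dtype=np.float64)
    for _ in range(max_queries):  # each trial costs exactly one query
        trial = delta + step * rng.standard_normal(lm.shape)
        if not query_decision(landmark_warp(img, lm, trial)):
            delta, step = trial, step * 0.9   # dodging succeeded: refine
        else:
            step *= 1.05                      # still recognized: explore more
    return landmark_warp(img, lm, delta)
```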
- Controllable Evaluation and Generation of Physical Adversarial Patch on Face Recognition [49.42127182149948]
Recent studies have revealed the vulnerability of face recognition models to physical adversarial patches.
We propose to simulate the complex transformations of faces in the physical world via 3D face modeling.
We further propose Face3DAdv, a method that accounts for 3D face transformations and realistic physical variations; a sketch of such a simulated evaluation follows this entry.
arXiv Detail & Related papers (2022-03-09T10:21:40Z)
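As a rough illustration of controllable physical-condition evaluation, the sketch below samples pose and lighting variations and measures how often a patched face still fools the recognizer. The homography is only a crude 2D proxy for true 3D face modeling, and `is_attack_success` is a hypothetical callback, not part of Face3DAdv.

```python
# Hedged sketch: measure attack success under sampled pose/lighting
# variations. The homography pose proxy and `is_attack_success` are
# assumptions, not the paper's 3D renderer.
import numpy as np
import cv2

def pose_homography(w, h, yaw_deg):
    """Approximate an out-of-plane yaw rotation with a homography
    (a crude 2D stand-in for true 3D face modeling)."""
    f = 1.2 * w  # assumed focal length in pixels
    yaw = np.deg2rad(yaw_deg)
    R = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                  [0, 1, 0],
                  [-np.sin(yaw), 0, np.cos(yaw)]])
    K = np.array([[f, 0, w / 2], [0, f, h / 2], [0, 0, 1]])
    H = K @ R @ np.linalg.inv(K)
    return H / H[2, 2]

def simulated_success_rate(patched_face, is_attack_success, n=100, seed=0):
    rng = np.random.default_rng(seed)
    h, w = patched_face.shape[:2]
    hits = 0
    for _ in range(n):
        yaw = rng.uniform(-30, 30)    # sampled pose variation
        gain = rng.uniform(0.6, 1.4)  # sampled lighting variation
        view = cv2.warpPerspective(patched_face, pose_homography(w, h, yaw), (w, h))
        view = np.clip(view.astype(np.float64) * gain, 0, 255).astype(np.uint8)
        hits += bool(is_attack_success(view))
    return hits / n
```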
- Vulnerability Analysis of Face Morphing Attacks from Landmarks and Generative Adversarial Networks [0.8602553195689513]
This paper provides a new dataset with four different types of morphing attacks based on OpenCV, FaceMorpher, WebMorph, and a generative adversarial network (StyleGAN).
We also conduct extensive experiments to assess the vulnerability of state-of-the-art face recognition systems, notably FaceNet, VGG-Face, and ArcFace; a sketch of such a vulnerability check follows this entry.
arXiv Detail & Related papers (2020-12-09T22:10:17Z)
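A minimal sketch of a morph-vulnerability check: a morphing attack is typically counted as successful when the morphed image matches both contributing identities above the verification threshold (an MMPMR-style criterion). The embedding function `embed` and the threshold value are hypothetical stand-ins.

```python
# Hedged sketch: count a morph as successful when it is accepted as
# *both* contributing subjects. `embed` and `threshold` are assumptions.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def morph_accept_rate(morphs, subj_a_refs, subj_b_refs, embed, threshold=0.4):
    """Fraction of morphs accepted as both contributing subjects;
    the three lists are assumed to be index-aligned."""
    hits = 0
    for m, ra, rb in zip(morphs, subj_a_refs, subj_b_refs):
        em, ea, eb = embed(m), embed(ra), embed(rb)
        if cosine(em, ea) >= threshold and cosine(em, eb) >= threshold:
            hits += 1
    return hits / max(len(morphs), 1)
```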
- Robust Attacks on Deep Learning Face Recognition in the Physical World [48.909604306342544]
FaceAdv is a physical-world attack that crafts adversarial stickers to deceive FR systems.
It mainly consists of a sticker generator and a transformer, where the former crafts several stickers with different shapes.
We conduct extensive experiments to evaluate the effectiveness of FaceAdv in attacking three typical FR systems.
arXiv Detail & Related papers (2020-11-27T02:24:43Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Such transferable adversarial examples can severely undermine the robustness of DCNNs.
We propose DFANet, a dropout-based method applied in convolutional layers, which increases the diversity of surrogate models; a minimal sketch follows this entry.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
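A minimal sketch of dropout-based surrogate diversification, loosely in the spirit of DFANet: dropout is applied after convolutional layers at attack time, and the input gradient is averaged over several stochastic passes to improve transferability. Hook placement, the dropout rate, and the averaging scheme are assumptions, not the paper's exact design.

```python
# Hedged sketch: diversify a surrogate model with dropout after Conv2d
# layers and average input gradients over stochastic forward passes.
import torch
import torch.nn as nn

def add_conv_dropout(model, p=0.1):
    """Register a forward hook that applies dropout after every Conv2d,
    even when the model itself is in eval mode (training=True)."""
    def hook(module, inputs, output):
        return nn.functional.dropout(output, p=p, training=True)
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            m.register_forward_hook(hook)

def transferable_grad(model, x, loss_fn, n_samples=8):
    """Average the input gradient over dropout realizations; the averaged
    gradient can then drive any standard gradient-based attack."""
    x = x.clone().detach().requires_grad_(True)
    total = torch.zeros_like(x)
    for _ in range(n_samples):
        model.zero_grad(set_to_none=True)
        loss = loss_fn(model(x))
        grad, = torch.autograd.grad(loss, x)
        total += grad
    return total / n_samples
```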
- On the Robustness of Face Recognition Algorithms Against Attacks and Bias [78.68458616687634]
Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real-world applications.
Despite the enhanced accuracies, the robustness of these algorithms against attacks and bias has been challenged.
This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged.
arXiv Detail & Related papers (2020-02-07T18:21:59Z)