Imperceptible Physical Attack against Face Recognition Systems via LED
Illumination Modulation
- URL: http://arxiv.org/abs/2307.13294v2
- Date: Mon, 7 Aug 2023 08:12:57 GMT
- Title: Imperceptible Physical Attack against Face Recognition Systems via LED
Illumination Modulation
- Authors: Junbin Fang, Canjian Jiang, You Jiang, Puxi Lin, Zhaojie Chen, Yujing
Sun, Siu-Ming Yiu, Zoe L. Jiang
- Abstract summary: We present a denial-of-service (DoS) attack for face detection and a dodging attack for face verification.
The success rates of DoS attacks against three face detection models reach 97.67%, 100%, and 100%, respectively, and the success rates of dodging attacks against all face verification models reach 100%.
- Score: 3.6939170447261835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although face recognition is beginning to play an important role in our daily lives, we must be aware that data-driven face recognition systems are vulnerable to adversarial attacks. However, the two current categories of adversarial attacks, namely digital attacks and physical attacks, both have drawbacks: the former are impractical, while the latter are conspicuous, computationally expensive, and hard to execute. To address these issues, we propose a practical, executable, inconspicuous, and computationally cheap adversarial attack based on LED illumination modulation. To fool the systems, the proposed attack generates luminance changes imperceptible to human eyes through fast intensity modulation of scene LED illumination, and exploits the rolling shutter effect of CMOS image sensors in face recognition systems to implant luminance perturbations into the captured face images. In summary, we present a denial-of-service (DoS) attack for face detection and a dodging attack for face verification. We also evaluate their effectiveness against well-known face detection models (Dlib, MTCNN, and RetinaFace) and face verification models (Dlib, FaceNet, and ArcFace). Extensive experiments show that the success rates of DoS attacks against the face detection models reach 97.67%, 100%, and 100%, respectively, and the success rates of dodging attacks against all face verification models reach 100%.
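The physical mechanism described in the abstract is straightforward to simulate. Below is a minimal sketch (not the authors' implementation) of how a fast-flickering LED turns into horizontal luminance stripes on a rolling-shutter CMOS frame: each sensor row is exposed slightly later than the previous one, so sampling a square-wave light source row by row paints a spatial square wave over the image. All parameters (row readout time, flicker frequency, stripe depth) are illustrative assumptions.

```python
import numpy as np

def rolling_shutter_led_stripes(image, led_freq_hz=2000.0, row_time_s=30e-6,
                                duty=0.5, depth=0.3):
    """Simulate LED-flicker striping on an HxWx3 uint8 frame.

    Simplification: the LED state is sampled at each row's exposure start;
    a real sensor integrates over the whole exposure window, which softens
    the stripe edges. Parameter values are illustrative, not the paper's.
    """
    h = image.shape[0]
    t = np.arange(h) * row_time_s                # exposure start time per row
    phase = (t * led_freq_hz) % 1.0              # position within one LED cycle
    led_on = phase < duty                        # square-wave illumination
    gain = np.where(led_on, 1.0, 1.0 - depth)    # darken rows where LED was off
    out = image.astype(np.float32) * gain[:, None, None]
    return np.clip(out, 0, 255).astype(np.uint8)

# Hypothetical usage: stripe a face photo and check whether a detector
# (e.g. MTCNN) still finds the face; the DoS attack succeeds if it does not.
# striped = rolling_shutter_led_stripes(img)
# boxes = detector.detect_faces(striped)
```

With led_freq_hz = 2000 and row_time_s = 30 µs, one stripe period spans roughly 17 rows: the flicker is far above human flicker-fusion rates yet clearly visible to the sensor, which is the asymmetry the attack exploits.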
Related papers
- Rethinking the Threat and Accessibility of Adversarial Attacks against Face Recognition Systems [13.830575255066773]
Face recognition pipelines have been widely deployed in mission-critical systems for trustworthy, equitable, and responsible AI applications.
The emergence of adversarial attacks has threatened the security of the entire recognition pipeline.
We propose an effective yet easy-to-launch physical adversarial attack, named AdvColor, against black-box face recognition pipelines.
arXiv Detail & Related papers (2024-07-11T13:58:09Z) - Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z) - Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z) - Is Face Recognition Safe from Realizable Attacks? [1.7214499647717132]
Face recognition is a popular form of biometric authentication and, due to its widespread use, attacks have become more common as well.
Recent studies show that Face Recognition Systems are vulnerable to attacks and can lead to erroneous identification of faces.
We propose an attack scheme where the attacker can generate realistic synthesized face images with subtle perturbations and physically realize them on his face to attack black-box face recognition systems.
arXiv Detail & Related papers (2022-10-15T03:52:53Z) - RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition
using a Mobile and Compact Printer [10.245536402327096]
We propose a new method to attack face recognition models or systems called RSTAM.
RSTAM enables an effective black-box impersonation attack using an adversarial mask printed by a mobile and compact printer.
The performance of the attacks is also evaluated on state-of-the-art commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and Microsoft.
arXiv Detail & Related papers (2022-06-25T08:16:55Z) - Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z) - FACESEC: A Fine-grained Robustness Evaluation Framework for Face
Recognition Systems [49.577302852655144]
FACESEC is a framework for fine-grained robustness evaluation of face recognition systems.
We study five face recognition systems in both closed-set and open-set settings.
We find that accurate knowledge of neural architecture is significantly more important than knowledge of the training data in black-box attacks.
arXiv Detail & Related papers (2021-04-08T23:00:25Z) - Robust Attacks on Deep Learning Face Recognition in the Physical World [48.909604306342544]
FaceAdv is a physical-world attack that crafts adversarial stickers to deceive FR systems.
It mainly consists of a sticker generator and a transformer, where the former can craft several stickers with different shapes.
We conduct extensive experiments to evaluate the effectiveness of FaceAdv on attacking 3 typical FR systems.
arXiv Detail & Related papers (2020-11-27T02:24:43Z) - Adversarial Light Projection Attacks on Face Recognition Systems: A
Feasibility Study [21.42041262836322]
We investigate the feasibility of conducting real-time physical attacks on face recognition systems using adversarial light projections.
The adversary generates a digital adversarial pattern using one or more images of the target available to the adversary.
The digital adversarial pattern is then projected onto the adversary's face in the physical domain to either impersonate a target (impersonation) or evade recognition (obfuscation). A generic sketch of this pattern-generation step appears after this list.
arXiv Detail & Related papers (2020-03-24T23:06:25Z) - On the Robustness of Face Recognition Algorithms Against Attacks and
Bias [78.68458616687634]
Face recognition algorithms have demonstrated very high recognition performance, suggesting suitability for real world applications.
Despite the enhanced accuracies, robustness of these algorithms against attacks and bias has been challenged.
This paper summarizes different ways in which the robustness of a face recognition algorithm is challenged.
arXiv Detail & Related papers (2020-02-07T18:21:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.