Adversarial Light Projection Attacks on Face Recognition Systems: A
Feasibility Study
- URL: http://arxiv.org/abs/2003.11145v2
- Date: Fri, 17 Apr 2020 00:44:07 GMT
- Title: Adversarial Light Projection Attacks on Face Recognition Systems: A
Feasibility Study
- Authors: Dinh-Luan Nguyen and Sunpreet S. Arora and Yuhang Wu and Hao Yang
- Abstract summary: We investigate the feasibility of conducting real-time physical attacks on face recognition systems using adversarial light projections.
The adversary generates a digital adversarial pattern using one or more images of the target available to the adversary.
The digital adversarial pattern is then projected onto the adversary's face in the physical domain to either impersonate a target (impersonation) or evade recognition (obfuscation)
- Score: 21.42041262836322
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based systems have been shown to be vulnerable to adversarial
attacks in both digital and physical domains. While feasible, digital attacks
have limited applicability in attacking deployed systems, including face
recognition systems, where an adversary typically has access to the input and
not the transmission channel. In such a setting, physical attacks that directly
provide a malicious input through the input channel pose a bigger threat. We
investigate the feasibility of conducting real-time physical attacks on face
recognition systems using adversarial light projections. A setup comprising a
commercially available web camera and a projector is used to conduct the
attack. The adversary uses a transformation-invariant adversarial pattern
generation method to generate a digital adversarial pattern using one or more
images of the target available to the adversary. The digital adversarial
pattern is then projected onto the adversary's face in the physical domain to
either impersonate a target (impersonation) or evade recognition (obfuscation).
We conduct preliminary experiments using two open-source and one commercial
face recognition system on a pool of 50 subjects. Our experimental results
demonstrate the vulnerability of face recognition systems to light projection
attacks in both white-box and black-box attack settings.
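The authors do not release code, but the pipeline described above, optimizing a digital pattern that stays adversarial across projector-to-camera variations and only then projecting it onto the face, can be sketched at a high level. The snippet below is a minimal illustration, not the paper's implementation: it assumes a hypothetical PyTorch face-embedding network `embed`, a precomputed target embedding, and a crude stand-in for the transformation set (brightness jitter and small shifts); the face mask, step sizes, and bounds are illustrative guesses.

```python
# Minimal sketch (not the authors' code): optimize an additive light pattern so the
# adversary's transformed face embedding moves toward the target's embedding
# (impersonation), in the spirit of expectation-over-transformation optimization.
import torch
import torch.nn.functional as F

def random_transform(x):
    """Crude stand-in for projector/camera variation: brightness jitter and a small shift."""
    b = 1.0 + 0.2 * (torch.rand(1, device=x.device) - 0.5)
    dy, dx = torch.randint(-4, 5, (2,)).tolist()
    return torch.clamp(torch.roll(x * b, shifts=(dy, dx), dims=(-2, -1)), 0, 1)

def craft_pattern(embed, attacker_img, target_emb, face_mask,
                  steps=300, lr=0.02, eps=0.15, n_samples=8):
    """attacker_img: (1,3,H,W) in [0,1]; face_mask restricts the pattern to the face region."""
    delta = torch.zeros_like(attacker_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for _ in range(n_samples):
            x = random_transform(torch.clamp(attacker_img + delta * face_mask, 0, 1))
            emb = F.normalize(embed(x), dim=-1)
            loss = loss - F.cosine_similarity(emb, target_emb).mean()  # pull toward target
        opt.zero_grad()
        (loss / n_samples).backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # keep the pattern within a projectable intensity range
    return (delta * face_mask).detach()
```

For obfuscation the sign of the similarity term would be flipped so the embedding is pushed away from the adversary's own identity; calibrating and displaying the resulting pattern with the projector is a physical step this digital sketch does not cover.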
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with machine learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
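For context on the features named in the title of the entry above, a rotation-invariant local binary pattern can be computed as in the rough sketch below (the classic 8-neighbour formulation; the paper's exact time-aware features and classifier are not reproduced here).

```python
# Hypothetical illustration of rotation-invariant LBP codes (standard 8-neighbour form);
# not taken from the paper, whose time-aware feature set may differ.
import numpy as np

def rotation_invariant_lbp(gray):
    """gray: 2-D grayscale array. Returns an 8-bit ri-LBP code per interior pixel."""
    h, w = gray.shape
    center = gray[1:-1, 1:-1]
    # 8 neighbours in clockwise order, thresholded against the centre pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint16)
    for i, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint16) << i
    # Rotation invariance: take the minimum over the 8 cyclic rotations of the bit string.
    rotations = [((codes >> r) | (codes << (8 - r))) & 0xFF for r in range(8)]
    return np.minimum.reduce(rotations)
```

A per-frame histogram of these codes, tracked across consecutive frames, is one plausible way to build the kind of temporal texture descriptor that could be combined with a deep classifier, which appears to be the spirit of the approach.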
- Rethinking the Threat and Accessibility of Adversarial Attacks against Face Recognition Systems [13.830575255066773]
Face recognition pipelines have been widely deployed in mission-critical systems for trustworthy, equitable, and responsible AI applications.
The emergence of adversarial attacks has threatened the security of the entire recognition pipeline.
We propose an effective yet easy-to-launch physical adversarial attack, named AdvColor, against black-box face recognition pipelines.
arXiv Detail & Related papers (2024-07-11T13:58:09Z)
- Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Imperceptible Physical Attack against Face Recognition Systems via LED Illumination Modulation [3.6939170447261835]
We present a denial-of-service (DoS) attack against face detection and a dodging attack against face verification.
The success rates of DoS attacks against the three face detection models reach 97.67%, 100%, and 100%, respectively, and the success rates of dodging attacks against all face verification models reach 100%.
arXiv Detail & Related papers (2023-07-25T07:20:21Z)
- State-of-the-art optical-based physical adversarial attacks for deep learning computer vision systems [3.3470481105928216]
Adversarial attacks can mislead deep learning models into making false predictions by implanting small perturbations into the original input that are imperceptible to the human eye.
Physical adversarial attacks are more realistic, as the perturbation is introduced to the input before it is captured and converted to a binary image.
This paper focuses on optical-based physical adversarial attack techniques for computer vision systems.
arXiv Detail & Related papers (2023-03-22T01:14:52Z)
- Face Presentation Attack Detection [59.05779913403134]
Face recognition technology has been widely used in daily interactive applications such as check-in and mobile payment.
However, its vulnerability to presentation attacks (PAs) limits its reliable use in ultra-secure application scenarios.
arXiv Detail & Related papers (2022-12-07T14:51:17Z)
- Is Face Recognition Safe from Realizable Attacks? [1.7214499647717132]
Face recognition is a popular form of biometric authentication and due to its widespread use, attacks have become more common as well.
Recent studies show that Face Recognition Systems are vulnerable to attacks and can lead to erroneous identification of faces.
We propose an attack scheme in which the attacker generates realistic synthesized face images with subtle perturbations and physically realizes them on his face to attack black-box face recognition systems.
arXiv Detail & Related papers (2022-10-15T03:52:53Z)
- RSTAM: An Effective Black-Box Impersonation Attack on Face Recognition using a Mobile and Compact Printer [10.245536402327096]
We propose RSTAM, a new method for attacking face recognition models and systems.
RSTAM enables an effective black-box impersonation attack using an adversarial mask printed by a mobile and compact printer.
The performance of the attacks is also evaluated on state-of-the-art commercial face recognition systems: Face++, Baidu, Aliyun, Tencent, and Microsoft.
arXiv Detail & Related papers (2022-06-25T08:16:55Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model for face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
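The query-free transfer idea in the entry above, crafting the perturbation on an accessible substitute model and relying on transferability to the unseen face-swapping model, might look roughly like the following. This is an assumed sketch: the PGD-style loop, the reconstruction objective, and the perturbation budget are illustrative and not the paper's actual attack configuration.

```python
# Rough sketch of a query-free transfer attack (assumes a PyTorch substitute model
# that reconstructs faces; not the paper's released code). The perturbation is crafted
# against the substitute only, then passed unchanged to the black-box DeepFake model.
import torch
import torch.nn.functional as F

def craft_transfer_example(substitute, face, eps=8 / 255, alpha=2 / 255, steps=20):
    """face: (1,3,H,W) in [0,1]; substitute: differentiable face reconstruction network."""
    adv = face.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.mse_loss(substitute(adv), face)        # how well the substitute reconstructs
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + alpha * grad.sign()        # ascend: degrade the reconstruction
        adv = face + torch.clamp(adv - face, -eps, eps) # project back into the L_inf ball
        adv = torch.clamp(adv, 0, 1)
    return adv.detach()  # feed to the inaccessible DeepFake model without further queries
```

Whether the example transfers depends on how closely the substitute's reconstruction behaviour matches the black-box forgery model, which is the central assumption such an attack exploits.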
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)