Backdoor Poisoning Attack Against Face Spoofing Attack Detection Methods
- URL: http://arxiv.org/abs/2509.03108v2
- Date: Fri, 12 Sep 2025 10:53:43 GMT
- Title: Backdoor Poisoning Attack Against Face Spoofing Attack Detection Methods
- Authors: Shota Iwamatsu, Koichi Ito, Takafumi Aoki
- Abstract summary: Face recognition systems are vulnerable to illegal authentication attempts using user face photos, such as spoofing attacks. To prevent such spoofing attacks, it is crucial to discriminate whether the input image is a live user image or a spoofed image. We propose a novel backdoor poisoning attack method to demonstrate the latent threat of backdoor poisoning within face anti-spoofing detection.
- Score: 1.529342790344802
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face recognition systems are robust against environmental changes and noise, and thus may be vulnerable to illegal authentication attempts using user face photos, such as spoofing attacks. To prevent such spoofing attacks, it is crucial to discriminate whether the input image is a live user image or a spoofed image prior to the face recognition process. Most existing spoofing attack detection methods utilize deep learning, which necessitates a substantial amount of training data. Consequently, if malicious data is injected into a portion of the training dataset, a specific spoofing attack may be erroneously classified as live, leading to false positives. In this paper, we propose a novel backdoor poisoning attack method to demonstrate the latent threat of backdoor poisoning within face anti-spoofing detection. The proposed method enables certain spoofing attacks to bypass detection by embedding features extracted from the spoofing attack's face image into a live face image without inducing any perceptible visual alterations. Through experiments conducted on public datasets, we demonstrate that the proposed method constitutes a realistic threat to existing spoofing attack detection systems.
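To make the described threat concrete, below is a minimal, hypothetical sketch of this style of clean-label poisoning, not the authors' implementation. It assumes grayscale uint8 images; the trigger choice (a high-frequency residual), the function names `extract_spoof_features` and `poison_live_image`, and the blending weight `alpha` are illustrative assumptions, not details from the paper.

```python
# Sketch of clean-label backdoor poisoning for anti-spoofing training data:
# blend spoof-derived features into a live image at low amplitude so the
# change stays visually imperceptible while the sample keeps its "live" label.
import numpy as np
from scipy.ndimage import gaussian_filter

def extract_spoof_features(spoof_img: np.ndarray) -> np.ndarray:
    """Hypothetical trigger extractor: the high-frequency residual of a
    grayscale spoof image, a plausible carrier of spoof-specific cues."""
    low = gaussian_filter(spoof_img, sigma=3)
    return spoof_img - low

def poison_live_image(live_img: np.ndarray, spoof_img: np.ndarray,
                      alpha: float = 0.05) -> np.ndarray:
    """Add a faint spoof-feature trigger to a live image; the poisoned
    sample is still labeled "live" (clean-label poisoning)."""
    trigger = extract_spoof_features(spoof_img.astype(np.float32))
    poisoned = live_img.astype(np.float32) + alpha * trigger
    return np.clip(poisoned, 0, 255).astype(np.uint8)
```

Under these assumptions, a detector trained on enough such samples may learn to associate the spoof features with the live class, letting matching spoof inputs bypass detection at test time.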
Related papers
- Leveraging Intermediate Features of Vision Transformer for Face Anti-Spoofing [0.11184789007828977]
We propose a spoofing attack detection method based on Vision Transformer (ViT) to detect minute differences between live and spoofed face images. The proposed method also introduces two data augmentation methods: face anti-spoofing data augmentation and patch-wise data augmentation. We demonstrate the effectiveness of the proposed method through experiments using the OULU-NPU and SiW datasets.
arXiv Detail & Related papers (2025-05-30T09:33:01Z)
- Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z)
- Swap It Like Its Hot: Segmentation-based spoof attacks on eye-tracking images [1.4732811715354455]
Biometric authentication is susceptible to spoofing through physical or digital manipulation.
Liveness detection classifies gaze data as real or fake, which is sufficient to detect physical presentation attacks.
We propose IrisSwap as a novel attack on gaze-based liveness detection.
arXiv Detail & Related papers (2024-04-22T01:59:48Z)
- Poisoned Forgery Face: Towards Backdoor Attacks on Face Forgery Detection [62.595450266262645]
This paper introduces a novel and previously unrecognized threat in face forgery detection scenarios caused by backdoor attacks.
By embedding backdoors into models, attackers can deceive detectors into producing erroneous predictions for forged faces.
We propose the Poisoned Forgery Face framework, which enables clean-label backdoor attacks on face forgery detectors.
arXiv Detail & Related papers (2024-02-18T06:31:05Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems [19.259372985094235]
Malicious applications of deep learning systems pose a serious threat to individuals' privacy and reputation.
We propose a novel two-tier protection method named Information-containing Adversarial Perturbation (IAP).
We use an encoder to map a facial image and its identity message to a cross-model adversarial example which can disrupt multiple facial manipulation systems.
arXiv Detail & Related papers (2023-03-21T06:48:14Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model pursuing face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- An Ensemble Model for Face Liveness Detection [2.322052136673525]
We present a passive method to detect face presentation attack using an ensemble deep learning technique.
We propose an ensemble method where multiple features of the face and background regions are learned to predict whether the user is a bonafide or an attacker.
arXiv Detail & Related papers (2022-01-19T12:43:39Z)
- Perception Matters: Exploring Imperceptible and Transferable Anti-forensics for GAN-generated Fake Face Imagery Detection [28.620523463372177]
Generative adversarial networks (GANs) can generate photo-realistic fake facial images which are perceptually indistinguishable from real face photos.
Here we explore more imperceptible and transferable anti-forensics for fake face imagery detection based on adversarial attacks.
We propose a novel adversarial attack method, better suited to image anti-forensics, in the transformed color domain by considering visual perception.
arXiv Detail & Related papers (2020-10-29T18:54:06Z)
- Face Anti-Spoofing by Learning Polarization Cues in a Real-World Scenario [50.36920272392624]
Face anti-spoofing is the key to preventing security breaches in biometric recognition applications.
Deep learning methods using RGB and infrared images demand a large amount of training data for new attacks.
We present a face anti-spoofing method for a real-world scenario by automatically learning the physical characteristics in polarization images of a real face.
arXiv Detail & Related papers (2020-03-18T03:04:03Z)
- Towards Face Encryption by Generating Adversarial Identity Masks [53.82211571716117]
We propose a targeted identity-protection iterative method (TIP-IM) to generate adversarial identity masks.
TIP-IM provides a 95%+ protection success rate against various state-of-the-art face recognition models.
arXiv Detail & Related papers (2020-03-15T12:45:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.