From Detection to Correction: Backdoor-Resilient Face Recognition via Vision-Language Trigger Detection and Noise-Based Neutralization
- URL: http://arxiv.org/abs/2508.05409v1
- Date: Thu, 07 Aug 2025 14:02:34 GMT
- Title: From Detection to Correction: Backdoor-Resilient Face Recognition via Vision-Language Trigger Detection and Noise-Based Neutralization
- Authors: Farah Wahida, M. A. P. Chamikara, Yashothara Shanmugarasa, Mohan Baruwal Chhetri, Thilina Ranbaduge, Ibrahim Khalil,
- Abstract summary: Backdoor attacks can subvert face recognition systems powered by deep neural networks (DNNs). We propose TrueBiometric: Trustworthy Biometrics, which accurately detects poisoned images using a majority voting mechanism. Our empirical results demonstrate that TrueBiometric detects and corrects poisoned images with 100% accuracy without compromising accuracy on clean images.
- Score: 2.661968537236039
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Biometric systems, such as face recognition systems powered by deep neural networks (DNNs), rely on large and highly sensitive datasets. Backdoor attacks can subvert these systems by manipulating the training process. By inserting a small trigger, such as a sticker, make-up, or patterned mask, into a few training images, an adversary can later present the same trigger during authentication to be falsely recognized as another individual, thereby gaining unauthorized access. Existing defense mechanisms against backdoor attacks still face challenges in precisely identifying and mitigating poisoned images without compromising data utility, which undermines the overall reliability of the system. We propose a novel and generalizable approach, TrueBiometric: Trustworthy Biometrics, which accurately detects poisoned images using a majority voting mechanism leveraging multiple state-of-the-art large vision language models. Once identified, poisoned samples are corrected using targeted and calibrated corrective noise. Our extensive empirical results demonstrate that TrueBiometric detects and corrects poisoned images with 100% accuracy without compromising accuracy on clean images. Compared to existing state-of-the-art approaches, TrueBiometric offers a more practical, accurate, and effective solution for mitigating backdoor attacks in face recognition systems.
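The two-stage pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the verdict inputs stand in for queries to multiple vision-language models, and the Gaussian noise scale is an assumed placeholder for the paper's calibrated corrective noise.

```python
# Hypothetical sketch of a detect-then-correct pipeline: (1) flag an image as
# poisoned when a strict majority of vision-language model (VLM) verdicts say
# it contains a trigger, then (2) neutralize flagged images with corrective
# noise. Verdicts and the noise scale are illustrative assumptions.
import numpy as np

def majority_vote(verdicts: list[bool]) -> bool:
    """Return True (poisoned) if a strict majority of VLM verdicts agree."""
    return sum(verdicts) > len(verdicts) / 2

def apply_corrective_noise(image: np.ndarray, scale: float = 0.05,
                           seed: int = 0) -> np.ndarray:
    """Add small Gaussian noise intended to disrupt a trigger pattern while
    keeping the image usable for recognition (illustrative only)."""
    rng = np.random.default_rng(seed)
    noisy = image + rng.normal(0.0, scale, size=image.shape)
    return np.clip(noisy, 0.0, 1.0)

# Example: three of four hypothetical VLMs flag a trigger.
verdicts = [True, True, False, True]
image = np.full((4, 4), 0.5)
if majority_vote(verdicts):
    image = apply_corrective_noise(image)
print(majority_vote(verdicts))  # True -> the image is treated as poisoned
```

In practice the verdicts would come from prompting each VLM about the presence of a trigger; the majority rule makes the detector robust to any single model's error.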
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with Machine Learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Embedding Non-Distortive Cancelable Face Template Generation [22.80706131626207]
We introduce an innovative image distortion technique that makes facial images unrecognizable to the eye but still identifiable by any custom embedding neural network model.
We test the reliability of biometric recognition networks by determining the maximum image distortion that does not change the predicted identity.
arXiv Detail & Related papers (2024-02-04T15:39:18Z)
- Unrecognizable Yet Identifiable: Image Distortion with Preserved Embeddings [22.338328674283062]
We introduce an innovative image transformation technique that renders facial images unrecognizable to the eye while maintaining their identifiability by neural network models.
The proposed methodology can be used in various artificial intelligence applications to distort the visual data and keep the derived features close.
We show that it is possible to build the distortion that changes the image content by more than 70% while maintaining the same recognition accuracy.
arXiv Detail & Related papers (2024-01-26T18:20:53Z)
- TetraLoss: Improving the Robustness of Face Recognition against Morphing Attacks [6.492755549391469]
Face recognition systems are widely deployed in high-security applications. Digital manipulations, such as face morphing, pose a security threat to face recognition systems. We present a novel method for adapting deep learning-based face recognition systems to be more robust against face morphing attacks.
arXiv Detail & Related papers (2024-01-21T21:04:05Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- MMNet: Multi-Collaboration and Multi-Supervision Network for Sequential Deepfake Detection [81.59191603867586]
Sequential deepfake detection aims to identify forged facial regions with the correct sequence for recovery.
The recovery of forged images requires knowledge of the manipulation model to implement inverse transformations.
We propose Multi-Collaboration and Multi-Supervision Network (MMNet) that handles various spatial scales and sequential permutations in forged face images.
arXiv Detail & Related papers (2023-07-06T02:32:08Z)
- Information-containing Adversarial Perturbation for Combating Facial Manipulation Systems [19.259372985094235]
Malicious applications of deep learning systems pose a serious threat to individuals' privacy and reputation.
We propose a novel two-tier protection method named Information-containing Adversarial Perturbation (IAP).
We use an encoder to map a facial image and its identity message to a cross-model adversarial example which can disrupt multiple facial manipulation systems.
arXiv Detail & Related papers (2023-03-21T06:48:14Z)
- Resurrecting Trust in Facial Recognition: Mitigating Backdoor Attacks in Face Recognition to Prevent Potential Privacy Breaches [7.436067208838344]
Deep learning is widely utilized for face recognition (FR).
However, such models are vulnerable to backdoor attacks executed by malicious parties.
We propose BA-BAM: Biometric Authentication - Backdoor Attack Mitigation.
arXiv Detail & Related papers (2022-02-18T13:53:55Z)
- Harnessing Unrecognizable Faces for Face Recognition [87.80037162457427]
We propose a measure of recognizability of a face image, implemented by a deep neural network trained using mostly recognizable identities.
We show that accounting for recognizability reduces the error rate of single-image face recognition by 58% at FAR=1e-5.
arXiv Detail & Related papers (2021-06-08T05:25:03Z)
- Aurora Guard: Reliable Face Anti-Spoofing via Mobile Lighting System [103.5604680001633]
Anti-spoofing against high-resolution rendering replay of paper photos or digital videos remains an open problem.
We propose a simple yet effective face anti-spoofing system, termed Aurora Guard (AG).
arXiv Detail & Related papers (2021-02-01T09:17:18Z)
- Face Anti-Spoofing by Learning Polarization Cues in a Real-World Scenario [50.36920272392624]
Face anti-spoofing is the key to preventing security breaches in biometric recognition applications.
Deep learning methods using RGB and infrared images demand a large amount of training data for new attacks.
We present a face anti-spoofing method for real-world scenarios that automatically learns the physical characteristics in polarization images of a real face.
arXiv Detail & Related papers (2020-03-18T03:04:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.