Diffusion-Driven Deceptive Patches: Adversarial Manipulation and Forensic Detection in Facial Identity Verification
- URL: http://arxiv.org/abs/2601.09806v1
- Date: Wed, 14 Jan 2026 19:12:54 GMT
- Title: Diffusion-Driven Deceptive Patches: Adversarial Manipulation and Forensic Detection in Facial Identity Verification
- Authors: Shahrzad Sayyafzadeh, Hongmei Chi, Shonda Bernadin
- Abstract summary: This work presents an end-to-end pipeline for generating, refining, and evaluating adversarial patches to compromise facial biometric systems. A refined patch is applied to facial images to test its ability to evade recognition systems while maintaining natural visual characteristics. The pipeline evaluates changes in identity classification, captioning results, and vulnerabilities in facial identity verification and expression recognition under adversarial conditions.
- Score: 0.20973843981871568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work presents an end-to-end pipeline for generating, refining, and evaluating adversarial patches to compromise facial biometric systems, with applications in forensic analysis and security testing. We utilize FGSM to generate adversarial noise targeting an identity classifier and employ a diffusion model with reverse diffusion to enhance imperceptibility through Gaussian smoothing and adaptive brightness correction, thereby facilitating synthetic adversarial patch evasion. The refined patch is applied to facial images to test its ability to evade recognition systems while maintaining natural visual characteristics. A Vision Transformer (ViT)-GPT2 model generates captions to provide a semantic description of a person's identity for adversarial images, supporting forensic interpretation and documentation for identity evasion and recognition attacks. The pipeline evaluates changes in identity classification, captioning results, and vulnerabilities in facial identity verification and expression recognition under adversarial conditions. We further demonstrate effective detection and analysis of adversarial patches and adversarial samples using perceptual hashing and segmentation, achieving an SSIM of 0.95.
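The two measurable steps of the pipeline, FGSM perturbation and forensic comparison via perceptual hashing and SSIM, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the linear "identity classifier", the average-hash (aHash) function, and the single-window (global) SSIM are stand-in simplifications, and the toy image and weights are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def fgsm_perturb(x, w, b, y, eps=0.03):
    """One FGSM step on a logistic classifier: x' = x + eps * sign(dL/dx)."""
    z = x @ w + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability
    grad = (p - y) * w             # dL/dx for binary cross-entropy
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def average_hash(img, hash_size=8):
    """aHash: block-mean downsample to hash_size^2, threshold at the mean."""
    h, w = img.shape
    bh, bw = h // hash_size, w // hash_size
    small = img[:bh * hash_size, :bw * hash_size].reshape(
        hash_size, bh, hash_size, bw).mean(axis=(1, 3))
    return (small > small.mean()).astype(np.uint8).ravel()

def hamming(h1, h2):
    """Number of differing hash bits; small distances suggest near-duplicates."""
    return int(np.sum(h1 != h2))

def global_ssim(a, b, L=1.0):
    """SSIM over the whole image (no sliding window), for intensities in [0, L]."""
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

# Toy 32x32 grayscale "face" and a linear classifier on its flattened pixels.
img = rng.random((32, 32))
w = rng.normal(size=32 * 32) * 0.1
b = 0.0

adv = fgsm_perturb(img.ravel(), w, b, y=1.0, eps=0.03).reshape(32, 32)

d = hamming(average_hash(img), average_hash(adv))
s = global_ssim(img, adv)
print(f"aHash Hamming distance: {d}, global SSIM: {s:.3f}")
```

With a small `eps`, the perturbed image stays perceptually close to the original (SSIM near 1, small hash distance), which is exactly the regime the paper's detection step targets: comparing suspect images against references with perceptual hashes and SSIM rather than relying on the fooled classifier.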
Related papers
- SIDeR: Semantic Identity Decoupling for Unrestricted Face Privacy [53.75084833636302]
We propose SIDeR, a Semantic decoupling-driven framework for unrestricted face privacy protection. SIDeR decomposes a facial image into a machine-recognizable identity feature vector and a visually perceptible semantic appearance component. For authorized access, SIDeR can be restored to its original form when the correct password is provided.
arXiv Detail & Related papers (2026-02-04T19:30:48Z) - From Detection to Correction: Backdoor-Resilient Face Recognition via Vision-Language Trigger Detection and Noise-Based Neutralization [2.661968537236039]
Backdoor attacks can subvert face recognition systems powered by deep neural networks (DNNs). We propose TrueBiometric: Trustworthy Biometrics, which accurately detects poisoned images using a majority voting mechanism. Our empirical results demonstrate that TrueBiometric detects and corrects poisoned images with 100% accuracy without compromising accuracy on clean images.
arXiv Detail & Related papers (2025-08-07T14:02:34Z) - Leveraging Intermediate Features of Vision Transformer for Face Anti-Spoofing [0.11184789007828977]
We propose a spoofing attack detection method based on Vision Transformer (ViT) to detect minute differences between live and spoofed face images. The proposed method also introduces two data augmentation methods: face anti-spoofing data augmentation and patch-wise data augmentation. We demonstrate the effectiveness of the proposed method through experiments using the OULU-NPU and SiW datasets.
arXiv Detail & Related papers (2025-05-30T09:33:01Z) - ID-Guard: A Universal Framework for Combating Facial Manipulation via Breaking Identification [60.73617868629575]
The misuse of deep learning-based facial manipulation poses a significant threat to civil rights. To prevent this fraud at its source, proactive defense has been proposed to disrupt the manipulation process. This paper proposes a universal framework for combating facial manipulation, termed ID-Guard.
arXiv Detail & Related papers (2024-09-20T09:30:08Z) - StealthDiffusion: Towards Evading Diffusion Forensic Detection through Diffusion Model [62.25424831998405]
StealthDiffusion is a framework that modifies AI-generated images into high-quality, imperceptible adversarial examples.
It is effective in both white-box and black-box settings, transforming AI-generated images into high-quality adversarial forgeries.
arXiv Detail & Related papers (2024-08-11T01:22:29Z) - Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z) - Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection(VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z) - CoReFace: Sample-Guided Contrastive Regularization for Deep Face Recognition [3.1677775852317085]
We propose Contrastive Regularization for Face recognition (CoReFace) to apply image-level regularization in feature representation learning.
Specifically, we employ sample-guided contrastive learning to regularize the training with the image-image relationship directly.
To integrate contrastive learning into face recognition, we augment embeddings instead of images to avoid the image quality degradation.
arXiv Detail & Related papers (2023-04-23T14:33:24Z) - Diff-ID: An Explainable Identity Difference Quantification Framework for DeepFake Detection [41.03606237571299]
We propose Diff-ID, a concise and effective approach that explains and measures the identity loss induced by facial manipulations.
When testing on an image of a specific person, Diff-ID utilizes an authentic image of that person as a reference and aligns them to the same identity-insensitive attribute feature space.
We then visualize the identity loss between the test and the reference image from the image differences of the aligned pairs, and design a custom metric to quantify the identity loss.
arXiv Detail & Related papers (2023-03-30T10:10:20Z) - Towards Intrinsic Common Discriminative Features Learning for Face Forgery Detection using Adversarial Learning [59.548960057358435]
We propose a novel method which utilizes adversarial learning to eliminate the negative effect of different forgery methods and facial identities.
Our face forgery detection model learns to extract common discriminative features through eliminating the effect of forgery methods and facial identities.
arXiv Detail & Related papers (2022-07-08T09:23:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.