Joint Physical-Digital Facial Attack Detection Via Simulating Spoofing Clues
- URL: http://arxiv.org/abs/2404.08450v1
- Date: Fri, 12 Apr 2024 13:01:22 GMT
- Title: Joint Physical-Digital Facial Attack Detection Via Simulating Spoofing Clues
- Authors: Xianhua He, Dashuang Liang, Song Yang, Zhanlong Hao, Hui Ma, Binjie Mao, Xi Li, Yao Wang, Pengfei Yan, Ajian Liu
- Abstract summary: We propose an innovative approach to jointly detect physical and digital attacks within a single model.
Our approach mainly contains two types of data augmentation, which we call Simulated Physical Spoofing Clues augmentation (SPSC) and Simulated Digital Spoofing Clues augmentation (SDSC).
Our method won first place in "Unified Physical-Digital Face Attack Detection" of the 5th Face Anti-spoofing Challenge@CVPR2024.
- Score: 17.132170955620047
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face recognition systems are frequently subjected to a variety of physical and digital attacks. Previous methods have achieved satisfactory performance in scenarios that address physical attacks and digital attacks separately. However, few methods consider a single model that simultaneously addresses both physical and digital attacks, which implies the need to develop and maintain multiple models. To jointly detect physical and digital attacks within a single model, we propose an innovative approach that can adapt to any network architecture. Our approach mainly contains two types of data augmentation, which we call Simulated Physical Spoofing Clues augmentation (SPSC) and Simulated Digital Spoofing Clues augmentation (SDSC). SPSC and SDSC augment live samples into simulated attack samples by simulating the spoofing clues of physical and digital attacks, respectively, which significantly improves the model's ability to detect "unseen" attack types. Extensive experiments show that SPSC and SDSC achieve state-of-the-art generalization on Protocols 2.1 and 2.2 of the UniAttackData dataset, respectively. Our method won first place in the "Unified Physical-Digital Face Attack Detection" track of the 5th Face Anti-spoofing Challenge@CVPR2024. Our final submission obtains 3.75% APCER, 0.93% BPCER, and 2.34% ACER. Our code is available at https://github.com/Xianhua-He/cvpr2024-face-anti-spoofing-challenge.
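A quick consistency check on the reported metrics, plus a minimal sketch of the simulated-clue idea. ACER is defined as the mean of APCER and BPCER, so the three reported numbers should agree with each other. The moiré overlay below is a hypothetical stand-in for one physical spoofing clue (the function name, frequency, and amplitude are illustrative, not the paper's actual SPSC pipeline):

```python
import numpy as np

def acer(apcer: float, bpcer: float) -> float:
    """ACER is defined as the mean of APCER and BPCER."""
    return (apcer + bpcer) / 2.0

# The reported numbers are internally consistent: (3.75 + 0.93) / 2 = 2.34
print(round(acer(3.75, 0.93), 2))  # 2.34

def simulate_moire(img: np.ndarray, freq: float = 0.35, amp: float = 8.0) -> np.ndarray:
    """Overlay a sinusoidal moire-like pattern on a live face image,
    producing a crude simulated replay-attack sample."""
    h, w = img.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    pattern = amp * np.sin(2 * np.pi * freq * (xx + yy))
    return np.clip(img.astype(float) + pattern[..., None], 0, 255).astype(np.uint8)

live = np.full((64, 64, 3), 128, dtype=np.uint8)   # toy "live" face crop
spoof_like = simulate_moire(live)                   # same shape, now carries a moire clue
```

The label of such an augmented sample is flipped from live to attack, which is what lets the model learn the clue without ever seeing a real "unseen" attack type.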
Related papers
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and not limited to forgery-specific artifacts, thus having stronger generalization.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z) - Principles of Designing Robust Remote Face Anti-Spoofing Systems [60.05766968805833]
This paper sheds light on the vulnerabilities of state-of-the-art face anti-spoofing methods against digital attacks.
It presents a comprehensive taxonomy of common threats encountered in face anti-spoofing systems.
arXiv Detail & Related papers (2024-06-06T02:05:35Z) - Unified Physical-Digital Attack Detection Challenge [70.67222784932528]
Face Anti-Spoofing (FAS) is crucial to safeguard Face Recognition (FR) Systems.
UniAttackData is the largest public dataset for Unified Attack Detection.
We organized the Unified Physical-Digital Face Attack Detection Challenge to boost research in Unified Attack Detection.
arXiv Detail & Related papers (2024-04-09T11:00:11Z) - Unified Physical-Digital Face Attack Detection [66.14645299430157]
Face Recognition (FR) systems can suffer from physical (i.e., print photo) and digital (i.e., DeepFake) attacks.
Previous related work rarely considers both situations at the same time.
We propose a Unified Attack Detection framework based on Vision-Language Models (VLMs).
arXiv Detail & Related papers (2024-01-31T09:38:44Z) - Detecting Adversarial Faces Using Only Real Face Self-Perturbations [36.26178169550577]
Adversarial attacks aim to disturb the functionality of a target system by adding specific noise to the input samples.
Existing defense techniques achieve high accuracy in detecting some specific adversarial faces (adv-faces).
New attack methods, especially GAN-based attacks with completely different noise patterns, circumvent them and reach higher attack success rates.
arXiv Detail & Related papers (2023-04-22T09:55:48Z) - Deep Composite Face Image Attacks: Generation, Vulnerability and Detection [3.6833521970861685]
Face manipulation attacks have drawn the attention of biometric researchers because Face Recognition Systems (FRS) are vulnerable to them.
This paper proposes a novel scheme to generate Composite Face Image Attacks (CFIA) based on facial attributes using Generative Adversarial Networks (GANs).
arXiv Detail & Related papers (2022-11-20T17:31:52Z) - A White-Box Adversarial Attack Against a Digital Twin [0.0]
This paper explores the susceptibility of Digital Twin (DT) to adversarial attacks.
We first formulate a DT of a vehicular system using a deep neural network architecture and then utilize it to launch an adversarial attack.
We attack the DT model by perturbing the input to the trained model and show how easily the model can be broken with white-box attacks.
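The white-box recipe described in this entry, perturbing the input using gradients of a fully known trained model, can be sketched in a few lines. This is a generic FGSM-style step on a toy linear scorer, not the paper's actual vehicular digital-twin network; `w`, `x`, and `eps` are illustrative:

```python
import numpy as np

def fgsm_step(x: np.ndarray, grad: np.ndarray, eps: float = 0.1) -> np.ndarray:
    # Move each input feature one epsilon-step in the direction
    # that increases the attacker's loss (sign of the gradient).
    return x + eps * np.sign(grad)

# Toy white-box "model": score(x) = w @ x, fully known to the attacker.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])

# To drive the score down, take the attacker loss L(x) = -(w @ x),
# whose gradient is dL/dx = -w.
x_adv = fgsm_step(x, -w)

print(w @ x, w @ x_adv)  # the adversarial score is strictly lower
```

With gradient access, a single step like this already degrades the model's output, which is why white-box attacks are described as "easy" in the entry above.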
arXiv Detail & Related papers (2022-10-25T13:41:02Z) - Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z) - 3D Face Anti-spoofing with Factorized Bilinear Coding [35.30886962572515]
We propose a novel anti-spoofing method from the perspective of fine-grained classification.
By extracting discriminative information from the RGB and YCbCr color spaces and fusing the complementary cues, we develop a principled solution to 3D face spoofing detection.
arXiv Detail & Related papers (2020-05-12T03:09:20Z) - Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
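The DFANet idea summarized above, adding dropout inside the surrogate's convolutional layers so each attack iteration effectively queries a different "thinned" model, can be sketched as follows. The feature shape and drop rate here are illustrative, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)

def feature_dropout(feat: np.ndarray, p: float = 0.1) -> np.ndarray:
    """Randomly zero a fraction p of a conv feature map and rescale
    the survivors, so repeated gradient computations see an ensemble
    of thinned surrogate models rather than one fixed network."""
    mask = rng.random(feat.shape) >= p
    return feat * mask / (1.0 - p)

feat = np.ones((8, 16, 16))           # a toy C x H x W activation map
thinned = feature_dropout(feat, 0.1)  # ~10% of units dropped, rest scaled by 1/0.9
```

Averaging gradients over these stochastic surrogates is what increases the diversity, and hence the transferability, of the generated adversarial examples.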
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.