Unified Physical-Digital Face Attack Detection
- URL: http://arxiv.org/abs/2401.17699v1
- Date: Wed, 31 Jan 2024 09:38:44 GMT
- Title: Unified Physical-Digital Face Attack Detection
- Authors: Hao Fang, Ajian Liu, Haocheng Yuan, Junze Zheng, Dingheng Zeng,
Yanhong Liu, Jiankang Deng, Sergio Escalera, Xiaoming Liu, Jun Wan, Zhen Lei
- Abstract summary: Face Recognition (FR) systems can suffer from physical (i.e., print photo) and digital (i.e., DeepFake) attacks.
Previous related work rarely considers both situations at the same time.
We propose a Unified Attack Detection framework based on Vision-Language Models (VLMs).
- Score: 66.14645299430157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face Recognition (FR) systems can suffer from physical (i.e., print photo)
and digital (i.e., DeepFake) attacks. However, previous related work rarely
considers both situations at the same time. This implies the deployment of
multiple models and thus a heavier computational burden. This lack of an
integrated model stems from two factors: (1) the absence of a dataset that
includes both physical and digital attacks with ID consistency, i.e., the same
ID covers the real face and all attack types; (2) the large intra-class
variance between the two attack types, which makes it difficult to learn a
compact feature space that detects both simultaneously. To address these
issues, we collect a Unified physical-digital Attack dataset, called
UniAttackData. The dataset covers $1,800$ subjects, each with 2 physical and
12 digital attack types, resulting in a total of 29,706
videos. Then, we propose a Unified Attack Detection framework based on
Vision-Language Models (VLMs), namely UniAttackDetection, which includes three
main modules: the Teacher-Student Prompts (TSP) module, focused on acquiring
unified and specific knowledge respectively; the Unified Knowledge Mining (UKM)
module, designed to capture a comprehensive feature space; and the Sample-Level
Prompt Interaction (SLPI) module, aimed at grasping sample-level semantics.
These three modules seamlessly form a robust unified attack detection
framework. Extensive experiments on UniAttackData and three other datasets
demonstrate the superiority of our approach for unified face attack detection.
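The prompt-based mechanism underlying VLM detectors like UniAttackDetection can be illustrated with a toy sketch: class prompts are embedded into the same feature space as images, and a sample receives the label whose prompt feature it most resembles under cosine similarity. Everything below (the 3-dimensional features, the label set, the `classify` helper) is a hypothetical illustration of this general idea, not the paper's actual TSP/UKM/SLPI modules.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Hypothetical text-prompt features for each class, assumed to live in the
# same (here, toy 3-D) embedding space as the image features.
prompt_features = {
    "live face": [0.9, 0.1, 0.0],
    "physical attack": [0.1, 0.9, 0.2],
    "digital attack": [0.0, 0.2, 0.9],
}

def classify(image_feature):
    """Assign the label whose prompt feature is most similar to the image."""
    scores = {label: cosine(image_feature, f)
              for label, f in prompt_features.items()}
    return max(scores, key=scores.get)
```

In a real VLM pipeline the prompt features would come from a text encoder (with learnable prompt tokens) and the image features from an image encoder; the nearest-prompt decision rule is the part this sketch preserves.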
Related papers
- La-SoftMoE CLIP for Unified Physical-Digital Face Attack Detection [27.020392407198948]
Facial recognition systems are susceptible to both physical and digital attacks.
We propose a novel approach that uses the sparse model to handle sparse data.
We introduce a flexible self-adapting weighting mechanism, enabling the model to better fit the data and adapt.
arXiv Detail & Related papers (2024-08-23T02:12:13Z)
- Joint Physical-Digital Facial Attack Detection Via Simulating Spoofing Clues [17.132170955620047]
We propose an innovative approach to jointly detect physical and digital attacks within a single model.
Our approach mainly contains two types of data augmentation, which we call Simulated Physical Spoofing Clues augmentation (SPSC) and Simulated Digital Spoofing Clues augmentation (SDSC).
Our method won first place in "Unified Physical-Digital Face Attack Detection" of the 5th Face Anti-spoofing Challenge@CVPR2024.
arXiv Detail & Related papers (2024-04-12T13:01:22Z)
- Unified Physical-Digital Attack Detection Challenge [70.67222784932528]
Face Anti-Spoofing (FAS) is crucial to safeguard Face Recognition (FR) Systems.
UniAttackData is the largest public dataset for Unified Attack Detection.
We organized a Unified Physical-Digital Face Attack Detection Challenge to boost the research in Unified Attack Detections.
arXiv Detail & Related papers (2024-04-09T11:00:11Z)
- LESSON: Multi-Label Adversarial False Data Injection Attack for Deep Learning Locational Detection [15.491101949025651]
This paper proposes a general multi-label adversarial attack framework, namely muLti-labEl adverSarial falSe data injectiON attack (LESSON).
Four typical LESSON attacks based on the proposed framework and two dimensions of attack objectives are examined.
arXiv Detail & Related papers (2024-01-29T09:44:59Z)
- Susceptibility of Adversarial Attack on Medical Image Segmentation Models [0.0]
We investigate the effect of adversarial attacks on segmentation models trained on MRI datasets.
We find that medical imaging segmentation models are indeed vulnerable to adversarial attacks.
We show that using a different loss function than the one used for training yields higher adversarial attack success.
arXiv Detail & Related papers (2024-01-20T12:52:20Z)
- Your Attack Is Too DUMB: Formalizing Attacker Scenarios for Adversarial Transferability [17.899587145780817]
Evasion attacks are a threat to machine learning models, where adversaries attempt to affect classifiers by injecting malicious samples.
We propose the DUMB attacker model, which allows analyzing if evasion attacks fail to transfer when the training conditions of surrogate and victim models differ.
Our analysis, which generated 13K tests over 14 distinct attacks, led to numerous novel findings in the scope of transferable attacks with surrogate models.
arXiv Detail & Related papers (2023-06-27T10:21:27Z)
- Semantic Image Attack for Visual Model Diagnosis [80.36063332820568]
In practice, metric analysis on a specific train and test dataset does not guarantee reliable or fair ML models.
This paper proposes Semantic Image Attack (SIA), a method based on the adversarial attack that provides semantic adversarial images.
arXiv Detail & Related papers (2023-03-23T03:13:04Z)
- Can Adversarial Examples Be Parsed to Reveal Victim Model Information? [62.814751479749695]
In this work, we ask whether it is possible to infer data-agnostic victim model (VM) information from data-specific adversarial instances.
We collect a dataset of adversarial attacks across 7 attack types generated from 135 victim models.
We show that a simple, supervised model parsing network (MPN) is able to infer VM attributes from unseen adversarial attacks.
arXiv Detail & Related papers (2023-03-13T21:21:49Z)
- Federated Zero-Shot Learning for Visual Recognition [55.65879596326147]
We propose a novel Federated Zero-Shot Learning (FedZSL) framework.
FedZSL learns a central model from the decentralized data residing on edge devices.
The effectiveness and robustness of FedZSL are demonstrated by extensive experiments conducted on three zero-shot benchmark datasets.
arXiv Detail & Related papers (2022-09-05T14:49:34Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
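Several entries above rely on data augmentation that injects simulated spoofing clues into genuine images (the SPSC/SDSC idea). The sketch below is a hypothetical illustration of that general strategy on a grayscale image stored as nested lists; the specific transforms chosen here (a box blur plus contrast loss for a "print" clue, blocky re-sampling for a "digital" clue) are assumptions for illustration, not the published SPSC/SDSC recipes.

```python
def simulate_print_clue(img):
    """Illustrative physical-spoof clue: 3x3 box blur plus contrast
    compression, loosely mimicking a re-captured print photo
    (an assumption, not the paper's exact SPSC recipe)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # 3x3 neighborhood with edge clamping
            vals = [img[min(max(y + dy, 0), h - 1)][min(max(x + dx, 0), w - 1)]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            blurred = sum(vals) / 9.0
            out[y][x] = 0.5 * blurred + 0.25  # compress contrast toward mid-grey
    return out

def simulate_digital_clue(img, block=2):
    """Illustrative digital-forgery clue: blocky down/up-sampling,
    loosely imitating generation/compression artifacts."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(0, h, block):
        for x in range(0, w, block):
            ys = range(y, min(y + block, h))
            xs = range(x, min(x + block, w))
            mean = sum(img[yy][xx] for yy in ys for xx in xs) / (len(ys) * len(xs))
            for yy in ys:
                for xx in xs:
                    out[yy][xx] = mean
    return out
```

Training a single model on live faces plus both kinds of synthesized clues is the mechanism such augmentation-based approaches use to cover physical and digital attacks without a second detector.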
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.