Asymmetric Modality Translation For Face Presentation Attack Detection
- URL: http://arxiv.org/abs/2110.09108v2
- Date: Wed, 20 Oct 2021 11:50:16 GMT
- Title: Asymmetric Modality Translation For Face Presentation Attack Detection
- Authors: Zhi Li, Haoliang Li, Xin Luo, Yongjian Hu, Kwok-Yan Lam, Alex C. Kot
- Abstract summary: Face presentation attack detection (PAD) is an essential measure to protect face recognition systems from being spoofed by malicious users.
We propose a novel framework based on asymmetric modality translation for PAD in bi-modality scenarios.
Our method achieves state-of-the-art performance under different evaluation protocols.
- Score: 55.09300842243827
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face presentation attack detection (PAD) is an essential measure to protect
face recognition systems from being spoofed by malicious users and has
attracted great attention from both academia and industry. Although most
existing methods achieve the desired performance to some extent, the
generalization of face presentation attack detection under cross-domain
settings (e.g., unseen attacks and varying illumination) remains an open
problem. In this paper, we propose a novel framework based on asymmetric
modality translation for face presentation attack detection in bi-modality
scenarios. Under the framework, we establish connections between two modality
images of genuine faces. Specifically, a novel modality fusion scheme is
presented in which the image of one modality is translated to the other through
an asymmetric modality translator and then fused with its corresponding paired
image. The fusion result is fed as the input to a discriminator for inference.
The training of the translator is supervised by an asymmetric modality
translation loss. Besides, an illumination normalization module based on
Pattern of Local Gravitational Force (PLGF) representation is used to reduce
the impact of illumination variation. We conduct extensive experiments on three
public datasets, which validate that our method is effective in detecting
various types of attacks and achieves state-of-the-art performance under
different evaluation protocols.
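As a concrete reading of the framework described above, the following minimal PyTorch sketch translates an RGB image into the paired modality, fuses the translation with the real paired image by channel concatenation, and feeds the fusion to a binary discriminator. The network shapes, the RGB-to-IR translation direction, the concatenation fusion, and the loss form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Translator(nn.Module):
    """Hypothetical asymmetric modality translator: maps a 3-channel
    RGB image to a 1-channel IR-like image (direction is an assumption)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb):
        return self.net(rgb)

class Discriminator(nn.Module):
    """Binary classifier over the fused (translated + paired) input."""
    def __init__(self, in_ch=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # genuine vs. attack

    def forward(self, fused):
        f = self.features(fused).flatten(1)
        return self.head(f)

def forward_pad(rgb, ir, translator, discriminator):
    # Translate RGB into the IR modality, then fuse with the real
    # paired IR image by channel concatenation (fusion scheme assumed).
    fake_ir = translator(rgb)
    fused = torch.cat([fake_ir, ir], dim=1)
    return discriminator(fused)

# Asymmetric translation loss (sketch): for genuine faces the translated
# image should match the paired modality; how the asymmetry is enforced
# for attack samples during training is omitted here.
def translation_loss(fake_ir, real_ir, is_genuine):
    per_pixel = (fake_ir - real_ir).pow(2).flatten(1).mean(dim=1)
    return (per_pixel * is_genuine.float()).mean()
```

A batch would then be scored as `logits = forward_pad(rgb, ir, translator, discriminator)`, with the translator trained under `translation_loss` alongside the discriminator's classification loss.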
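The PLGF-based illumination normalization module can be sketched in the same spirit. The version below follows the common Pattern of Local Gravitational Force formulation, convolving the image with gravity-law masks and dividing the force magnitude by local intensity so that a multiplicative illumination factor cancels; the paper's exact mask size and normalization may differ.

```python
import numpy as np
from scipy.signal import convolve2d

def plgf_masks(radius=2):
    """Gravity-law masks: each neighbor pulls with force ~ 1/d^2,
    decomposed into x and y components (common PLGF formulation;
    the paper's exact mask size/normalization may differ)."""
    size = 2 * radius + 1
    mx = np.zeros((size, size))
    my = np.zeros((size, size))
    for i in range(-radius, radius + 1):
        for j in range(-radius, radius + 1):
            if i == 0 and j == 0:
                continue
            d3 = (i * i + j * j) ** 1.5
            mx[i + radius, j + radius] = j / d3  # x-component
            my[i + radius, j + radius] = i / d3  # y-component
    return mx, my

def plgf(image):
    """Illumination-normalized PLGF magnitude of a grayscale image.

    Dividing the force magnitude by the local intensity cancels a
    multiplicative illumination factor, since both the numerator and
    the denominator scale with it.
    """
    img = image.astype(np.float64) + 1e-6  # avoid division by zero
    mx, my = plgf_masks()
    fx = convolve2d(img, mx, mode="same", boundary="symm")
    fy = convolve2d(img, my, mode="same", boundary="symm")
    return np.arctan(np.sqrt(fx ** 2 + fy ** 2) / img)
```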
Related papers
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
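Reading that description literally, a minimal sketch of the check might look like the following; `vlm`, `t2i`, and `image_encoder` are placeholder objects, and the cosine-similarity decision rule and threshold are assumptions inferred from the abstract rather than the paper's actual interface.

```python
import torch
# Placeholder components: any captioning VLM, Text-to-Image model, and
# image encoder can be plugged in; the names below are hypothetical.

@torch.no_grad()
def mirrorcheck_score(image, vlm, t2i, image_encoder):
    """Regenerate the input from the VLM's caption and compare embeddings;
    low similarity suggests an adversarial input (decision rule assumed)."""
    caption = vlm.caption(image)          # caption produced by the target VLM
    regenerated = t2i.generate(caption)   # Text-to-Image reconstruction
    e1 = image_encoder(image)
    e2 = image_encoder(regenerated)
    return torch.cosine_similarity(e1, e2, dim=-1)

def is_adversarial(score, threshold=0.7):
    # Threshold is illustrative, not taken from the paper.
    return score < threshold
```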
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- Machine Translation Models Stand Strong in the Face of Adversarial Attacks [2.6862667248315386]
Our research focuses on the impact of adversarial attacks on sequence-to-sequence (seq2seq) models, specifically machine translation models.
We introduce algorithms that incorporate basic text perturbations and more advanced strategies, such as the gradient-based attack.
arXiv Detail & Related papers (2023-09-10T11:22:59Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Uncertainty-based Detection of Adversarial Attacks in Semantic Segmentation [16.109860499330562]
We introduce an uncertainty-based approach for the detection of adversarial attacks in semantic segmentation.
We demonstrate the ability of our approach to detect perturbed images across multiple types of adversarial attacks.
arXiv Detail & Related papers (2023-05-22T08:36:35Z)
- Semantic Image Attack for Visual Model Diagnosis [80.36063332820568]
In practice, metric analysis on a specific train and test dataset does not guarantee reliable or fair ML models.
This paper proposes Semantic Image Attack (SIA), a method based on the adversarial attack that provides semantic adversarial images.
arXiv Detail & Related papers (2023-03-23T03:13:04Z)
- Learning Polysemantic Spoof Trace: A Multi-Modal Disentanglement Network for Face Anti-spoofing [34.44061534596512]
This paper presents a multi-modal disentanglement model that learns polysemantic spoof traces in a targeted manner for more accurate and robust generic attack detection.
In particular, based on the adversarial learning mechanism, a two-stream disentangling network is designed to estimate spoof patterns from the RGB and depth inputs, respectively.
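As a rough illustration of such a two-stream design (the encoders below are placeholders, and the adversarial training that supervises the disentanglement is omitted):

```python
import torch.nn as nn

class TwoStreamDisentangler(nn.Module):
    """Illustrative two-stream disentangling network: separate encoders
    estimate per-pixel spoof-trace maps from RGB and depth inputs
    (architecture details are assumptions; the paper's design differs)."""
    def __init__(self):
        super().__init__()
        def stream(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),  # spoof-trace map
            )
        self.rgb_stream = stream(3)
        self.depth_stream = stream(1)

    def forward(self, rgb, depth):
        return self.rgb_stream(rgb), self.depth_stream(depth)
```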
arXiv Detail & Related papers (2022-12-07T20:23:51Z)
- Attacking Face Recognition with T-shirts: Database, Vulnerability Assessment and Detection [0.0]
We propose a new T-shirt Face Presentation Attack database of 1,608 T-shirt attacks using 100 unique presentation attack instruments.
We show that this type of attack can compromise the security of face recognition systems and that some state-of-the-art attack detection mechanisms fail to robustly generalize to the new attacks.
arXiv Detail & Related papers (2022-11-14T14:11:23Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation framework.
arXiv Detail & Related papers (2021-10-25T02:51:05Z)