Learning Polysemantic Spoof Trace: A Multi-Modal Disentanglement Network
for Face Anti-spoofing
- URL: http://arxiv.org/abs/2212.03943v1
- Date: Wed, 7 Dec 2022 20:23:51 GMT
- Authors: Kaicheng Li, Hongyu Yang, Binghui Chen, Pengyu Li, Biao Wang, Di Huang
- Abstract summary: This paper presents a multi-modal disentanglement model which targetedly learns polysemantic spoof traces for more accurate and robust generic attack detection.
In particular, based on the adversarial learning mechanism, a two-stream disentangling network is designed to estimate spoof patterns from the RGB and depth inputs, respectively.
- Score: 34.44061534596512
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Along with the widespread use of face recognition systems, their
vulnerability has become highlighted. While existing face anti-spoofing methods
can be generalized between attack types, generic solutions are still
challenging due to the diversity of spoof characteristics. Recently, the spoof
trace disentanglement framework has shown great potential for coping with both
seen and unseen spoof scenarios, but the performance is largely restricted by
the single-modal input. This paper focuses on this issue and presents a
multi-modal disentanglement model which targetedly learns polysemantic spoof
traces for more accurate and robust generic attack detection. In particular,
based on the adversarial learning mechanism, a two-stream disentangling network
is designed to estimate spoof patterns from the RGB and depth inputs,
respectively. In this case, it captures complementary spoofing clues inhering
in different attacks. Furthermore, a fusion module is exploited, which
recalibrates both representations at multiple stages to promote the
disentanglement in each individual modality. It then performs cross-modality
aggregation to deliver a more comprehensive spoof trace representation for
prediction. Extensive evaluations are conducted on multiple benchmarks,
demonstrating that learning polysemantic spoof traces favorably contributes to
anti-spoofing with more perceptible and interpretable results.
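As a rough illustration of the architecture the abstract describes, the following NumPy sketch models the two-stream design: each modality passes through its own stages, a fusion step recalibrates both representations at every stage, and a final cross-modality aggregation produces the trace used for prediction. The function names, the sigmoid gating scheme, and the linear stand-ins for convolutional stages are assumptions for illustration only (the adversarial training is also omitted); this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def stage(x, w):
    # Stand-in for a convolutional stage: linear map + ReLU on
    # flattened features (the real model uses stacked convolutions).
    return np.maximum(x @ w, 0.0)

def recalibrate(f_rgb, f_depth):
    # Hypothetical fusion step: sigmoid gates computed from pooled
    # cross-modal statistics re-weight each modality, so one stream
    # can promote disentanglement in the other.
    stats = np.concatenate([f_rgb.mean(axis=0), f_depth.mean(axis=0)])
    gate = 1.0 / (1.0 + np.exp(-stats))
    d = f_rgb.shape[1]
    return f_rgb * gate[:d], f_depth * gate[d:]

def two_stream_forward(x_rgb, x_depth, params):
    f_r, f_d = x_rgb, x_depth
    for w_r, w_d in params["stages"]:            # two parallel streams
        f_r, f_d = stage(f_r, w_r), stage(f_d, w_d)
        f_r, f_d = recalibrate(f_r, f_d)         # multi-stage fusion
    fused = np.concatenate([f_r, f_d], axis=1)   # cross-modality aggregation
    trace = fused @ params["head"]               # spoof-trace estimate
    score = 1.0 / (1.0 + np.exp(-trace.mean(axis=1)))  # spoofness score
    return trace, score

# Toy forward pass on random "features" (batch of 4, width 8).
d = 8
params = {
    "stages": [(rng.standard_normal((d, d)) * 0.1,
                rng.standard_normal((d, d)) * 0.1) for _ in range(2)],
    "head": rng.standard_normal((2 * d, d)) * 0.1,
}
trace, score = two_stream_forward(rng.standard_normal((4, d)),
                                  rng.standard_normal((4, d)), params)
```

The concatenation-then-head step stands in for the aggregation that delivers the comprehensive spoof-trace representation; a real model would operate on spatial feature maps rather than flat vectors.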
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defense mainly focuses on the known attacks, but the adversarial robustness to the unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- SHIELD: An Evaluation Benchmark for Face Spoofing and Forgery Detection
with Multimodal Large Language Models [63.946809247201905]
We introduce a new benchmark, namely SHIELD, to evaluate the ability of MLLMs on face spoofing and forgery detection.
We design true/false and multiple-choice questions to evaluate multimodal face data in these two face security tasks.
The results indicate that MLLMs hold substantial potential in the face security domain.
arXiv Detail & Related papers (2024-02-06T17:31:36Z)
- Hyperbolic Face Anti-Spoofing [21.981129022417306]
We propose to learn richer hierarchical and discriminative spoofing cues in hyperbolic space.
For unimodal FAS learning, the feature embeddings are projected into the Poincaré ball, and then the hyperbolic binary logistic regression layer is cascaded for classification.
To alleviate the vanishing gradient problem in hyperbolic space, a new feature clipping method is proposed to enhance the training stability of hyperbolic models.
arXiv Detail & Related papers (2023-08-17T17:18:21Z)
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, which uses digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection(VLFFD), which uses fine-grained sentence-level prompts as the annotation.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing
Models [13.057451851710924]
Face anti-spoofing aims to discriminate the spoofing face images (e.g., printed photos) from live ones.
Previous works conducted adversarial attack methods to evaluate the face anti-spoofing performance.
We propose a novel framework to expose the fine-grained adversarial vulnerability of the face anti-spoofing models.
arXiv Detail & Related papers (2022-05-30T04:56:33Z)
- Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth
Uncertainty Learning [54.15303628138665]
Face anti-spoofing (FAS) plays a vital role in preventing face recognition systems from presentation attacks.
Existing face anti-spoofing datasets lack diversity due to insufficient subject identities and insignificant variance.
We propose a Dual Spoof Disentanglement Generation framework to tackle this challenge by "anti-spoofing via generation".
arXiv Detail & Related papers (2021-12-01T15:36:59Z)
- Learning to Separate Clusters of Adversarial Representations for Robust
Adversarial Detection [50.03939695025513]
We propose a new probabilistic adversarial detector motivated by the recently introduced notion of non-robust features.
In this paper, we consider non-robust features as a common property of adversarial examples, and we deduce that it is possible to find a cluster in representation space corresponding to this property.
This idea leads us to estimate the probability distribution of adversarial representations in a separate cluster, and to leverage that distribution for a likelihood-based adversarial detector.
arXiv Detail & Related papers (2020-12-07T07:21:18Z)
- On Disentangling Spoof Trace for Generic Face Anti-Spoofing [24.75975874643976]
The key to face anti-spoofing lies in a subtle image pattern, termed the "spoof trace".
This work designs a novel adversarial learning framework to disentangle the spoof traces from input faces.
arXiv Detail & Related papers (2020-07-17T23:14:16Z)
- Learning Generalized Spoof Cues for Face Anti-spoofing [43.32561471100592]
We propose a residual-learning framework to learn the discriminative live-spoof differences which are defined as the spoof cues.
The generator minimizes the spoof cues of live samples while imposes no explicit constraint on those of spoof samples to generalize well to unseen attacks.
We conduct extensive experiments and the experimental results show the proposed method consistently outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2020-05-08T09:22:13Z)
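The residual spoof-cue objective in the last entry, minimizing the cues of live samples while leaving spoof cues unconstrained, can be sketched as a simple loss term. The function name and the L1 penalty are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def spoof_cue_loss(cues, labels):
    # Push the estimated cue maps of live samples (label 0) toward
    # zero; spoof cues (label 1) are left unconstrained, so the model
    # does not overfit to the spoof types seen during training.
    live = labels == 0
    if not np.any(live):
        return 0.0
    return float(np.abs(cues[live]).mean())

# The live cue map contributes; the spoof sample is ignored by the loss.
cues = np.array([[0.5, -0.5], [3.0, 3.0]])
labels = np.array([0, 1])
loss = spoof_cue_loss(cues, labels)
```

Because only live samples enter the penalty, the generator is free to produce arbitrarily strong cues for spoof inputs, which is what lets the learned cue generalize to unseen attacks.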
This list is automatically generated from the titles and abstracts of the papers in this site.