Disentangled Representation with Dual-stage Feature Learning for Face
Anti-spoofing
- URL: http://arxiv.org/abs/2110.09157v1
- Date: Mon, 18 Oct 2021 10:22:52 GMT
- Title: Disentangled Representation with Dual-stage Feature Learning for Face
Anti-spoofing
- Authors: Yu-Chun Wang, Chien-Yi Wang, Shang-Hong Lai
- Abstract summary: It is essential to learn more generalized and discriminative features to prevent overfitting to pre-defined spoof attack types.
This paper proposes a novel dual-stage disentangled representation learning method that can efficiently untangle spoof-related features from irrelevant ones.
- Score: 18.545438302664756
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As face recognition is widely used in diverse security-critical applications,
the study of face anti-spoofing (FAS) has attracted more and more attention.
Several FAS methods have achieved promising performance when the attack types in
the testing data match those in the training data, but performance degrades
significantly on unseen attack types. It is essential to learn more
generalized and discriminative features to prevent overfitting to pre-defined
spoof attack types. This paper proposes a novel dual-stage disentangled
representation learning method that can efficiently untangle spoof-related
features from irrelevant ones. Unlike previous FAS disentanglement works with
one-stage architecture, we found that the dual-stage training design can
improve the training stability and effectively encode the features to detect
unseen attack types. Our experiments show that the proposed method achieves
higher accuracy than state-of-the-art methods on several cross-type FAS
benchmarks.
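The abstract gives no implementation details, but the core idea — splitting features into spoof-related and spoof-irrelevant parts and training in two stages — can be illustrated with a minimal, hypothetical PyTorch sketch. The backbone, loss weights, and the stage-2 policy of refining only the spoof branch below are assumptions, not the authors' actual design.

```python
# Hypothetical sketch, not the authors' architecture: backbone, loss weights,
# and the stage-2 freezing policy are all assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledEncoder(nn.Module):
    """Splits an image into spoof-related and spoof-irrelevant features."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_spoof = nn.Linear(64, feat_dim)    # spoof-related branch
        self.to_content = nn.Linear(64, feat_dim)  # spoof-irrelevant branch

    def forward(self, x):
        h = self.backbone(x)
        return self.to_spoof(h), self.to_content(h)

encoder = DisentangledEncoder()
decoder = nn.Linear(256, 3 * 64 * 64)  # toy reconstruction head
classifier = nn.Linear(128, 2)         # live vs. spoof

def stage1_loss(x, y):
    # Stage 1: reconstruction forces both parts to jointly retain the image,
    # while the classification term pushes spoof evidence into z_spoof only.
    z_spoof, z_content = encoder(x)
    recon = decoder(torch.cat([z_spoof, z_content], dim=1))
    return F.mse_loss(recon, x.flatten(1)) + F.cross_entropy(classifier(z_spoof), y)

def stage2_loss(x, y):
    # Stage 2: refine the spoof branch and classifier on the already
    # disentangled features (optimize only to_spoof/classifier parameters).
    z_spoof, _ = encoder(x)
    return F.cross_entropy(classifier(z_spoof), y)

x, y = torch.randn(8, 3, 64, 64), torch.randint(0, 2, (8,))  # toy batch
print(stage1_loss(x, y).item(), stage2_loss(x, y).item())
```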
Related papers
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and not limited to forgery-specific artifacts, thus having stronger generalization.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Hyp-OC: Hyperbolic One Class Classification for Face Anti-Spoofing [30.6907043124415]
Face recognition systems are vulnerable to spoofing attacks and can easily be circumvented.
Most prior research in face anti-spoofing (FAS) approaches it as a two-class classification task.
We reformulate the face anti-spoofing task from a one-class perspective and propose a novel hyperbolic one-class classification framework (a sketch of the scoring idea follows below).
arXiv Detail & Related papers (2024-04-22T17:59:18Z)
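As a rough illustration of one-class scoring in hyperbolic space — not the paper's actual formulation; the prototype, embedding dimension, and threshold below are invented — the spoof score can be the Poincaré distance of an embedding from a learned bona fide prototype:

```python
# Invented illustration of hyperbolic one-class scoring: distance in the
# Poincare ball from a bona fide prototype serves as the spoof score.
import torch

def poincare_distance(u, v, eps=1e-5):
    # Geodesic distance in the Poincare ball model.
    uu = (u * u).sum(-1).clamp(max=1 - eps)
    vv = (v * v).sum(-1).clamp(max=1 - eps)
    uv = ((u - v) ** 2).sum(-1)
    return torch.acosh(1 + 2 * uv / ((1 - uu) * (1 - vv)) + eps)

prototype = torch.zeros(1, 16)        # bona fide prototype (here: the origin)
live_emb = 0.1 * torch.randn(4, 16)   # live embeddings cluster near it
spoof_emb = 0.9 * torch.nn.functional.normalize(torch.randn(4, 16), dim=-1)

threshold = 1.5  # would be tuned on validation data in practice
print(poincare_distance(live_emb, prototype) < threshold)   # mostly True
print(poincare_distance(spoof_emb, prototype) < threshold)  # False (far away)
```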
- Contactless Fingerprint Biometric Anti-Spoofing: An Unsupervised Deep Learning Approach [0.0]
We introduce an innovative anti-spoofing approach that combines an unsupervised autoencoder with a convolutional block attention module.
The scheme achieves an average BPCER of 0.96% and an APCER of 1.6% on presentation attacks involving various types of spoofed samples (see the reconstruction-error sketch below).
arXiv Detail & Related papers (2023-11-07T17:19:59Z)
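A minimal sketch of the general recipe behind such unsupervised approaches, assuming reconstruction error as the spoof score: train an autoencoder on bona fide samples only, then flag inputs that reconstruct poorly. The paper's CBAM attention module and actual network are omitted, and all sizes here are arbitrary.

```python
# Unsupervised anti-spoofing via reconstruction error (hypothetical sketch).
import torch
import torch.nn as nn

autoencoder = nn.Sequential(  # toy encoder-decoder for 32x32 grayscale crops
    nn.Flatten(),
    nn.Linear(32 * 32, 128), nn.ReLU(),
    nn.Linear(128, 32 * 32), nn.Sigmoid(),
)

def spoof_score(x):
    # Per-sample mean squared reconstruction error; higher = more suspicious.
    recon = autoencoder(x).view_as(x)
    return ((recon - x) ** 2).flatten(1).mean(-1)

# Train on bona fide samples only, then threshold the score at test time;
# the threshold choice trades APCER against BPCER.
opt = torch.optim.Adam(autoencoder.parameters(), lr=1e-3)
bona_fide = torch.rand(16, 1, 32, 32)
opt.zero_grad()
spoof_score(bona_fide).mean().backward()
opt.step()
```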
- CrossDF: Improving Cross-Domain Deepfake Detection with Deep Information Decomposition [53.860796916196634]
We propose a Deep Information Decomposition (DID) framework to enhance the performance of Cross-dataset Deepfake Detection (CrossDF).
Unlike most existing deepfake detection methods, our framework prioritizes high-level semantic features over specific visual artifacts.
It adaptively decomposes facial features into deepfake-related and irrelevant information, only using the intrinsic deepfake-related information for real/fake discrimination.
arXiv Detail & Related papers (2023-09-30T12:30:25Z)
- Hyperbolic Face Anti-Spoofing [21.981129022417306]
We propose to learn richer hierarchical and discriminative spoofing cues in hyperbolic space.
For unimodal FAS learning, the feature embeddings are projected into the Poincaré ball, and a hyperbolic binary logistic regression layer is cascaded for classification.
To alleviate the vanishing gradient problem in hyperbolic space, a new feature clipping method is proposed to enhance the training stability of hyperbolic models (both steps are sketched below).
arXiv Detail & Related papers (2023-08-17T17:18:21Z)
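The two mechanisms named here — projection into the Poincaré ball and feature clipping — can be sketched directly. This is a hypothetical illustration: the clip radius and dimensions are invented, and the paper's hyperbolic logistic-regression layer is only noted in a comment.

```python
# Feature clipping plus exponential-map projection into the Poincare ball.
import torch

def clip_features(x, r=1.0):
    # Bound the Euclidean norm so projected points stay away from the ball
    # boundary, where hyperbolic gradients vanish.
    norm = x.norm(dim=-1, keepdim=True).clamp(min=1e-7)
    return torch.where(norm > r, x * (r / norm), x)

def exp_map_origin(v):
    # Exponential map at the origin of the Poincare ball (curvature -1):
    # exp_0(v) = tanh(||v||) * v / ||v||.
    norm = v.norm(dim=-1, keepdim=True).clamp(min=1e-7)
    return torch.tanh(norm) * v / norm

features = torch.randn(8, 32) * 3.0    # raw Euclidean backbone embeddings
ball_points = exp_map_origin(clip_features(features))
print(ball_points.norm(dim=-1).max())  # strictly below 1, inside the ball
# A hyperbolic binary logistic-regression layer would then classify these.
```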
- Towards General Visual-Linguistic Face Forgery Detection [95.73987327101143]
Deepfakes are realistic face manipulations that can pose serious threats to security, privacy, and trust.
Existing methods mostly treat this task as binary classification, using digital labels or mask signals to train the detection model.
We propose a novel paradigm named Visual-Linguistic Face Forgery Detection (VLFFD), which uses fine-grained sentence-level prompts as annotations.
arXiv Detail & Related papers (2023-07-31T10:22:33Z)
- Federated Test-Time Adaptive Face Presentation Attack Detection with Dual-Phase Privacy Preservation [100.69458267888962]
Face presentation attack detection (fPAD) plays a critical role in the modern face recognition pipeline.
Due to legal and privacy issues, training data (real face images and spoof images) are not allowed to be directly shared between different data sources.
We propose a federated face presentation attack detection framework with test-time adaptation and dual-phase privacy preservation (a generic federated-averaging sketch follows below).
arXiv Detail & Related papers (2021-10-25T02:51:05Z)
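The federated constraint described above — each data source trains on its own real/spoof images and shares only model weights, never the images — can be illustrated with a generic FedAvg-style sketch. This shows federated averaging in general, not this paper's dual-phase framework; the toy head and sizes are invented.

```python
# Generic federated averaging: clients share weights, never raw face data.
import copy
import torch
import torch.nn as nn

def local_update(model, images, labels, lr=1e-2):
    # One local training pass; raw data never leaves the client.
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss = nn.functional.cross_entropy(model(images), labels)
    opt.zero_grad(); loss.backward(); opt.step()
    return model.state_dict()

def fed_avg(states):
    # Server aggregates by parameter-wise averaging of client weights.
    avg = copy.deepcopy(states[0])
    for k in avg:
        avg[k] = torch.stack([s[k].float() for s in states]).mean(dim=0)
    return avg

server = nn.Linear(128, 2)  # toy fPAD head: live vs. spoof
clients = [(torch.randn(16, 128), torch.randint(0, 2, (16,))) for _ in range(3)]
states = [local_update(server, x, y) for x, y in clients]
server.load_state_dict(fed_avg(states))
```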
- Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features across attacking strengths.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
- FADER: Fast Adversarial Example Rejection [19.305796826768425]
Recent defenses have been shown to improve adversarial robustness by detecting anomalous deviations from legitimate training samples at different layer representations.
We introduce FADER, a novel technique for speeding up detection-based methods.
Our experiments show up to a 73x reduction in prototypes compared to the analyzed detectors on the MNIST dataset, and up to 50x on CIFAR10 (a prototype-rejection sketch follows below).
arXiv Detail & Related papers (2020-10-18T22:00:11Z)
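FADER's speed-up comes from replacing many reference samples with a few prototypes per class. A hypothetical sketch of that idea, using plain k-means and an invented threshold rather than the paper's exact detector:

```python
# Prototype-based rejection in the spirit of FADER (assumptions throughout).
import torch

def kmeans_prototypes(feats, k=4, iters=10):
    # Plain k-means: compresses many reference samples into k prototypes.
    protos = feats[torch.randperm(len(feats))[:k]]
    for _ in range(iters):
        assign = torch.cdist(feats, protos).argmin(dim=1)
        for j in range(k):
            members = feats[assign == j]
            if len(members):
                protos[j] = members.mean(dim=0)
    return protos

train_feats = torch.randn(1000, 64)      # layer activations for one class
protos = kmeans_prototypes(train_feats)  # 1000 comparisons shrink to 4

def rejects(x, threshold=12.0):
    # Flag inputs whose nearest prototype is still far away.
    return torch.cdist(x, protos).min(dim=1).values > threshold

print(rejects(torch.randn(5, 64)))
```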
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method applied in convolutional layers, which can increase the diversity of surrogate models (sketched below).
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
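The DFANet idea — keeping dropout active in the surrogate's convolutional layers so each attack iteration sees a different sub-model — can be sketched as follows. The toy surrogate, cosine-similarity objective, and step size are assumptions for illustration, not the paper's setup.

```python
# Dropout kept active while computing attack gradients, so every iteration
# sees a slightly different surrogate and the example transfers better.
import torch
import torch.nn as nn

surrogate = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Dropout2d(p=0.3),  # dropout between convolutional layers
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 512),
)

def attack_step(x, target_emb, eps=2 / 255):
    surrogate.train()  # the key point: dropout stays ON during the attack
    x = x.clone().requires_grad_(True)
    # Impersonation objective: pull the embedding toward the target identity.
    loss = -nn.functional.cosine_similarity(surrogate(x), target_emb).mean()
    loss.backward()
    return (x - eps * x.grad.sign()).clamp(0, 1).detach()

x = torch.rand(1, 3, 112, 112)  # attacker's face image
target = torch.randn(1, 512)    # target identity's embedding
for _ in range(5):              # each step samples a fresh dropout mask
    x = attack_step(x, target)
```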