Face Presentation Attack Detection by Excavating Causal Clues and
Adapting Embedding Statistics
- URL: http://arxiv.org/abs/2308.14551v1
- Date: Mon, 28 Aug 2023 13:11:05 GMT
- Title: Face Presentation Attack Detection by Excavating Causal Clues and
Adapting Embedding Statistics
- Authors: Meiling Fang and Naser Damer
- Abstract summary: Face presentation attack detection (PAD) uses domain adaptation (DA) and domain generalization (DG) techniques to address performance degradation on unknown domains.
Most DG-based PAD solutions rely on a priori knowledge, i.e., known domain labels.
This paper proposes to model face PAD as a compound DG task from a causal perspective, linking it to model optimization.
- Score: 9.612556145185431
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent face presentation attack detection (PAD) leverages domain adaptation
(DA) and domain generalization (DG) techniques to address performance
degradation on unknown domains. However, DA-based PAD methods require access to
unlabeled target data, while most DG-based PAD solutions rely on a priori
knowledge, i.e., known domain labels. Moreover, most DA-/DG-based methods are
computationally intensive, demanding complex model architectures and/or
multi-stage training processes. This paper proposes to model face PAD as a
compound DG task from a causal perspective, linking it to model optimization.
We excavate the causal factors hidden in the high-level representation via
counterfactual intervention. Moreover, we introduce a class-guided MixStyle to
enrich feature-level data distribution within classes instead of focusing on
domain information. Both class-guided MixStyle and counterfactual intervention
components introduce no extra trainable parameters and negligible computational
resources. Extensive cross-dataset and analytic experiments demonstrate the
effectiveness and efficiency of our method compared to state-of-the-art PAD methods.
The implementation and the trained weights are publicly available.
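
For illustration, the class-guided MixStyle component can be read as standard MixStyle (mixing per-channel feature statistics between instances) with the mixing partner restricted to samples of the same class. The sketch below is a minimal, hypothetical PyTorch rendering of that idea, not the authors' released code; the function name, the Beta(0.1, 0.1) mixing coefficient, and the partner-sampling loop are assumptions carried over from the original MixStyle formulation.

```python
# Hypothetical sketch of class-guided MixStyle (assumption: it follows the
# original MixStyle recipe, but mixes statistics only within a class).
import torch

def class_guided_mixstyle(feat: torch.Tensor,
                          labels: torch.Tensor,
                          alpha: float = 0.1,
                          eps: float = 1e-6) -> torch.Tensor:
    """feat: (B, C, H, W) feature maps; labels: (B,) bonafide/attack ids."""
    b = feat.size(0)
    # Per-instance, per-channel statistics ("style") of the feature maps.
    mu = feat.mean(dim=(2, 3), keepdim=True)                  # (B, C, 1, 1)
    sig = (feat.var(dim=(2, 3), keepdim=True) + eps).sqrt()   # (B, C, 1, 1)
    content = (feat - mu) / sig                               # normalized content

    # Class-guided partner selection: each sample mixes with a random
    # sample sharing its label, enriching feature statistics within classes
    # rather than exploiting domain labels.
    partners = torch.empty(b, dtype=torch.long)
    for i in range(b):
        same = (labels == labels[i]).nonzero(as_tuple=True)[0]
        partners[i] = same[torch.randint(len(same), (1,)).item()]

    # Convex mixing of the statistics; no trainable parameters involved.
    lam = torch.distributions.Beta(alpha, alpha).sample((b, 1, 1, 1)).to(feat.device)
    mu_mix = lam * mu + (1.0 - lam) * mu[partners]
    sig_mix = lam * sig + (1.0 - lam) * sig[partners]
    return content * sig_mix + mu_mix
```

Consistent with the abstract's claim, such an operation adds no trainable parameters and negligible compute; in training it would be applied to intermediate feature maps, while the counterfactual-intervention component acts on the high-level representation.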
Related papers
- What Has Been Overlooked in Contrastive Source-Free Domain Adaptation: Leveraging Source-Informed Latent Augmentation within Neighborhood Context [28.634315143647385]
Source-free domain adaptation (SFDA) involves adapting a model originally trained using a labeled dataset to perform effectively on an unlabeled dataset.
This adaptation is especially crucial when significant disparities in data distributions exist between the two domains.
We introduce a straightforward yet highly effective latent augmentation method tailored for contrastive SFDA.
arXiv Detail & Related papers (2024-12-18T20:09:46Z)
- Is Large-Scale Pretraining the Secret to Good Domain Generalization? [69.80606575323691]
Multi-Source Domain Generalization (DG) is the task of training on multiple source domains and achieving high classification performance on unseen target domains.
Recent methods combine robust features from web-scale pretrained backbones with new features learned from source data, and this has dramatically improved benchmark results.
We show that all evaluated DG methods struggle on DomainBed-OOP, while recent methods excel on DomainBed-IP.
arXiv Detail & Related papers (2024-12-03T21:43:11Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed test sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying classes that appear in the target domain but not in the source domain during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- On Certifying and Improving Generalization to Unseen Domains [87.00662852876177]
Domain Generalization aims to learn models whose performance remains high on unseen domains encountered at test-time.
It is challenging to evaluate DG algorithms comprehensively using a few benchmark datasets.
We propose a universal certification framework that can efficiently certify the worst-case performance of any DG method.
arXiv Detail & Related papers (2022-06-24T16:29:43Z)
- Adversarial Unsupervised Domain Adaptation Guided with Deep Clustering for Face Presentation Attack Detection [0.8701566919381223]
Face Presentation Attack Detection (PAD) has drawn increasing attention as a means of securing face recognition systems.
We propose an end-to-end learning framework based on Domain Adaptation (DA) to improve PAD generalization capability.
arXiv Detail & Related papers (2021-02-13T05:34:40Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)