Face Presentation Attack Detection by Excavating Causal Clues and
Adapting Embedding Statistics
- URL: http://arxiv.org/abs/2308.14551v1
- Date: Mon, 28 Aug 2023 13:11:05 GMT
- Title: Face Presentation Attack Detection by Excavating Causal Clues and
Adapting Embedding Statistics
- Authors: Meiling Fang and Naser Damer
- Abstract summary: Face presentation attack detection (PAD) uses domain adaptation (DA) and domain generalization (DG) techniques to address performance degradation on unknown domains.
Most DG-based PAD solutions rely on a priori knowledge, i.e., known domain labels.
This paper proposes to model face PAD as a compound DG task from a causal perspective, linking it to model optimization.
- Score: 9.612556145185431
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent face presentation attack detection (PAD) leverages domain adaptation
(DA) and domain generalization (DG) techniques to address performance
degradation on unknown domains. However, DA-based PAD methods require access to
unlabeled target data, while most DG-based PAD solutions rely on a priori
knowledge, i.e., known domain labels. Moreover, most DA-/DG-based methods are
computationally intensive, demanding complex model architectures and/or
multi-stage training processes. This paper proposes to model face PAD as a
compound DG task from a causal perspective, linking it to model optimization.
We excavate the causal factors hidden in the high-level representation via
counterfactual intervention. Moreover, we introduce a class-guided MixStyle to
enrich feature-level data distribution within classes instead of focusing on
domain information. Both class-guided MixStyle and counterfactual intervention
components introduce no extra trainable parameters and negligible computational
resources. Extensive cross-dataset and analytic experiments demonstrate the
effectiveness and efficiency of our method compared to state-of-the-art PADs.
The implementation and the trained weights are publicly available.
Related papers
- Consistency Regularization for Generalizable Source-free Domain
Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed test sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution to identify these classes during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- On Certifying and Improving Generalization to Unseen Domains [87.00662852876177]
Domain Generalization aims to learn models whose performance remains high on unseen domains encountered at test-time.
It is challenging to evaluate DG algorithms comprehensively using a few benchmark datasets.
We propose a universal certification framework that can efficiently certify the worst-case performance of any DG method.
arXiv Detail & Related papers (2022-06-24T16:29:43Z)
- Revisiting Deep Subspace Alignment for Unsupervised Domain Adaptation [42.16718847243166]
Unsupervised domain adaptation (UDA) aims to transfer and adapt knowledge from a labeled source domain to an unlabeled target domain.
Traditionally, subspace-based methods form an important class of solutions to this problem.
This paper revisits the use of subspace alignment for UDA and proposes a novel adaptation algorithm that consistently leads to improved generalization.
arXiv Detail & Related papers (2022-01-05T20:16:38Z)
- Adversarial Unsupervised Domain Adaptation Guided with Deep Clustering for Face Presentation Attack Detection [0.8701566919381223]
Face Presentation Attack Detection (PAD) has drawn increasing attention as a means of securing face recognition systems.
We propose an end-to-end learning framework based on Domain Adaptation (DA) to improve PAD generalization capability.
arXiv Detail & Related papers (2021-02-13T05:34:40Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) addresses the realistic and challenging setting in which the source domain label space subsumes that of the target domain.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain drawn from a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.