Generalizable Representation Learning for Mixture Domain Face
Anti-Spoofing
- URL: http://arxiv.org/abs/2105.02453v1
- Date: Thu, 6 May 2021 06:04:59 GMT
- Title: Generalizable Representation Learning for Mixture Domain Face
Anti-Spoofing
- Authors: Zhihong Chen, Taiping Yao, Kekai Sheng, Shouhong Ding, Ying Tai, Jilin
Li, Feiyue Huang, Xinyu Jin
- Abstract summary: Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention due to their robustness in unseen scenarios.
We propose domain dynamic adjustment meta-learning (D2AM), which does not require domain labels.
- Score: 53.82826073959756
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention due to their robustness in unseen scenarios. Existing DG methods assume that the domain label is known. However, in real-world applications, the collected dataset always contains mixture domains, where the domain label is unknown. In this case, most existing methods may not work. Further, even if we could obtain the domain label as existing methods do, we think this is just a sub-optimal partition. To overcome the limitation, we propose domain dynamic adjustment meta-learning (D2AM) without using domain labels, which iteratively divides mixture domains via discriminative domain representation and trains a generalizable face anti-spoofing model with meta-learning. Specifically, we design a domain feature based on Instance Normalization (IN) and propose a domain representation learning module (DRLM) to extract discriminative domain features for clustering. Moreover, to reduce the side effect of outliers on clustering performance, we additionally utilize maximum mean discrepancy (MMD) to align the distribution of sample features to a prior distribution, which improves the reliability of clustering. Extensive experiments show that the proposed method outperforms conventional DG-based face anti-spoofing methods, including those utilizing domain labels. Furthermore, we enhance the interpretability through visualization.
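Two ingredients named in the abstract, an Instance Normalization based domain feature that is clustered into pseudo domains and an MMD term that pulls sample features toward a prior distribution, can be sketched in isolation. The PyTorch snippet below is a minimal illustration under assumptions of ours, not the authors' implementation: the IN statistics (per-channel mean and standard deviation) are used directly as the domain descriptor, the prior is taken to be a standard normal, the MMD uses a single RBF kernel, and k-means stands in for the clustering step.

```python
import torch
from sklearn.cluster import KMeans  # used here only as a clustering stand-in


def in_domain_feature(feat_map: torch.Tensor) -> torch.Tensor:
    """Domain descriptor from Instance Normalization statistics.

    feat_map: (N, C, H, W) convolutional features. IN removes the per-channel
    mean/std of each sample, so those statistics carry style/domain information;
    here they are reused directly as the domain feature (an assumption in the
    spirit of the paper, not the exact DRLM design).
    """
    mu = feat_map.mean(dim=(2, 3))                          # (N, C)
    var = feat_map.pow(2).mean(dim=(2, 3)) - mu.pow(2)      # E[x^2] - E[x]^2
    sigma = var.clamp_min(1e-5).sqrt()                      # (N, C)
    return torch.cat([mu, sigma], dim=1)                    # (N, 2C)


def rbf_mmd(x: torch.Tensor, y: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Biased squared MMD between two sample sets with a single RBF kernel."""
    def k(a, b):
        return torch.exp(-torch.cdist(a, b).pow(2) / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()


def assign_pseudo_domains(domain_feats: torch.Tensor, n_domains: int = 3):
    """Cluster domain descriptors into pseudo domain labels for meta-learning."""
    km = KMeans(n_clusters=n_domains, n_init=10, random_state=0)
    return km.fit_predict(domain_feats.detach().cpu().numpy())


# Usage sketch: align the domain features with an assumed standard-normal prior
# via MMD (to blunt the effect of outliers), then cluster them into pseudo domains.
feats = torch.randn(32, 64, 28, 28)        # stand-in for a conv feature map
dom_feat = in_domain_feature(feats)
prior = torch.randn_like(dom_feat)         # samples from the assumed prior
mmd_loss = rbf_mmd(dom_feat, prior)        # would be added to the training loss
pseudo_domains = assign_pseudo_domains(dom_feat)
```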
Related papers
- DISPEL: Domain Generalization via Domain-Specific Liberating [19.21625050855744]
Domain generalization aims to learn a model that can perform well on unseen test domains by only training on limited source domains.
We propose DomaIn-SPEcific Liberating (DISPEL), a post-processing fine-grained masking approach that can filter out undefined and indistinguishable domain-specific features in the embedding space.
arXiv Detail & Related papers (2023-07-14T06:21:03Z)
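The DISPEL entry above describes a post-processing mask that filters domain-specific dimensions out of the embedding space. The sketch below is only a loose reading of that sentence: a learnable soft gate over the dimensions of a frozen embedding, trained with the ordinary task loss. The gating parameterization, the frozen backbone, and the training details are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn


class EmbeddingMask(nn.Module):
    """Post-hoc soft gate over embedding dimensions (a guess at the masking idea)."""

    def __init__(self, dim: int, n_classes: int):
        super().__init__()
        self.gate_logits = nn.Parameter(torch.zeros(dim))  # sigmoid(0) = 0.5 to start
        self.head = nn.Linear(dim, n_classes)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        mask = torch.sigmoid(self.gate_logits)   # soft mask in (0, 1), per dimension
        return self.head(z * mask)


# Usage sketch: the backbone stays frozen; only the mask and a small head train.
masker = EmbeddingMask(dim=512, n_classes=2)
opt = torch.optim.Adam(masker.parameters(), lr=1e-3)
z = torch.randn(16, 512)                       # embeddings from a frozen backbone
y = torch.randint(0, 2, (16,))
loss = nn.functional.cross_entropy(masker(z), y)
loss.backward()
opt.step()
```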
- Cyclically Disentangled Feature Translation for Face Anti-spoofing [61.70377630461084]
We propose a novel domain adaptation method called cyclically disentangled feature translation network (CDFTN).
CDFTN generates pseudo-labeled samples that possess: 1) source domain-invariant liveness features and 2) target domain-specific content features, which are disentangled through domain adversarial training.
A robust classifier is trained based on the synthetic pseudo-labeled images under the supervision of source domain labels.
arXiv Detail & Related papers (2022-12-07T14:12:34Z)
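The CDFTN summary above names two mechanisms that are concrete enough to sketch: splitting a representation into a liveness part and a content part, and making the liveness part domain-invariant through domain adversarial training (the disentangled parts are then recombined to synthesize pseudo-labeled images). The snippet uses a gradient reversal layer, a common way to implement such adversarial training; the layer shapes, the two-way source/target discriminator, and the omission of the decoder are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal: identity in the forward pass, negated gradient backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_out):
        return -ctx.lambd * grad_out, None


class Disentangler(nn.Module):
    """Toy encoder that splits an embedding into liveness and content parts."""

    def __init__(self, in_dim: int = 512, feat_dim: int = 128):
        super().__init__()
        self.liveness = nn.Linear(in_dim, feat_dim)   # should become domain-invariant
        self.content = nn.Linear(in_dim, feat_dim)    # keeps domain-specific content
        self.domain_head = nn.Linear(feat_dim, 2)     # source-vs-target discriminator

    def forward(self, emb: torch.Tensor, lambd: float = 1.0):
        live, cont = self.liveness(emb), self.content(emb)
        # The reversed gradient pushes the encoder to make `live` indistinguishable
        # across domains while the discriminator tries to tell them apart.
        dom_logits = self.domain_head(GradReverse.apply(live, lambd))
        return live, cont, dom_logits


# Usage sketch: source liveness + target content would feed a decoder that
# synthesizes pseudo-labeled images (decoder and reconstruction losses omitted).
model = Disentangler()
src_emb, tgt_emb = torch.randn(8, 512), torch.randn(8, 512)
_, _, d_src = model(src_emb)
_, _, d_tgt = model(tgt_emb)
domain_loss = nn.functional.cross_entropy(
    torch.cat([d_src, d_tgt]),
    torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)]),
)
```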
- Cross-Domain Ensemble Distillation for Domain Generalization [17.575016642108253]
We propose a simple yet effective method for domain generalization, named cross-domain ensemble distillation (XDED).
Our method generates an ensemble of the output logits from training data with the same label but from different domains and then penalizes each output for the mismatch with the ensemble.
We show that models learned by our method are robust against adversarial attacks and image corruptions.
arXiv Detail & Related papers (2022-11-25T12:32:36Z)
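The XDED summary above is concrete enough to write the loss down: for samples that share a label but come from different domains, the mean of their softened predictions acts as an ensemble teacher, and each prediction is penalized for diverging from it. Grouping simply by label within a mixed-domain batch, the temperature, and the use of KL divergence are assumed details that the summary does not specify.

```python
import torch
import torch.nn.functional as F


def xded_style_loss(logits: torch.Tensor, labels: torch.Tensor, tau: float = 4.0) -> torch.Tensor:
    """Cross-domain ensemble distillation, roughly as summarized above.

    logits: (N, C) outputs for a batch drawn from several domains.
    labels: (N,) class labels. For each class in the batch, the mean softened
    distribution over its samples acts as the ensemble teacher.
    """
    log_p = F.log_softmax(logits / tau, dim=1)
    p = log_p.exp()
    loss, groups = logits.new_zeros(()), 0
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        if idx.numel() < 2:
            continue                                    # need several views of the class
        teacher = p[idx].mean(dim=0, keepdim=True).detach()          # (1, C) ensemble
        loss = loss + F.kl_div(log_p[idx], teacher.expand(idx.numel(), -1),
                               reduction="batchmean")   # mismatch with the ensemble
        groups += 1
    return loss / max(groups, 1)


# Usage sketch: combined with the usual cross-entropy on the same mixed-domain batch.
logits = torch.randn(16, 5, requires_grad=True)
labels = torch.randint(0, 5, (16,))
total = F.cross_entropy(logits, labels) + 0.5 * xded_style_loss(logits, labels)
total.backward()
```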
- Localized Adversarial Domain Generalization [83.4195658745378]
Adversarial domain generalization is a popular approach to domain generalization.
We propose localized adversarial domain generalization with space compactness maintenance (LADG).
We conduct comprehensive experiments on the Wilds DG benchmark to validate our approach.
arXiv Detail & Related papers (2022-05-09T08:30:31Z)
- Compound Domain Generalization via Meta-Knowledge Encoding [55.22920476224671]
We introduce Style-induced Domain-specific Normalization (SDNorm) to re-normalize the multi-modal underlying distributions.
We harness the prototype representations, the centroids of classes, to perform relational modeling in the embedding space.
Experiments on four standard Domain Generalization benchmarks reveal that COMEN exceeds the state-of-the-art performance without the need of domain supervision.
arXiv Detail & Related papers (2022-03-24T11:54:59Z)
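Of the two COMEN components mentioned above, the prototype part is straightforward to make concrete: class centroids of the embedding space serve as prototypes and distances to them act as logits. The snippet below is a generic prototype classifier of that kind, not the paper's SDNorm or its exact relational modeling.

```python
import torch


def class_prototypes(emb: torch.Tensor, labels: torch.Tensor, n_classes: int) -> torch.Tensor:
    """Class centroids (prototypes) of the embedding space."""
    protos = torch.zeros(n_classes, emb.size(1))
    for c in range(n_classes):
        mask = labels == c
        if mask.any():
            protos[c] = emb[mask].mean(dim=0)
    return protos


def prototype_logits(emb: torch.Tensor, protos: torch.Tensor) -> torch.Tensor:
    """Score each sample by negative squared distance to every prototype."""
    return -torch.cdist(emb, protos).pow(2)


# Usage sketch
emb = torch.randn(32, 64)
labels = torch.randint(0, 4, (32,))
protos = class_prototypes(emb, labels, n_classes=4)
preds = prototype_logits(emb, protos).argmax(dim=1)
```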
- META: Mimicking Embedding via oThers' Aggregation for Generalizable Person Re-identification [68.39849081353704]
Domain generalizable (DG) person re-identification (ReID) aims to test across unseen domains without access to the target domain data at training time.
This paper presents a new approach called Mimicking Embedding via oThers' Aggregation (META) for DG ReID.
arXiv Detail & Related papers (2021-12-16T08:06:50Z)
- Select, Label, and Mix: Learning Discriminative Invariant Feature Representations for Partial Domain Adaptation [55.73722120043086]
We develop a "Select, Label, and Mix" (SLM) framework to learn discriminative invariant feature representations for partial domain adaptation.
First, we present a simple yet efficient "select" module that automatically filters out outlier source samples to avoid negative transfer.
Second, the "label" module iteratively trains the classifier using both the labeled source domain data and the generated pseudo-labels for the target domain to enhance the discriminability of the latent space.
arXiv Detail & Related papers (2020-12-06T19:29:32Z)
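The SLM summary above separates a "select" step (filter out outlier source samples) from a "label" step (keep confident target predictions as pseudo-labels for the next training round). The sketch below uses prediction entropy and a confidence threshold as stand-in criteria; both are assumptions, since the summary does not state how selection and pseudo-labeling are actually performed.

```python
import torch
import torch.nn.functional as F


def select_source(logits: torch.Tensor, max_entropy: float = 1.0) -> torch.Tensor:
    """'Select': keep source samples whose prediction entropy is low
    (an assumed proxy for 'not an outlier'); returns a boolean keep-mask."""
    p = F.softmax(logits, dim=1)
    entropy = -(p * p.clamp_min(1e-8).log()).sum(dim=1)
    return entropy < max_entropy


def pseudo_label_target(logits: torch.Tensor, threshold: float = 0.9):
    """'Label': assign pseudo-labels to target samples predicted confidently."""
    p = F.softmax(logits, dim=1)
    conf, labels = p.max(dim=1)
    keep = conf > threshold
    return labels[keep], keep


# Usage sketch: the kept source samples and target pseudo-labels would feed the
# next round of classifier training, repeated iteratively as in the summary.
src_keep = select_source(torch.randn(16, 6))
tgt_labels, tgt_keep = pseudo_label_target(torch.randn(16, 6))
```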
- Unsupervised Cross-domain Image Classification by Distance Metric Guided Feature Alignment [11.74643883335152]
Unsupervised domain adaptation is a promising avenue which transfers knowledge from a source domain to a target domain.
We propose distance metric guided feature alignment (MetFA) to extract discriminative as well as domain-invariant features on both source and target domains.
Our model integrates class distribution alignment to transfer semantic knowledge from a source domain to a target domain.
arXiv Detail & Related papers (2020-08-19T13:36:57Z)
- Discrepancy Minimization in Domain Generalization with Generative Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches have been proposed that learn domain-invariant representations across the source domains, but these fail to guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method that provides a theoretical guarantee: the target error is upper bounded by the error of the labeling process for the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)