Mitigating Domain Mismatch in Face Recognition Using Style Matching
- URL: http://arxiv.org/abs/2102.13327v1
- Date: Fri, 26 Feb 2021 06:43:50 GMT
- Title: Mitigating Domain Mismatch in Face Recognition Using Style Matching
- Authors: Chun-Hsien Lin and Bing-Fei Wu
- Abstract summary: We formulate domain mismatch in face recognition as a style mismatch problem for which we propose two methods.
First, we design a domain discriminator with human-level judgment to mine target-like images in the training data to mitigate the domain gap.
Second, we extract style representations in low-level feature maps of the backbone model, and match the style distributions of the two domains to find a common style representation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite outstanding performance on public benchmarks, face recognition still
suffers due to domain mismatch between training (source) and testing (target)
data. Furthermore, these domains do not share classes, which complicates
domain adaptation. Since this is also a fine-grained classification problem
which does not strictly follow the low-density separation principle,
conventional domain adaptation approaches do not resolve these problems. In
this paper, we formulate domain mismatch in face recognition as a style
mismatch problem for which we propose two methods. First, we design a domain
discriminator with human-level judgment to mine target-like images in the
training data to mitigate the domain gap. Second, we extract style
representations in low-level feature maps of the backbone model, and match the
style distributions of the two domains to find a common style representation.
Evaluations on verification and open-set and closed-set identification
protocols show that both methods yield good improvements, and that performance
is more robust if they are combined. Our approach is competitive with related
work, and its effectiveness is verified in a practical application.
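A common way to realize style representations from low-level feature maps is via channel-wise statistics (mean and standard deviation). The abstract does not state which statistic or matching criterion the authors use, so the PyTorch sketch below is only an assumption-based illustration of the general idea, not the paper's implementation; the names `style_stats`, `style_matching_loss`, and `lambda_style` are hypothetical.

```python
import torch


def style_stats(feat: torch.Tensor, eps: float = 1e-5):
    """Channel-wise mean and std of a low-level feature map of shape (N, C, H, W)."""
    mean = feat.mean(dim=(2, 3))                 # (N, C)
    std = (feat.var(dim=(2, 3)) + eps).sqrt()    # (N, C)
    return mean, std


def style_matching_loss(feat_src: torch.Tensor, feat_tgt: torch.Tensor) -> torch.Tensor:
    """Penalize the gap between batch-averaged style statistics of the two domains."""
    mu_s, sigma_s = style_stats(feat_src)
    mu_t, sigma_t = style_stats(feat_tgt)
    # Match first- and second-order style statistics across the batch dimension.
    return ((mu_s.mean(0) - mu_t.mean(0)).pow(2).mean()
            + (sigma_s.mean(0) - sigma_t.mean(0)).pow(2).mean())


# Usage sketch: feat_src / feat_tgt would be low-level feature maps taken from an
# early block of the recognition backbone for a source and a target mini-batch.
# total_loss = recognition_loss + lambda_style * style_matching_loss(feat_src, feat_tgt)
```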
Related papers
- Cross-Domain Policy Adaptation by Capturing Representation Mismatch [53.087413751430255]
In reinforcement learning (RL), it is vital to learn effective policies that can be transferred to different domains with dynamics discrepancies.
In this paper, we consider dynamics adaptation settings where there exists dynamics mismatch between the source domain and the target domain.
We perform representation learning only in the target domain and measure the representation deviations on the transitions from the source domain.
arXiv Detail & Related papers (2024-05-24T09:06:12Z)
- One-Class Knowledge Distillation for Face Presentation Attack Detection [53.30584138746973]
This paper introduces a teacher-student framework to improve the cross-domain performance of face PAD with one-class domain adaptation.
Student networks are trained to mimic the teacher network and learn similar representations for genuine face samples of the target domain.
In the test phase, the similarity score between the representations of the teacher and student networks is used to distinguish attacks from genuine ones.
arXiv Detail & Related papers (2022-05-08T06:20:59Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
Unsupervised domain adaptation (UDA) attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Cross-Domain Similarity Learning for Face Recognition in Unseen Domains [90.35908506994365]
We introduce a novel cross-domain metric learning loss, which we dub Cross-Domain Triplet (CDT) loss, to improve face recognition in unseen domains.
The CDT loss encourages learning semantically meaningful features by enforcing compact feature clusters of identities from one domain.
Our method does not require a careful hard-pair sample mining and filtering strategy during training.
arXiv Detail & Related papers (2021-03-12T19:48:01Z)
- Domain Adapting Ability of Self-Supervised Learning for Face Recognition [0.0]
Deep convolutional networks have achieved great performance in face recognition tasks.
However, the challenge of domain discrepancy still exists in real-world applications.
In this paper, self-supervised learning is adopted to learn a better embedding space.
arXiv Detail & Related papers (2021-02-26T06:23:14Z)
- Class Distribution Alignment for Adversarial Domain Adaptation [32.95056492475652]
Conditional ADversarial Image Translation (CADIT) is proposed to explicitly align the class distributions given samples between the two domains.
It integrates a discriminative structure-preserving loss and a joint adversarial generation loss.
Our approach achieves superior classification performance in the target domain compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-04-20T15:58:11Z)
- Cross-domain Detection via Graph-induced Prototype Alignment [114.8952035552862]
We propose a Graph-induced Prototype Alignment (GPA) framework to seek category-level domain alignment.
In addition, in order to alleviate the negative effect of class-imbalance on domain adaptation, we design a Class-reweighted Contrastive Loss.
Our approach outperforms existing methods by a remarkable margin.
arXiv Detail & Related papers (2020-03-28T17:46:55Z)
- Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things.
We further show that our method helps ease the domain shift by minimizing the distance between the most similar stuff and instance features of the source and target domains; a minimal sketch of this matching step follows this entry.
arXiv Detail & Related papers (2020-03-18T04:43:25Z)
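The matching step mentioned in the entry above can be pictured as a nearest-feature loss: for each target feature, find its most similar source feature and penalize that distance. The PyTorch sketch below is a loose illustration under that assumption, with hypothetical names, not the authors' implementation.

```python
import torch


def most_similar_feature_loss(target_feats: torch.Tensor,
                              source_feats: torch.Tensor) -> torch.Tensor:
    """Minimize the distance from each target feature to its most similar source feature.

    target_feats: (Nt, D) pooled stuff/instance features from the target domain
    source_feats: (Ns, D) pooled stuff/instance features from the source domain
    """
    # Pairwise L2 distances between target and source features: shape (Nt, Ns).
    dists = torch.cdist(target_feats, source_feats, p=2)
    # Keep only the distance to the most similar source feature per target feature.
    min_dists, _ = dists.min(dim=1)
    return min_dists.mean()
```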
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.