Domain Adapting Ability of Self-Supervised Learning for Face Recognition
- URL: http://arxiv.org/abs/2102.13319v1
- Date: Fri, 26 Feb 2021 06:23:14 GMT
- Title: Domain Adapting Ability of Self-Supervised Learning for Face Recognition
- Authors: Chun-Hsien Lin and Bing-Fei Wu
- Abstract summary: Deep convolutional networks have achieved great performance in face recognition tasks.
The challenge of domain discrepancy still exists in real-world applications.
In this paper, self-supervised learning is adopted to learn a better embedding space.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although deep convolutional networks have achieved great performance in face recognition tasks, the challenge of domain discrepancy still exists in real-world applications. A lack of domain coverage in the training data (source domain) causes the learned models to degenerate in the testing scenario (target domain). In face recognition tasks, the classes in the two domains are usually different, so classical domain adaptation approaches, which assume that the domains share classes, may not be reasonable solutions to this problem. In this paper, self-supervised learning is adopted to learn a better embedding space in which the subjects in the target domain are more distinguishable. The learning goal is to maximize the similarity between the embedding of each image and that of its mirror image, in both domains. The experiments show competitive results compared with prior works. To understand why this approach can achieve such performance, we further discuss how it affects the learning of the embeddings.
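The abstract describes the self-supervised objective only at a high level, so the snippet below is a minimal, hypothetical PyTorch sketch of that learning goal: for a batch of faces drawn from both domains, it pulls the embedding of each image toward the embedding of its horizontal mirror. The function and argument names (`backbone`, `mirror_similarity_loss`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def mirror_similarity_loss(backbone: torch.nn.Module,
                           images: torch.Tensor) -> torch.Tensor:
    """Illustrative mirror-similarity loss (not the paper's exact formulation).

    images: (N, C, H, W) batch mixing source- and target-domain faces.
    """
    mirrored = torch.flip(images, dims=[3])            # horizontal flip (mirror)
    emb = F.normalize(backbone(images), dim=1)         # unit-length embeddings
    emb_mirror = F.normalize(backbone(mirrored), dim=1)
    # Maximizing cosine similarity is equivalent to minimizing its negative mean.
    return -(emb * emb_mirror).sum(dim=1).mean()
```

In practice such a term would presumably be combined with a standard supervised face-recognition loss on the labelled source domain; the abstract does not specify the weighting or the training schedule.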
Related papers
- Cross-Domain Policy Adaptation by Capturing Representation Mismatch [53.087413751430255]
In reinforcement learning (RL), it is vital to learn effective policies that can be transferred to domains with dynamics discrepancies.
In this paper, we consider dynamics adaptation settings where there exists dynamics mismatch between the source domain and the target domain.
We perform representation learning only in the target domain and measure the representation deviations on the transitions from the source domain.
arXiv Detail & Related papers (2024-05-24T09:06:12Z) - Deep face recognition with clustering based domain adaptation [57.29464116557734]
We propose a new clustering-based domain adaptation method designed for the face recognition task in which the source and target domains do not share any classes.
Our method effectively learns discriminative target features by aligning the feature domain globally while, at the same time, distinguishing the target clusters locally.
arXiv Detail & Related papers (2022-05-27T12:29:11Z) - Domain-Invariant Proposals based on a Balanced Domain Classifier for Object Detection [8.583307102907295]
Object recognition from images means automatically finding object(s) of interest and returning their category and location information.
Benefiting from research on deep learning, such as convolutional neural networks (CNNs) and generative adversarial networks, performance in this field has improved significantly.
However, mismatched distributions, i.e., domain shifts, lead to a significant performance drop.
arXiv Detail & Related papers (2022-02-12T00:21:27Z) - Domain Adaptive Semantic Segmentation without Source Data [50.18389578589789]
We investigate domain adaptive semantic segmentation without source data, which assumes that the model is pre-trained on the source domain.
We propose an effective framework for this challenging problem with two components: positive learning and negative learning.
Our framework can be easily implemented and incorporated with other methods to further enhance the performance.
arXiv Detail & Related papers (2021-10-13T04:12:27Z) - Gradient Regularized Contrastive Learning for Continual Domain Adaptation [86.02012896014095]
We study the problem of continual domain adaptation, where the model is presented with a labelled source domain and a sequence of unlabelled target domains.
We propose Gradient Regularized Contrastive Learning (GRCL) to overcome these obstacles.
Experiments on Digits, DomainNet and Office-Caltech benchmarks demonstrate the strong performance of our approach.
arXiv Detail & Related papers (2021-03-23T04:10:42Z) - Mitigating Domain Mismatch in Face Recognition Using Style Matching [0.0]
We formulate domain mismatch in face recognition as a style mismatch problem for which we propose two methods.
First, we design a domain discriminator with human-level judgment to mine target-like images in the training data to mitigate the domain gap.
Second, we extract style representations in low-level feature maps of the backbone model, and match the style distributions of the two domains to find a common style representation (a rough illustrative sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-02-26T06:43:50Z) - Contradistinguisher: A Vapnik's Imperative to Unsupervised Domain Adaptation [7.538482310185133]
We propose a model, referred to as Contradistinguisher, that learns contrastive features and whose objective is to jointly learn to contradistinguish the unlabeled target domain in an unsupervised way.
We achieve the state-of-the-art on Office-31 and VisDA-2017 datasets in both single-source and multi-source settings.
arXiv Detail & Related papers (2020-05-25T19:54:38Z) - Extending and Analyzing Self-Supervised Learning Across Domains [50.13326427158233]
Self-supervised representation learning has achieved impressive results in recent years.
Experiments are primarily conducted on ImageNet or other similarly large internet imagery datasets.
We experiment with several popular methods on an unprecedented variety of domains.
arXiv Detail & Related papers (2020-04-24T21:18:02Z) - Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
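The style-matching entry above (Mitigating Domain Mismatch in Face Recognition Using Style Matching) only states that style representations are extracted from low-level feature maps and that their distributions are matched across domains. A common concrete choice for such a style representation, borrowed from the style-transfer literature and assumed here rather than taken from that paper, is the channel-wise mean and standard deviation of the feature maps; the sketch below matches those batch-level statistics between source and target features.

```python
import torch

def style_stats(feat: torch.Tensor):
    """feat: (N, C, H, W) low-level feature maps from the backbone."""
    mu = feat.mean(dim=(2, 3))                           # per-image channel means
    sigma = feat.var(dim=(2, 3), unbiased=False).sqrt()  # per-image channel stds
    return mu, sigma

def style_matching_loss(feat_src: torch.Tensor,
                        feat_tgt: torch.Tensor) -> torch.Tensor:
    """Hypothetical style-distribution matching between two domains."""
    mu_s, sig_s = style_stats(feat_src)
    mu_t, sig_t = style_stats(feat_tgt)
    # Match the batch-averaged channel statistics of source and target domains.
    return ((mu_s.mean(0) - mu_t.mean(0)).pow(2).mean()
            + (sig_s.mean(0) - sig_t.mean(0)).pow(2).mean())
```

This is only one plausible reading of "matching the style distributions"; the paper may use a different statistic or distance.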