Domain Balancing: Face Recognition on Long-Tailed Domains
- URL: http://arxiv.org/abs/2003.13791v1
- Date: Mon, 30 Mar 2020 20:16:31 GMT
- Title: Domain Balancing: Face Recognition on Long-Tailed Domains
- Authors: Dong Cao, Xiangyu Zhu, Xingyu Huang, Jianzhu Guo, Zhen Lei
- Abstract summary: We propose a novel Domain Balancing (DB) mechanism to handle the long-tailed domain distribution problem.
In this paper, we first propose a Domain Frequency Indicator (DFI) to judge whether a sample is from head domains or tail domains.
Secondly, we formulate a lightweight Residual Balancing Mapping (RBM) block to balance the domain distribution by adjusting the network according to the DFI.
Finally, we propose a Domain Balancing Margin (DBM) in the loss function to further optimize the feature space of the tail domains to improve generalization.
- Score: 49.4688709764188
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The long-tailed problem has been an important topic in face
recognition. However, existing methods concentrate only on the long-tailed
distribution of classes. In contrast, we address the long-tailed domain
distribution problem: a small number of domains appear frequently while other
domains appear far less often. The key challenge is that domain labels are too
complicated (related to race, age, pose, illumination, etc.) and are
inaccessible in real applications. In this paper, we propose a novel Domain
Balancing (DB) mechanism to handle this problem. Specifically, we first
propose a Domain Frequency Indicator (DFI) to judge whether a sample is from a
head domain or a tail domain. Secondly, we formulate a lightweight Residual
Balancing Mapping (RBM) block to balance the domain distribution by adjusting
the network according to the DFI. Finally, we propose a Domain Balancing
Margin (DBM) in the loss function to further optimize the feature space of the
tail domains and improve generalization. Extensive analysis and experiments on
several face recognition benchmarks demonstrate that the proposed method
effectively enhances generalization and achieves superior performance.
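The Domain Balancing Margin can be illustrated with a minimal sketch. Assuming DBM follows the usual cosine-margin softmax form, with each sample's margin scaled by its Domain Frequency Indicator so that tail-domain samples (higher DFI, by assumption) receive a larger margin, a NumPy version might look like the following. The function name, the margin form `m * dfi`, and the hyperparameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def dbm_loss(cos_theta, labels, dfi, s=64.0, m=0.5):
    """Sketch of a Domain Balancing Margin (DBM) style loss (assumed form).

    cos_theta: (N, C) cosine similarities between embeddings and class weights
    labels:    (N,) ground-truth class indices
    dfi:       (N,) Domain Frequency Indicator in [0, 1]; higher values are
               assumed to mark tail-domain samples, which get a larger margin
    s, m:      scale and base margin, as in cosine-margin softmax losses
    """
    n = cos_theta.shape[0]
    logits = s * cos_theta.copy()
    # Per-sample margin m_i = m * dfi_i (assumed): tail samples are pushed
    # harder by subtracting a larger margin from their target cosine.
    target = cos_theta[np.arange(n), labels] - m * dfi
    logits[np.arange(n), labels] = s * target
    # Standard cross-entropy over the margin-adjusted logits.
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), labels].mean()
```

With this form, setting `dfi` to zero recovers a plain softmax cross-entropy over scaled cosines, while larger DFI values enlarge the margin and thus the loss for tail-domain samples.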
Related papers
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments show the great performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z)
- DomainDrop: Suppressing Domain-Sensitive Channels for Domain Generalization [25.940491294232956]
DomainDrop is a framework to continuously enhance the channel robustness to domain shifts.
Our framework achieves state-of-the-art performance compared to other competing methods.
arXiv Detail & Related papers (2023-08-20T14:48:52Z)
- ReMask: A Robust Information-Masking Approach for Domain Counterfactual Generation [16.275230631985824]
Domain counterfactual generation aims to transform a text from the source domain to a given target domain.
We employ a three-step domain obfuscation approach: frequency-based masking and attention norm-based masking to remove domain-specific cues, followed by unmasking to regain the domain-generic context.
Our model outperforms the state of the art with a 1.4% average accuracy improvement in the adversarial domain adaptation setting.
arXiv Detail & Related papers (2023-05-04T14:19:02Z)
- Delving into the Continuous Domain Adaptation [12.906272389564593]
Existing domain adaptation methods assume that domain discrepancies are caused by a few discrete attributes and variations.
We argue that this is not realistic as it is implausible to define the real-world datasets using a few discrete attributes.
We propose to investigate a new problem namely the Continuous Domain Adaptation.
arXiv Detail & Related papers (2022-08-28T02:32:25Z)
- FRIDA -- Generative Feature Replay for Incremental Domain Adaptation [34.00059350161178]
We propose a novel framework called Feature based Incremental Domain Adaptation (FRIDA).
For domain alignment, we propose a simple extension of the popular domain adversarial neural network (DANN) called DANN-IB.
Experimental results on the Office-Home, Office-CalTech, and DomainNet datasets confirm that FRIDA maintains a better stability-plasticity trade-off than existing methods.
arXiv Detail & Related papers (2021-12-28T22:24:32Z)
- Self-Adversarial Disentangling for Specific Domain Adaptation [52.1935168534351]
Domain adaptation aims to bridge the domain shifts between the source and target domains.
Recent methods typically do not consider explicit prior knowledge on a specific dimension.
arXiv Detail & Related papers (2021-08-08T02:36:45Z)
- Heuristic Domain Adaptation [105.59792285047536]
Heuristic Domain Adaptation Network (HDAN) explicitly learns the domain-invariant and domain-specific representations.
Heuristic Domain Adaptation Network (HDAN) has exceeded state-of-the-art on unsupervised DA, multi-source DA and semi-supervised DA.
arXiv Detail & Related papers (2020-11-30T04:21:35Z)
- Discrepancy Minimization in Domain Generalization with Generative Nearest Neighbors [13.047289562445242]
Domain generalization (DG) deals with the problem of domain shift, where a machine learning model trained on multiple source domains fails to generalize well on a target domain with different statistics.
Multiple approaches tackle domain generalization by learning domain-invariant representations across the source domains, but these fail to guarantee generalization on the shifted target domain.
We propose a Generative Nearest Neighbor based Discrepancy Minimization (GNNDM) method which provides a theoretical guarantee that is upper bounded by the error in the labeling process of the target.
arXiv Detail & Related papers (2020-07-28T14:54:25Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
- Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.