Domain Agnostic Learning for Unbiased Authentication
- URL: http://arxiv.org/abs/2010.05250v2
- Date: Mon, 23 Nov 2020 09:13:33 GMT
- Title: Domain Agnostic Learning for Unbiased Authentication
- Authors: Jian Liang, Yuren Cao, Shuang Li, Bing Bai, Hao Li, Fei Wang, Kun Bai
- Abstract summary: We propose a domain-agnostic method that eliminates domain-difference without domain labels.
Latent domains are discovered by learning the heterogeneous predictive relationships between inputs and outputs.
We extend our method to a meta-learning framework to pursue more thorough domain-difference elimination.
- Score: 47.85174796247398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Authentication is the task of confirming the matching relationship between a
data instance and a given identity. Typical examples of authentication problems
include face recognition and person re-identification. Data-driven
authentication can be affected by undesired biases: models are often trained
in one domain (e.g., on people wearing spring outfits) but applied in other
domains (e.g., the same people wearing summer outfits).
Previous works have made efforts to eliminate domain-difference. They typically
assume domain annotations are provided, and all the domains share classes.
However, for authentication, there could be a large number of domains shared by
different identities/classes, and it is impossible to annotate these domains
exhaustively. This makes domain-difference challenging to model and
eliminate. In this paper, we propose a domain-agnostic method that eliminates
domain-difference without domain labels. We alternately perform latent domain
discovery and domain-difference elimination until our model no longer detects
domain-difference. In our approach, the latent domains are discovered by
learning the heterogeneous predictive relationships between inputs and outputs.
Then domain-difference is eliminated in both class-dependent and
class-independent spaces to improve the robustness of elimination. We further
extend our method to a meta-learning framework to pursue more thorough
domain-difference elimination. Comprehensive empirical evaluation results are
provided to demonstrate the effectiveness and superiority of our proposed
method.
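The alternating procedure described in the abstract (discover latent domains, eliminate the domain-difference, repeat until no difference is detected) can be sketched in a simplified form. All function names here are hypothetical, and the components (a least-squares predictor, a tiny 1-D k-means on its residuals as a stand-in for learning "heterogeneous predictive relationships", and per-domain mean alignment as a stand-in for elimination) are deliberately crude proxies for the paper's learned neural components:

```python
import numpy as np

def discover_latent_domains(X, y, k=2, seed=0):
    """Cluster the residuals of a linear predictor -- a crude proxy for
    discovering latent domains from heterogeneous input-output relationships."""
    rng = np.random.default_rng(seed)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)      # least-squares fit y ~ X
    resid = (y - X @ w).reshape(-1, 1)             # per-sample residuals
    centers = rng.choice(resid.ravel(), size=k, replace=False).reshape(-1, 1)
    for _ in range(20):                            # tiny 1-D k-means
        labels = np.argmin(np.abs(resid - centers.T), axis=1)
        centers = np.array([[resid[labels == j].mean()] if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return labels

def eliminate_domain_difference(X, domains):
    """Shift every discovered domain so its feature mean matches the
    global mean -- a minimal stand-in for the elimination step."""
    X = X.astype(float).copy()
    g = X.mean(axis=0)
    for d in np.unique(domains):
        m = domains == d
        X[m] += g - X[m].mean(axis=0)
    return X

def domain_agnostic_train(X, y, k=2, max_rounds=5, tol=1e-3):
    """Alternate discovery and elimination until the discovered domains'
    feature means essentially coincide (no domain-difference detected)."""
    for _ in range(max_rounds):
        domains = discover_latent_domains(X, y, k=k)
        means = np.array([X[domains == d].mean(axis=0)
                          for d in np.unique(domains)])
        if len(means) < 2 or np.ptp(means, axis=0).max() < tol:
            break
        X = eliminate_domain_difference(X, domains)
    return X
```

The stopping condition mirrors the abstract's "until our model no longer detects domain-difference": here the loop exits once the discovered domains' feature means agree to within a tolerance.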
Related papers
- Domain Generalization via Causal Adjustment for Cross-Domain Sentiment Analysis [59.73582306457387]
We focus on the problem of domain generalization for cross-domain sentiment analysis.
We propose a backdoor adjustment-based causal model to disentangle the domain-specific and domain-invariant representations.
A series of experiments show the great performance and robustness of our model.
arXiv Detail & Related papers (2024-02-22T13:26:56Z)
- A Unified Causal View of Domain Invariant Representation Learning [19.197022592928164]
Machine learning methods can be unreliable when deployed in domains that differ from the domains on which they were trained.
This paper shows how the different methods relate to each other and clarifies the real-world circumstances under which each is expected to succeed.
arXiv Detail & Related papers (2022-08-15T03:08:58Z)
- Domain Generalization via Selective Consistency Regularization for Time Series Classification [16.338176636365752]
Domain generalization methods aim to learn models robust to domain shift with data from a limited number of source domains.
We propose a novel representation learning methodology that selectively enforces prediction consistency between source domains.
arXiv Detail & Related papers (2022-06-16T01:57:35Z)
- Domain-Class Correlation Decomposition for Generalizable Person Re-Identification [34.813965300584776]
In person re-identification, the domain and class are correlated.
We show that domain adversarial learning will lose certain information about class due to this domain-class correlation.
Our model outperforms the state-of-the-art methods on the large-scale domain generalization Re-ID benchmark.
arXiv Detail & Related papers (2021-06-29T09:45:03Z)
- Learning to Share by Masking the Non-shared for Multi-domain Sentiment Classification [24.153584996936424]
We propose a network which explicitly masks domain-related words from texts, learns domain-invariant sentiment features from these domain-agnostic texts, and uses those masked words to form domain-aware sentence representations.
Empirical experiments on a widely used multi-domain sentiment classification dataset demonstrate the effectiveness of our proposed model.
arXiv Detail & Related papers (2021-04-17T08:15:29Z)
- Prototypical Cross-domain Self-supervised Learning for Few-shot Unsupervised Domain Adaptation [91.58443042554903]
We propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA).
PCS not only performs cross-domain low-level feature alignment, but it also encodes and aligns semantic structures in the shared embedding space across domains.
Compared with state-of-the-art methods, PCS improves the mean classification accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on Office, Office-Home, VisDA-2017, and DomainNet, respectively.
arXiv Detail & Related papers (2021-03-31T02:07:42Z)
- Domain Generalization in Biosignal Classification [37.70077538403524]
This study is the first to investigate domain generalization for biosignal data.
Our proposed method achieves accuracy gains of up to 16% for four completely unseen domains.
arXiv Detail & Related papers (2020-11-12T05:15:46Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
- Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things.
We further show that our method helps ease this issue by aligning the most similar stuff and instance features between the source and target domains.
arXiv Detail & Related papers (2020-03-18T04:43:25Z) - Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.