Fisher Deep Domain Adaptation
- URL: http://arxiv.org/abs/2003.05636v1
- Date: Thu, 12 Mar 2020 06:17:48 GMT
- Title: Fisher Deep Domain Adaptation
- Authors: Yinghua Zhang, Yu Zhang, Ying Wei, Kun Bai, Yangqiu Song, Qiang Yang
- Abstract summary: Deep domain adaptation models learn a neural network in an unlabeled target domain by leveraging the knowledge from a labeled source domain.
A Fisher loss is proposed to learn discriminative representations which are within-class compact and between-class separable.
- Score: 41.50519723389471
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep domain adaptation models learn a neural network in an unlabeled target
domain by leveraging the knowledge from a labeled source domain. This can be
achieved by learning a domain-invariant feature space. Though the learned
representations are separable in the source domain, they usually have a large
variance and samples with different class labels tend to overlap in the target
domain, which yields suboptimal adaptation performance. To fill the gap, a
Fisher loss is proposed to learn discriminative representations which are
within-class compact and between-class separable. Experimental results on two
benchmark datasets show that the Fisher loss is a general and effective loss
for deep domain adaptation. It brings noticeable improvements when used
together with widely adopted transfer criteria, including MMD, CORAL, and the
domain adversarial loss. For example, an absolute improvement of 6.67% in mean
accuracy is attained when the Fisher loss is combined with the domain
adversarial loss on the Office-Home dataset.
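As a rough illustration, below is a minimal sketch of a Fisher-style discriminative loss in PyTorch. The abstract does not give the exact formulation, so the scatter-ratio form, the function name, and the `eps` stabilizer are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of a Fisher-style discriminative loss (assumed form:
# within-class scatter divided by between-class scatter). Illustrative only;
# the paper's exact formulation may differ.
import torch


def fisher_loss(features: torch.Tensor, labels: torch.Tensor,
                eps: float = 1e-6) -> torch.Tensor:
    """features: (N, D) labeled source-domain features; labels: (N,) class ids."""
    global_mean = features.mean(dim=0)
    within = features.new_zeros(())
    between = features.new_zeros(())
    for c in labels.unique():
        class_feats = features[labels == c]        # (N_c, D)
        class_mean = class_feats.mean(dim=0)
        # Within-class scatter: samples should sit close to their class mean.
        within = within + ((class_feats - class_mean) ** 2).sum()
        # Between-class scatter: class means should be far from the global
        # mean, weighted by class size.
        between = between + class_feats.shape[0] * ((class_mean - global_mean) ** 2).sum()
    # Minimizing this ratio encourages within-class compactness and
    # between-class separability.
    return within / (between + eps)
```

In training, such a term would simply be added to the usual objective alongside a transfer criterion, e.g. `loss = cls_loss + lam_t * mmd_loss + lam_f * fisher_loss(src_feats, src_labels)`, where `lam_t` and `lam_f` are hypothetical trade-off weights.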
Related papers
- AdaptDiff: Cross-Modality Domain Adaptation via Weak Conditional Semantic Diffusion for Retinal Vessel Segmentation [10.958821619282748]
We present an unsupervised domain adaptation (UDA) method named AdaptDiff.
It enables a retinal vessel segmentation network trained on fundus photography (FP) to produce satisfactory results on unseen modalities.
Our results demonstrate a significant improvement in segmentation performance across all unseen datasets.
arXiv Detail & Related papers (2024-10-06T23:04:29Z)
- Domain Adaptation and Active Learning for Fine-Grained Recognition in the Field of Biodiversity [7.24935792316121]
Unsupervised domain adaptation can be used for fine-grained recognition in a biodiversity context.
Using domain adaptation and Transferable Normalization, the accuracy of the classifier could be increased by up to 12.35%.
Surprisingly, we found that more sophisticated strategies provide better results than the random selection baseline for only one of the two datasets.
arXiv Detail & Related papers (2021-10-22T13:34:13Z)
- Fishr: Invariant Gradient Variances for Out-of-distribution Generalization [98.40583494166314]
Fishr is a learning scheme to enforce domain invariance in the space of the gradients of the loss function.
Fishr exhibits close relations with the Fisher Information and the Hessian of the loss.
In particular, Fishr improves the state of the art on the DomainBed benchmark and performs significantly better than Empirical Risk Minimization (a toy sketch of this gradient-variance idea appears after this list).
arXiv Detail & Related papers (2021-09-07T08:36:09Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Re-energizing Domain Discriminator with Sample Relabeling for Adversarial Domain Adaptation [88.86865069583149]
Unsupervised domain adaptation (UDA) methods exploit domain adversarial training to align features and reduce the domain gap.
In this work, we propose an efficient optimization strategy named Re-enforceable Adversarial Domain Adaptation (RADA).
RADA aims to re-energize the domain discriminator during the training by using dynamic domain labels.
arXiv Detail & Related papers (2021-03-22T08:32:55Z)
- Unsupervised Domain Adaptation with Multiple Domain Discriminators and Adaptive Self-Training [22.366638308792734]
Unsupervised Domain Adaptation (UDA) aims at improving the generalization capability of a model trained on a source domain to perform well on a target domain for which no labeled data is available.
We propose an approach to adapt a deep neural network trained on synthetic data to real scenes, addressing the domain shift between the two data distributions.
arXiv Detail & Related papers (2020-04-27T11:48:03Z)
- Deep Residual Correction Network for Partial Domain Adaptation [79.27753273651747]
Deep domain adaptation methods have achieved appealing performance by learning transferable representations from a well-labeled source domain to a different but related unlabeled target domain.
This paper proposes an efficiently implemented Deep Residual Correction Network (DRCN).
Comprehensive experiments on partial, traditional and fine-grained cross-domain visual recognition demonstrate that DRCN is superior to the competitive deep domain adaptation approaches.
arXiv Detail & Related papers (2020-04-10T06:07:16Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain that follows a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
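As referenced in the Fishr entry above, below is a minimal sketch of its gradient-variance matching idea in PyTorch, assuming a linear classifier head so that per-sample gradients have a closed form. The two-domain setup, the function names, and the squared-distance penalty are assumptions for illustration, not the authors' implementation.

```python
# Toy sketch of gradient-variance matching in the spirit of Fishr.
# Illustrative only; Fishr's actual formulation and code may differ.
import torch
import torch.nn.functional as F


def per_sample_head_grads(feats: torch.Tensor, logits: torch.Tensor,
                          labels: torch.Tensor) -> torch.Tensor:
    """Per-sample gradient of cross-entropy w.r.t. a linear head's weights.

    For logits = feats @ W.T, sample i's gradient is the outer product
    (softmax_i - onehot_i) x feats_i, flattened to one row per sample.
    """
    delta = logits.softmax(dim=1) - F.one_hot(labels, logits.shape[1]).float()
    grads = delta.unsqueeze(2) * feats.unsqueeze(1)   # (N, C, D)
    return grads.flatten(start_dim=1)                 # (N, C * D)


def gradient_variance_penalty(grads_a: torch.Tensor,
                              grads_b: torch.Tensor) -> torch.Tensor:
    # Match the per-coordinate variance of gradients across two domains:
    # invariance in gradient space rather than in feature space.
    return ((grads_a.var(dim=0) - grads_b.var(dim=0)) ** 2).sum()
```

Such a penalty would be added to the average per-domain risk during training; matching gradient variances relates to matching diagonal Fisher Information across domains, which is the connection the Fishr summary alludes to.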
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.