Feature Alignment and Restoration for Domain Generalization and
Adaptation
- URL: http://arxiv.org/abs/2006.12009v1
- Date: Mon, 22 Jun 2020 05:08:13 GMT
- Title: Feature Alignment and Restoration for Domain Generalization and
Adaptation
- Authors: Xin Jin, Cuiling Lan, Wenjun Zeng, Zhibo Chen
- Abstract summary: Cross domain feature alignment has been widely explored to pull together the feature distributions of different domains in order to learn domain-invariant representations.
We propose a unified framework termed Feature Alignment and Restoration (FAR) to simultaneously ensure high generalization and discrimination power of the networks.
Experiments on multiple classification benchmarks demonstrate the high performance and strong generalization of our FAR framework for both domain generalization and unsupervised domain adaptation.
- Score: 93.39253443415392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For domain generalization (DG) and unsupervised domain adaptation (UDA),
cross domain feature alignment has been widely explored to pull together the feature
distributions of different domains in order to learn domain-invariant
representations. However, the feature alignment is in general task-ignorant and
could result in degradation of the discrimination power of the feature
representation, thus hindering high performance. In this paper, we propose
a unified framework termed Feature Alignment and Restoration (FAR) to
simultaneously ensure high generalization and discrimination power of the
networks for effective DG and UDA. Specifically, we perform feature alignment
(FA) across domains by aligning the moments of the distributions of attentively
selected features to reduce their discrepancy. To ensure high discrimination,
we propose a Feature Restoration (FR) operation to distill task-relevant
features from the residual information and use them to compensate for the
aligned features. For better disentanglement, we enforce a dual ranking entropy
loss constraint in the FR step to encourage the separation of task-relevant and
task-irrelevant features. Extensive experiments on multiple classification
benchmarks demonstrate the high performance and strong generalization of our
FAR framework for both domain generalization and unsupervised domain
adaptation.
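The abstract describes three concrete steps: attentive selection of features, alignment of the moments of their distributions across domains, and a restoration step that distills task-relevant information from the residual under an entropy-based disentanglement constraint. The following is a minimal, hypothetical PyTorch sketch of these ideas, not the authors' released implementation; the module names, the use of first- and second-order moments, and the simplified entropy regularizer (standing in for the dual ranking entropy loss) are assumptions made for illustration.
```python
# Hypothetical sketch of the FAR ideas described in the abstract:
# (1) attentively select features, (2) align first/second moments across
# domains, (3) restore task-relevant information from the residual.
# This is NOT the authors' code; names and design choices are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentiveMomentAlignment(nn.Module):
    """Weights feature channels with a learned attention vector, then
    penalizes the discrepancy of the (weighted) mean and variance between
    source and target batches."""

    def __init__(self, dim):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, f_src, f_tgt):
        # Channel-wise attention computed from the pooled source statistics.
        a = self.attn(f_src.mean(dim=0))            # (dim,)
        s, t = f_src * a, f_tgt * a                 # attentively selected features
        loss_mean = F.mse_loss(s.mean(dim=0), t.mean(dim=0))
        loss_var = F.mse_loss(s.var(dim=0), t.var(dim=0))
        return s, t, loss_mean + loss_var


class FeatureRestoration(nn.Module):
    """Distills task-relevant information from the residual (the part
    suppressed by alignment) and adds it back to the aligned feature."""

    def __init__(self, dim):
        super().__init__()
        self.distill = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                     nn.Linear(dim, dim))

    def forward(self, f, f_aligned):
        residual = f - f_aligned                    # information removed by alignment
        restored = self.distill(residual)           # task-relevant part of the residual
        return f_aligned + restored, residual - restored  # compensated / task-irrelevant


def entropy_regularizer(logits_relevant, logits_irrelevant):
    """Stand-in for the dual entropy-style constraint: predictions from the
    task-relevant branch should be confident (low entropy), predictions from
    the task-irrelevant branch should be uninformative (high entropy)."""
    p_rel = F.softmax(logits_relevant, dim=1)
    p_irr = F.softmax(logits_irrelevant, dim=1)
    h_rel = -(p_rel * p_rel.clamp_min(1e-8).log()).sum(dim=1).mean()
    h_irr = -(p_irr * p_irr.clamp_min(1e-8).log()).sum(dim=1).mean()
    return h_rel - h_irr
```
In a full training loop one would combine a classification loss on the restored source features with the alignment loss and the entropy term; the exact attention design, the choice of moments, and the ranking form of the entropy constraint follow the paper and are only approximated here.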
Related papers
- Towards Domain-Specific Features Disentanglement for Domain
Generalization [23.13095840134744]
We propose a novel contrastive-based disentanglement method CDDG to exploit the over-looked domain-specific features.
Specifically, CDDG learns to decouple inherent mutually exclusive features by leveraging them in the latent space.
Experiments conducted on various benchmark datasets demonstrate the superiority of our method compared to other state-of-the-art approaches.
arXiv Detail & Related papers (2023-10-04T17:51:02Z)
- AIR-DA: Adversarial Image Reconstruction for Unsupervised Domain
Adaptive Object Detection [28.22783703278792]
We propose Adversarial Image Reconstruction (AIR) as the regularizer to facilitate the adversarial training of the feature extractor.
Our evaluations across several datasets of challenging domain shifts demonstrate that the proposed method outperforms all previous methods.
arXiv Detail & Related papers (2023-03-27T16:51:51Z)
- Domain generalization Person Re-identification on Attention-aware
multi-operation strategery [8.90472129039969]
Domain generalization person re-identification (DG Re-ID) aims to directly deploy a model trained on the source domain to the unseen target domain with good generalization.
In the existing DG Re-ID methods, invariant operations are effective in extracting domain generalization features.
An Attention-aware Multi-operation Strategery (AMS) for DG Re-ID is proposed to extract more generalized features.
arXiv Detail & Related papers (2022-10-19T09:18:46Z)
- Calibrated Feature Decomposition for Generalizable Person
Re-Identification [82.64133819313186]
Calibrated Feature Decomposition (CFD) module focuses on improving the generalization capacity for person re-identification.
A calibrated-and-standardized Batch normalization (CSBN) is designed to learn calibrated person representation.
arXiv Detail & Related papers (2021-11-27T17:12:43Z)
- ToAlign: Task-oriented Alignment for Unsupervised Domain Adaptation [84.90801699807426]
We study what features should be aligned across domains and propose to make the domain alignment proactively serve classification.
We explicitly decompose a feature in the source domain into a task-related/discriminative feature that should be aligned, and a task-irrelevant feature that should be avoided/ignored.
arXiv Detail & Related papers (2021-06-21T02:17:48Z)
- Disentanglement-based Cross-Domain Feature Augmentation for Effective
Unsupervised Domain Adaptive Person Re-identification [87.72851934197936]
Unsupervised domain adaptive (UDA) person re-identification (ReID) aims to transfer the knowledge from the labeled source domain to the unlabeled target domain for person matching.
One challenge is how to generate target domain samples with reliable labels for training.
We propose a Disentanglement-based Cross-Domain Feature Augmentation strategy.
arXiv Detail & Related papers (2021-03-25T15:28:41Z)
- Re-energizing Domain Discriminator with Sample Relabeling for
Adversarial Domain Adaptation [88.86865069583149]
Unsupervised domain adaptation (UDA) methods exploit domain adversarial training to align the features to reduce domain gap.
In this work, we propose an efficient optimization strategy named Re-enforceable Adversarial Domain Adaptation (RADA).
RADA aims to re-energize the domain discriminator during training by using dynamic domain labels.
arXiv Detail & Related papers (2021-03-22T08:32:55Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
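As a companion to the DCAN entry above, here is a minimal, hypothetical sketch of a domain-conditioned channel attention gate. The squeeze-and-excitation-style structure and the per-domain excitation branches are illustrative assumptions, not the published DCAN architecture.
```python
# Hypothetical sketch of domain-conditioned channel attention (cf. the DCAN
# entry above): a squeeze-and-excitation-style gate with one excitation
# branch per domain, so source and target batches can excite different channels.
# Names and structure are illustrative assumptions, not the published model.
import torch
import torch.nn as nn


class DomainConditionedChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16, num_domains=2):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)        # squeeze: global context per channel
        self.excite = nn.ModuleList([
            nn.Sequential(nn.Linear(channels, channels // reduction), nn.ReLU(),
                          nn.Linear(channels // reduction, channels), nn.Sigmoid())
            for _ in range(num_domains)            # one excitation branch per domain
        ])

    def forward(self, x, domain_idx):
        b, c, _, _ = x.shape
        z = self.pool(x).view(b, c)                # (B, C) channel descriptor
        w = self.excite[domain_idx](z).view(b, c, 1, 1)
        return x * w                               # re-weight channels per domain


# Usage sketch: route source batches through branch 0, target through branch 1.
# attn = DomainConditionedChannelAttention(channels=256)
# f_src = attn(src_feat, domain_idx=0); f_tgt = attn(tgt_feat, domain_idx=1)
```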
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.