Reciprocal Normalization for Domain Adaptation
- URL: http://arxiv.org/abs/2112.10474v1
- Date: Mon, 20 Dec 2021 12:17:22 GMT
- Title: Reciprocal Normalization for Domain Adaptation
- Authors: Zhiyong Huang, Kekai Sheng, Ke Li, Jian Liang, Taiping Yao, Weiming
Dong, Dengwen Zhou, Xing Sun
- Abstract summary: Batch normalization (BN) is widely used in modern deep neural networks.
We propose a novel normalization method, Reciprocal Normalization (RN).
RN is more suitable for UDA problems and can be easily integrated into popular domain adaptation methods.
- Score: 31.293016830229313
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Batch normalization (BN) is widely used in modern deep neural networks
but has been shown to encode domain-related knowledge, which makes it
ineffective for cross-domain tasks such as unsupervised domain adaptation (UDA).
Existing BN variants aggregate source- and target-domain knowledge in the same
channel of the normalization module. However, the misalignment between the
features of corresponding channels across domains often leads to sub-optimal
transferability. In this paper, we exploit the cross-domain relation and
propose a novel normalization method, Reciprocal Normalization (RN).
Specifically, RN first introduces a Reciprocal Compensation (RC) module that
acquires a compensatory component for each channel in both domains based on the
cross-domain channel-wise correlation. RN then develops a Reciprocal
Aggregation (RA) module to adaptively aggregate each feature with its
cross-domain compensatory component. As an alternative to BN, RN is more
suitable for UDA problems and can be easily integrated into popular domain
adaptation methods. Experiments show that the proposed RN outperforms existing
normalization counterparts by a large margin and helps state-of-the-art
adaptation approaches achieve better results. The source code is available on
https://github.com/Openning07/reciprocal-normalization-for-DA.
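The two-stage RC/RA idea described above can be sketched roughly as follows. This is an illustrative NumPy sketch under assumed formulas (equal source/target batch shapes, a softmax over the cross-domain channel correlation matrix, and a fixed aggregation weight `alpha`); it is not the paper's exact implementation, which lives in the linked repository and uses learned, adaptive aggregation inside a CNN.

```python
import numpy as np

def reciprocal_norm(x_s, x_t, alpha=0.5, eps=1e-5):
    """Illustrative sketch of Reciprocal Normalization (RN).

    x_s, x_t: source/target feature batches of shape (N, C), i.e.
    feature maps with spatial dimensions already flattened away.
    All formulas here are assumptions for illustration only.
    """
    assert x_s.shape == x_t.shape, "sketch assumes equal batch shapes"
    n = x_s.shape[0]

    def bn(x):  # plain per-channel batch normalization
        return (x - x.mean(0)) / np.sqrt(x.var(0) + eps)

    xs_hat, xt_hat = bn(x_s), bn(x_t)

    # Reciprocal Compensation (RC): cross-domain channel-wise
    # correlation, turned into row-wise attention weights.
    corr = xs_hat.T @ xt_hat / n                        # (C, C)
    e = np.exp(corr - corr.max(axis=1, keepdims=True))
    attn = e / e.sum(axis=1, keepdims=True)             # row softmax

    # Compensatory component for each channel, built from the other
    # domain's normalized channels weighted by the correlation.
    comp_s = xt_hat @ attn.T                            # (N, C)
    comp_t = xs_hat @ attn                              # (N, C)

    # Reciprocal Aggregation (RA): blend each normalized feature with
    # its cross-domain compensation (fixed alpha here; the paper
    # aggregates adaptively).
    y_s = alpha * xs_hat + (1 - alpha) * comp_s
    y_t = alpha * xt_hat + (1 - alpha) * comp_t
    return y_s, y_t
```

The key difference from plain BN variants is that each channel is compensated by *other* channels of the opposite domain, weighted by their correlation, rather than by the statistics of the same channel index alone.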
Related papers
- MLNet: Mutual Learning Network with Neighborhood Invariance for
Universal Domain Adaptation [70.62860473259444]
Universal domain adaptation (UniDA) is a practical but challenging problem.
Existing UniDA methods may overlook intra-domain variations in the target domain.
We propose a novel Mutual Learning Network (MLNet) with neighborhood invariance for UniDA.
arXiv Detail & Related papers (2023-12-13T03:17:34Z)
- Unsupervised Domain Adaptation via Domain-Adaptive Diffusion [31.802163238282343]
Unsupervised Domain Adaptation (UDA) is quite challenging due to the large distribution discrepancy between the source domain and the target domain.
Inspired by diffusion models, which can gradually convert data distributions across a large gap, we explore diffusion techniques to handle the challenging UDA task.
Our method outperforms the current state of the art by a large margin on three widely used UDA datasets.
arXiv Detail & Related papers (2023-08-26T14:28:18Z)
- Domain Generalization through the Lens of Angular Invariance [44.76809026901016]
Domain generalization (DG) aims at generalizing a classifier trained on multiple source domains to an unseen target domain with domain shift.
We propose a novel deep DG method called Angular Invariance Domain Generalization Network (AIDGN)
arXiv Detail & Related papers (2022-10-28T02:05:38Z)
- Generalizable Person Re-Identification via Self-Supervised Batch Norm Test-Time Adaption [63.7424680360004]
Batch Norm Test-time Adaption (BNTA) is a novel re-id framework that applies the self-supervised strategy to update BN parameters adaptively.
BNTA explores the domain-aware information within unlabeled target data before inference, and accordingly modulates the feature distribution normalized by BN to adapt to the target domain.
arXiv Detail & Related papers (2022-03-01T18:46:32Z)
- Adversarially Adaptive Normalization for Single Domain Generalization [71.80587939738672]
We propose a generic normalization approach, adaptive standardization and rescaling normalization (ASR-Norm)
ASR-Norm learns both the standardization and rescaling statistics via neural networks.
We show that ASR-Norm can bring consistent improvement to the state-of-the-art ADA approaches.
arXiv Detail & Related papers (2021-06-01T23:58:23Z)
- Deep Domain Generalization with Feature-norm Network [33.84004077585957]
We introduce an end-to-end feature-norm network (FNN) which is robust to negative transfer.
We also introduce a collaborative feature-norm network (CFNN) to further improve the capability of FNN.
arXiv Detail & Related papers (2021-04-28T06:13:47Z)
- Improving Unsupervised Domain Adaptation by Reducing Bi-level Feature Redundancy [14.94720207222332]
In this paper, we emphasize the significance of reducing feature redundancy for improving UDA in a bi-level way.
For the first level, we try to ensure compact domain-specific features with a transferable decorrelated normalization module.
In the second level, domain-invariant feature redundancy caused by the domain-shared representation is further mitigated.
arXiv Detail & Related papers (2020-12-28T08:00:56Z)
- Channel-wise Alignment for Adaptive Object Detection [66.76486843397267]
Generic object detection has been immensely promoted by the development of deep convolutional neural networks.
Existing methods on this task usually draw attention on the high-level alignment based on the whole image or object of interest.
In this paper, we realize adaptation from a thoroughly different perspective, i.e., channel-wise alignment.
arXiv Detail & Related papers (2020-09-07T02:42:18Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that the distributions of the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.