Deep Residual Correction Network for Partial Domain Adaptation
- URL: http://arxiv.org/abs/2004.04914v1
- Date: Fri, 10 Apr 2020 06:07:16 GMT
- Title: Deep Residual Correction Network for Partial Domain Adaptation
- Authors: Shuang Li, Chi Harold Liu, Qiuxia Lin, Qi Wen, Limin Su, Gao Huang,
Zhengming Ding
- Abstract summary: Deep domain adaptation methods have achieved appealing performance by learning transferable representations from a well-labeled source domain to a different but related unlabeled target domain.
This paper proposes an efficiently-implemented Deep Residual Correction Network (DRCN).
Comprehensive experiments on partial, traditional and fine-grained cross-domain visual recognition demonstrate that DRCN is superior to competitive deep domain adaptation approaches.
- Score: 79.27753273651747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep domain adaptation methods have achieved appealing performance by
learning transferable representations from a well-labeled source domain to a
different but related unlabeled target domain. Most existing works assume
source and target data share an identical label space, an assumption that is
often difficult to satisfy in many real-world applications. With the
emergence of big data, there is a more practical scenario called partial
domain adaptation, where we often have access to a large-scale source domain
while working on a relatively small-scale target domain. In this case, the
conventional domain adaptation assumption should be relaxed, and the target
label space tends to be a subset of the source label space. Intuitively,
reinforcing the positive effects of the most relevant source subclasses and
reducing the negative impacts of irrelevant source subclasses are of vital
importance to addressing the partial domain adaptation challenge. This paper proposes
an efficiently-implemented Deep Residual Correction Network (DRCN) by plugging
one residual block into the source network along with the task-specific feature
layer, which effectively enhances the adaptation from source to target and
explicitly weakens the influence from the irrelevant source classes.
Specifically, the plugged residual block, which consists of several
fully-connected layers, deepens the basic network and correspondingly boosts
its feature representation capability. Moreover, we design a weighted
class-wise domain alignment loss to couple two domains by matching the feature
distributions of shared classes between source and target. Comprehensive
experiments on partial, traditional and fine-grained cross-domain visual
recognition demonstrate that DRCN is superior to competitive deep domain
adaptation approaches.
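The abstract's two ingredients can be sketched compactly. The following NumPy illustration is a hypothetical reading of the paper, not the authors' implementation: `residual_correction` adds a small fully-connected perturbation g(f) to the task-specific feature f via an identity shortcut, `class_weights` down-weights source classes that the classifier's target predictions deem irrelevant, and `weighted_classwise_alignment` stands in for the weighted class-wise domain alignment loss by matching class-conditional feature means (the actual loss in the paper may use a different distribution-matching statistic). All function names and shapes here are assumptions for illustration.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_correction(feats, w1, b1, w2, b2):
    """Residual block of fully-connected layers: output = feats + g(feats).
    The small correction g() compensates for the domain gap while the
    identity path preserves the original task-specific features."""
    g = relu(feats @ w1 + b1) @ w2 + b2
    return feats + g

def class_weights(target_probs):
    """Average the classifier's predicted probabilities on target data over
    each source class; source classes absent from the target label space
    receive small weights. Normalized so the largest weight is 1."""
    w = target_probs.mean(axis=0)
    return w / w.max()

def weighted_classwise_alignment(src_feats, src_labels,
                                 tgt_feats, tgt_pseudo,
                                 weights, num_classes):
    """Weighted class-wise alignment (illustrative stand-in): squared
    distance between class-conditional feature means of the two domains,
    weighted per source class."""
    loss = 0.0
    for c in range(num_classes):
        s = src_feats[src_labels == c]
        t = tgt_feats[tgt_pseudo == c]
        if len(s) == 0 or len(t) == 0:
            continue  # class missing from one domain in this batch
        loss += weights[c] * np.sum((s.mean(axis=0) - t.mean(axis=0)) ** 2)
    return loss

# Toy partial-DA setup: 4 source classes, target covers only classes {0, 1}.
rng = np.random.default_rng(0)
d, k = 8, 4                                   # feature dim, source classes
src = rng.normal(size=(16, d)); src_y = rng.integers(0, k, 16)
tgt = rng.normal(size=(10, d)); tgt_y = rng.integers(0, 2, 10)
probs = np.full((10, k), 1e-3)                # near-one-hot target predictions
probs[np.arange(10), tgt_y] = 1.0
probs /= probs.sum(axis=1, keepdims=True)
w = class_weights(probs)                      # classes 2, 3 get tiny weights
corrected = residual_correction(
    tgt, rng.normal(size=(d, d)) * 0.1, np.zeros(d),
    rng.normal(size=(d, d)) * 0.1, np.zeros(d))
loss = weighted_classwise_alignment(src, src_y, corrected, tgt_y, w, k)
```

In this toy run the irrelevant source classes 2 and 3 receive near-zero weights, so their contribution to the alignment loss vanishes, which is the intuition the abstract describes for suppressing negative transfer.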
Related papers
- Reducing Source-Private Bias in Extreme Universal Domain Adaptation [11.875619863954238]
Universal Domain Adaptation (UniDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We show that state-of-the-art methods struggle when the source domain has significantly more non-overlapping classes than overlapping ones.
We propose using self-supervised learning to preserve the structure of the target data.
arXiv Detail & Related papers (2024-10-15T04:51:37Z)
- From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation aims at acquiring knowledge from a labeled source domain and transferring it to an unlabeled target domain under distribution shift.
Recent advances show that large-scale deep pre-trained models carry rich knowledge for tackling diverse small-scale downstream tasks.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical class space assumption so that the source class space subsumes the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z)
- Domain Adaptive Semantic Segmentation without Source Data [50.18389578589789]
We investigate domain adaptive semantic segmentation without source data, which assumes that the model is pre-trained on the source domain.
We propose an effective framework for this challenging problem with two components: positive learning and negative learning.
Our framework can be easily implemented and incorporated with other methods to further enhance the performance.
arXiv Detail & Related papers (2021-10-13T04:12:27Z)
- Multilevel Knowledge Transfer for Cross-Domain Object Detection [26.105283273950942]
Domain shift is a well-known problem where a model trained on a particular domain (source) does not perform well when exposed to samples from a different domain (target).
In this work, we address the domain shift problem for the object detection task.
Our approach relies on gradually removing the domain shift between the source and the target domains.
arXiv Detail & Related papers (2021-08-02T15:24:40Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Physically-Constrained Transfer Learning through Shared Abundance Space for Hyperspectral Image Classification [14.840925517957258]
We propose a new transfer learning scheme to bridge the gap between the source and target domains.
The proposed method is referred to as physically-constrained transfer learning through shared abundance space.
arXiv Detail & Related papers (2020-08-19T17:41:37Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims at adapting a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.