Semantic Concentration for Domain Adaptation
- URL: http://arxiv.org/abs/2108.05720v1
- Date: Thu, 12 Aug 2021 13:04:36 GMT
- Title: Semantic Concentration for Domain Adaptation
- Authors: Shuang Li, Mixue Xie, Fangrui Lv, Chi Harold Liu, Jian Liang, Chen
Qin, Wei Li
- Abstract summary: Domain adaptation (DA) addresses label annotation and dataset bias issues by transferring knowledge from a label-rich source domain to a related but unlabeled target domain.
A mainstream line of DA methods aligns the feature distributions of the two domains.
We propose Semantic Concentration for Domain Adaptation to encourage the model to concentrate on the most principal features.
- Score: 23.706231329913113
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation (DA) addresses label annotation and dataset bias
issues by transferring knowledge from a label-rich source domain to a related
but unlabeled target domain. A mainstream line of DA methods aligns the feature
distributions of the two domains. However, the majority of them focus on the
entire image features, where irrelevant semantic information, e.g., the messy
background, is inevitably embedded. Enforcing feature alignment in such cases
hinders the correct matching of objects and consequently leads to semantically
negative transfer due to the confusion of irrelevant semantics. To tackle this
issue, we propose Semantic Concentration for Domain Adaptation (SCDA), which
encourages the model to concentrate on the most principal features via
pair-wise adversarial alignment of prediction distributions. Specifically, we
train the classifier to maximize, class-wise, the prediction distribution
divergence of each sample pair, which enables the model to find the regions
with large differences among samples of the same class. Meanwhile, the feature
extractor attempts to minimize that discrepancy, which suppresses the features
of dissimilar regions among same-class samples and accentuates the features of
the principal parts. As a general method, SCDA can be easily integrated into
various DA methods as a regularizer to further boost their performance.
Extensive experiments on cross-domain benchmarks show the efficacy of SCDA.
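To make the mechanism concrete, below is a minimal PyTorch-style sketch of the pair-wise adversarial alignment of prediction distributions described above. It is an illustration under stated assumptions, not the paper's exact formulation: symmetric KL is assumed as the divergence, alternating updates stand in for whatever adversarial scheme the paper uses, `lam` is a hypothetical trade-off weight, and the labels `y` would be source labels (or target pseudo-labels).

```python
# Hypothetical sketch of SCDA-style pair-wise adversarial alignment.
# Assumptions: symmetric KL as the divergence, alternating updates instead
# of a gradient-reversal layer, `lam` as an illustrative trade-off weight.
import torch
import torch.nn.functional as F

def pairwise_prediction_divergence(logits, labels):
    """Mean symmetric KL between prediction distributions of same-class pairs."""
    probs = F.softmax(logits, dim=1)
    log_probs = F.log_softmax(logits, dim=1)
    same_class = labels.unsqueeze(0).eq(labels.unsqueeze(1))  # B x B pair mask
    same_class.fill_diagonal_(False)
    if not same_class.any():
        return logits.new_zeros(())
    # kl[i, j] = KL(p_i || p_j) for every ordered pair in the batch
    kl = (probs.unsqueeze(1) * (log_probs.unsqueeze(1) - log_probs.unsqueeze(0))).sum(-1)
    return (kl + kl.t())[same_class].mean()

def scda_step(extractor, classifier, x, y, opt_f, opt_c, lam=0.1):
    # Classifier ascends the divergence: it looks for regions where
    # same-class samples differ, while still fitting the labels.
    logits = classifier(extractor(x))
    opt_c.zero_grad()
    (F.cross_entropy(logits, y) - lam * pairwise_prediction_divergence(logits, y)).backward()
    opt_c.step()
    # Extractor descends it: features of dissimilar regions are suppressed,
    # features of the principal parts are accentuated.
    logits = classifier(extractor(x))
    opt_f.zero_grad()
    (F.cross_entropy(logits, y) + lam * pairwise_prediction_divergence(logits, y)).backward()
    opt_f.step()
```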
Related papers
- Semi Supervised Heterogeneous Domain Adaptation via Disentanglement and Pseudo-Labelling [4.33404822906643]
Semi-supervised domain adaptation methods leverage information from a source labelled domain to generalize over a scarcely labelled target domain.
Such a setting is denoted as Semi-Supervised Heterogeneous Domain Adaptation (SSHDA).
We introduce SHeDD (Semi-supervised Heterogeneous Domain adaptation via Disentanglement), an end-to-end neural framework tailored to learning in the target domain.
arXiv Detail & Related papers (2024-06-20T08:02:49Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch; a sketch of such a loss follows below.
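As a rough illustration of the memory bank-based MMD idea, here is a hedged sketch of an RBF-kernel MMD loss between cached source-like features and a batch of target-specific features. The bank bookkeeping, kernel bandwidth, and variable names are assumptions, not DaC's implementation.

```python
# Hypothetical sketch: squared MMD with an RBF kernel between a memory bank
# of source-like features and a batch of target-specific features.
import torch

def mmd_rbf(x, y, sigma=1.0):
    """Squared MMD between samples x (n x d) and y (m x d), RBF kernel."""
    def kernel(a, b):
        dists = torch.cdist(a, b).pow(2)  # pairwise squared distances
        return torch.exp(-dists / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Usage (illustrative shapes): the memory bank is just a tensor of cached
# source-like features; the batch holds target-specific features.
bank_feats = torch.randn(512, 256)   # hypothetical memory bank
batch_feats = torch.randn(64, 256)   # hypothetical target-specific batch
loss = mmd_rbf(bank_feats, batch_feats)
```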
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across the source and target domains, and utilize a multi-sample contrastive loss to drive the domain alignment process (sketched after this entry).
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
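A hedged sketch of a multi-sample contrastive loss over cross-domain pairs, in the spirit of ILA-DA: the positive mask is assumed to come from the affinity criterion above, and `tau` is an illustrative temperature, so treat this as a sketch rather than the paper's exact loss.

```python
# Hypothetical multi-sample contrastive loss over source-target pairs.
import torch
import torch.nn.functional as F

def multi_sample_contrastive(src_feats, tgt_feats, pos_mask, tau=0.1):
    """pos_mask[i, j] = True if source sample i and target sample j are
    deemed similar by the affinity criterion (assumed given)."""
    src = F.normalize(src_feats, dim=1)
    tgt = F.normalize(tgt_feats, dim=1)
    sim = src @ tgt.t() / tau  # cosine similarities scaled by temperature
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    # average log-likelihood over all positives for each source sample
    pos_per_row = pos_mask.float().sum(1).clamp(min=1)
    return -(log_prob * pos_mask.float()).sum(1).div(pos_per_row).mean()
```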
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes that of the target domain.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) to align the relevant categories across the two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Simultaneous Semantic Alignment Network for Heterogeneous Domain Adaptation [67.37606333193357]
We propose a Simultaneous Semantic Alignment Network (SSAN) to simultaneously exploit correlations among categories and align the centroids for each category across domains.
By leveraging target pseudo-labels, a robust triplet-centroid alignment mechanism is explicitly applied to align feature representations for each category (see the sketch after this entry).
Experiments on various HDA tasks across text-to-image, image-to-image and text-to-text successfully validate the superiority of our SSAN against state-of-the-art HDA methods.
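To illustrate the centroid idea, here is a minimal sketch of per-class centroid alignment using target pseudo-labels. SSAN's actual triplet construction and robustness mechanisms are not reproduced; the function names and the MSE objective are assumptions.

```python
# Hypothetical per-class centroid alignment with target pseudo-labels.
import torch
import torch.nn.functional as F

def class_centroids(feats, labels, num_classes):
    """Mean feature per class, plus per-class sample counts for this batch."""
    cents = feats.new_zeros(num_classes, feats.size(1))
    counts = feats.new_zeros(num_classes)
    cents.index_add_(0, labels, feats)
    counts.index_add_(0, labels, torch.ones_like(counts[labels]))
    return cents / counts.clamp(min=1).unsqueeze(1), counts

def centroid_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo, num_classes):
    src_c, src_n = class_centroids(src_feats, src_labels, num_classes)
    tgt_c, tgt_n = class_centroids(tgt_feats, tgt_pseudo, num_classes)
    present = (src_n > 0) & (tgt_n > 0)  # align only classes seen in both batches
    if not present.any():
        return src_feats.new_zeros(())
    return F.mse_loss(src_c[present], tgt_c[present])
```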
arXiv Detail & Related papers (2020-08-04T16:20:37Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims at adapting a model trained on a well-labeled source domain to an unlabeled target domain drawn from a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)