CHATTY: Coupled Holistic Adversarial Transport Terms with Yield for
Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2304.09623v2
- Date: Thu, 20 Apr 2023 16:39:43 GMT
- Title: CHATTY: Coupled Holistic Adversarial Transport Terms with Yield for
Unsupervised Domain Adaptation
- Authors: Chirag P, Mukta Wagle, Ravi Kant Gupta, Pranav Jeevan, Amit Sethi
- Abstract summary: We propose a new technique called CHATTY: Coupled Holistic Adversarial Transport Terms with Yield for Unsupervised Domain Adaptation.
Adversarial training is commonly used for learning domain-invariant representations by reversing the gradients from a domain discriminator head to train the feature extractor layers of a neural network.
We introduce a sub-network which displaces the outputs of the source and target domain samples in a learnable manner.
- Score: 1.87446486236017
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new technique called CHATTY: Coupled Holistic Adversarial
Transport Terms with Yield for Unsupervised Domain Adaptation. Adversarial
training is commonly used for learning domain-invariant representations by
reversing the gradients from a domain discriminator head to train the feature
extractor layers of a neural network. We propose significant modifications to
the adversarial head, its training objective, and the classifier head. With the
aim of reducing class confusion, we introduce a sub-network which displaces the
classifier outputs of the source and target domain samples in a learnable
manner. We control this movement using a novel transport loss that spreads
class clusters away from each other and makes it easier for the classifier to
find the decision boundaries for both the source and target domains. Adding
this new loss to a careful selection of previously proposed losses improves
UDA results compared to previous state-of-the-art methods on benchmark
datasets. We show the importance of the proposed loss term using ablation
studies and visualizations of the movement of target domain samples in
representation space.
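To make the two ingredients above concrete, the following is a minimal sketch rather than the authors' implementation: a standard gradient reversal layer (the mechanism the adversarial head relies on) together with a hypothetical cluster-spreading penalty standing in for the transport loss. The function names, the margin value, and the use of (pseudo-)labels for target samples are all assumptions.

```python
# Minimal sketch (assumptions, not the paper's code): gradient reversal plus a
# hypothetical loss that pushes per-class classifier outputs apart.
import torch
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates and scales gradients on backward."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


def spread_loss(logits, labels, margin=5.0):
    """Hypothetical stand-in for the transport term: keep the mean classifier
    output of each (pseudo-)class at least `margin` away from the others."""
    class_means = torch.stack(
        [logits[labels == c].mean(dim=0) for c in labels.unique()]
    )
    dists = torch.cdist(class_means, class_means)
    off_diag = dists[~torch.eye(len(class_means), dtype=torch.bool, device=dists.device)]
    return F.relu(margin - off_diag).mean()  # hinge: penalize clusters that sit too close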
Related papers
- Adversarial Semi-Supervised Domain Adaptation for Semantic Segmentation:
A New Role for Labeled Target Samples [7.199108088621308]
We design new training objective losses for cases when labeled target data behave as source samples or as real target samples.
To support our approach, we consider a complementary method that mixes source and labeled target data, then applies the same adaptation process.
We illustrate our findings through extensive experiments on the benchmarks GTA5, SYNTHIA, and Cityscapes.
arXiv Detail & Related papers (2023-12-12T15:40:22Z)
- Self-training through Classifier Disagreement for Cross-Domain Opinion
Target Extraction [62.41511766918932]
Opinion target extraction (OTE) or aspect extraction (AE) is a fundamental task in opinion mining.
Recent work focuses on cross-domain OTE, which is typically encountered in real-world scenarios.
We propose a new SSL approach that selects unlabelled target samples on which the outputs of a domain-specific teacher network and a student network disagree.
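A minimal sketch of the disagreement-based selection described in this entry, assuming teacher and student classifiers that return logits; the function name and the hard argmax comparison are illustrative assumptions rather than the paper's exact procedure.

```python
# Sketch (assumption): keep unlabelled target samples on which teacher and student disagree.
import torch


@torch.no_grad()
def select_disagreement_samples(teacher, student, target_batch):
    """Return the target samples whose predicted labels differ between the
    domain-specific teacher and the student, plus the teacher's labels for them."""
    t_pred = teacher(target_batch).argmax(dim=1)
    s_pred = student(target_batch).argmax(dim=1)
    mask = t_pred != s_pred  # disagreement marks potentially informative samples
    return target_batch[mask], t_pred[mask]
```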
arXiv Detail & Related papers (2023-02-28T16:31:17Z)
- Labeling Where Adapting Fails: Cross-Domain Semantic Segmentation with
Point Supervision via Active Selection [81.703478548177]
Training models dedicated to semantic segmentation requires a large amount of pixel-wise annotated data.
Unsupervised domain adaptation approaches aim at aligning the feature distributions between the labeled source and the unlabeled target data.
Previous works attempted to include human interactions in this process in the form of sparse single-pixel annotations in the target data.
We propose a new domain adaptation framework for semantic segmentation with annotated points via active selection.
arXiv Detail & Related papers (2022-06-01T01:52:28Z)
- Unsupervised Domain Adaptation for Retinal Vessel Segmentation with
Adversarial Learning and Transfer Normalization [22.186070895966022]
We propose an entropy-based adversarial learning strategy to reduce the distribution discrepancy between source and target domains.
A new transfer normalization layer is proposed to further boost the transferability of the deep network.
Our approach yields significant performance gains compared to other state-of-the-art methods.
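A rough sketch of the entropy-based adversarial idea summarized in this entry: the per-pixel prediction entropy of the segmentation output is fed to a domain discriminator so that target predictions are pushed toward source-like statistics. The discriminator interface, label convention, and loss form below are assumptions.

```python
# Sketch (assumptions): adversarial alignment on per-pixel entropy maps.
import torch
import torch.nn.functional as F


def entropy_map(logits, eps=1e-8):
    """Per-pixel prediction entropy of a segmentation output of shape (B, C, H, W)."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + eps)).sum(dim=1, keepdim=True)  # (B, 1, H, W)


def adversarial_entropy_loss(discriminator, target_logits):
    """Fool the discriminator into labelling target entropy maps as source-like (label 1)."""
    d_out = discriminator(entropy_map(target_logits))
    return F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
```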
arXiv Detail & Related papers (2021-08-04T02:45:37Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
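A loose sketch of the multi-sample contrastive alignment mentioned in this entry, assuming similar/dissimilar source-target pairs have already been mined into a boolean mask; the margin-based form below is illustrative and not necessarily the exact ILA-DA loss.

```python
# Sketch (assumptions): contrastive loss over mined similar/dissimilar source-target pairs.
import torch
import torch.nn.functional as F


def pairwise_contrastive_loss(src_feat, tgt_feat, similar_mask, margin=1.0):
    """src_feat: (Ns, D), tgt_feat: (Nt, D), similar_mask: (Ns, Nt) bool.
    Pull similar source-target pairs together and push dissimilar ones beyond `margin`."""
    dist = torch.cdist(F.normalize(src_feat, dim=1), F.normalize(tgt_feat, dim=1))
    pos = dist[similar_mask].pow(2).mean()                    # attract similar pairs
    neg = F.relu(margin - dist[~similar_mask]).pow(2).mean()  # repel dissimilar pairs
    return pos + neg
```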
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Re-energizing Domain Discriminator with Sample Relabeling for
Adversarial Domain Adaptation [88.86865069583149]
Unsupervised domain adaptation (UDA) methods exploit domain adversarial training to align the features to reduce domain gap.
In this work, we propose an efficient optimization strategy named Re-enforceable Adversarial Domain Adaptation (RADA).
RADA aims to re-energize the domain discriminator during the training by using dynamic domain labels.
arXiv Detail & Related papers (2021-03-22T08:32:55Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic
Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks based on deep convolutional neural networks (CNNs) across source and target domains do not consider inter-class variation within the target domain itself or within each estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts adaptation performance in semantic segmentation, outperforming the state of the art on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Domain Adaptation in LiDAR Semantic Segmentation by Aligning Class
Distributions [9.581605678437032]
This work addresses the problem of unsupervised domain adaptation for LiDAR semantic segmentation models.
Our approach combines novel ideas on top of the current state-of-the-art approaches and yields new state-of-the-art results.
arXiv Detail & Related papers (2020-10-23T08:52:15Z)
- Unsupervised Cross-domain Image Classification by Distance Metric Guided
Feature Alignment [11.74643883335152]
Unsupervised domain adaptation is a promising avenue that transfers knowledge from a source domain to a target domain.
We propose distance metric guided feature alignment (MetFA) to extract discriminative as well as domain-invariant features on both source and target domains.
Our model integrates class distribution alignment to transfer semantic knowledge from a source domain to a target domain.
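One way the class-distribution-alignment component mentioned here could look, as a hedged sketch only: match the empirical source class distribution with the mean predicted target class distribution via a symmetric KL term. The exact formulation in MetFA may differ; everything below is an assumption.

```python
# Sketch (assumption): symmetric KL between source and target class distributions.
import torch
import torch.nn.functional as F


def class_distribution_alignment(src_labels, tgt_logits, num_classes, eps=1e-8):
    """src_labels: (Ns,) integer labels; tgt_logits: (Nt, C) classifier outputs."""
    p_src = torch.bincount(src_labels, minlength=num_classes).float()
    p_src = p_src / (p_src.sum() + eps)                # empirical source class distribution
    p_tgt = F.softmax(tgt_logits, dim=1).mean(dim=0)   # mean predicted target distribution
    kl = lambda p, q: (p * torch.log((p + eps) / (q + eps))).sum()
    return 0.5 * (kl(p_src, p_tgt) + kl(p_tgt, p_src))
```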
arXiv Detail & Related papers (2020-08-19T13:36:57Z)
- Supervised Domain Adaptation using Graph Embedding [86.3361797111839]
Domain adaptation methods assume that the distributions of the two domains are shifted and attempt to realign them.
We propose a generic framework based on graph embedding.
We show that the proposed approach leads to a powerful Domain Adaptation framework.
arXiv Detail & Related papers (2020-03-09T12:25:13Z)
- Unsupervised Domain Adaptive Object Detection using Forward-Backward
Cyclic Adaptation [13.163271874039191]
We present a novel approach to perform the unsupervised domain adaptation for object detection through forward-backward cyclic (FBC) training.
Recent adversarial training-based domain adaptation methods have shown their effectiveness in minimizing domain discrepancy via marginal feature distribution alignment.
We propose Forward-Backward Cyclic Adaptation, which iteratively computes adaptation from source to target via backward hopping and from target to source via forward passing.
arXiv Detail & Related papers (2020-02-03T06:24:58Z)