Joint Distribution Alignment via Adversarial Learning for Domain
Adaptive Object Detection
- URL: http://arxiv.org/abs/2109.09033v1
- Date: Sun, 19 Sep 2021 00:27:08 GMT
- Title: Joint Distribution Alignment via Adversarial Learning for Domain
Adaptive Object Detection
- Authors: Bo Zhang, Tao Chen, Bin Wang, Ruoyao Li
- Abstract summary: Unsupervised domain adaptive object detection aims to adapt a well-trained detector from its original source domain with rich labeled data to a new target domain with unlabeled data.
Mainstream approaches have recently performed this task through adversarial learning, yet they still suffer from two limitations.
We propose a joint adaptive detection framework (JADF) to address the above challenges.
- Score: 11.262560426527818
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptive object detection aims to adapt a well-trained
detector from its original source domain with rich labeled data to a new target
domain with unlabeled data. Recently, mainstream approaches have performed this task
through adversarial learning, yet they still suffer from two limitations. First,
they mainly align the marginal distributions through unsupervised cross-domain
feature matching and ignore each feature's categorical and positional information,
which could be exploited for conditional alignment. Second, they treat all classes as
equally important for transferring cross-domain knowledge and ignore that
different classes usually have different transferability. In this paper, we
propose a joint adaptive detection framework (JADF) to address the above
challenges. First, an end-to-end joint adversarial adaptation framework for
object detection is proposed, which aligns both marginal and conditional
distributions between domains without introducing any extra hyperparameter.
Next, to consider the transferability of each object class, a metric for
class-wise transferability assessment is proposed, which is incorporated into
the JADF objective for domain adaptation. Further, an extended study from
unsupervised domain adaptation (UDA) to unsupervised few-shot domain adaptation
(UFDA) is conducted, where only a few unlabeled training images are available
in the unlabeled target domain. Extensive experiments validate that JADF is
effective in both the UDA and UFDA settings, achieving significant performance
gains over existing state-of-the-art cross-domain detection methods.
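To make the alignment mechanism concrete, here is a minimal PyTorch-style sketch of joint marginal and conditional adversarial alignment through a gradient reversal layer. The module names, the fusion of features with predicted class probabilities for the conditional discriminator, and the omission of the class-wise transferability weighting are simplifying assumptions for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass, negates gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

def grad_reverse(x):
    return GradReverse.apply(x)

class DomainDiscriminator(nn.Module):
    """Binary classifier predicting source (0) vs. target (1) domain."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, x):
        return self.net(x)

def joint_adversarial_loss(feat_s, prob_s, feat_t, prob_t, d_marginal, d_conditional):
    """Marginal alignment on raw features plus conditional alignment on features
    concatenated with predicted class probabilities (illustrative)."""
    bce = nn.BCEWithLogitsLoss()

    # Marginal alignment: the discriminator sees features only.
    logits_s = d_marginal(grad_reverse(feat_s))
    logits_t = d_marginal(grad_reverse(feat_t))
    loss_marginal = bce(logits_s, torch.zeros_like(logits_s)) + \
                    bce(logits_t, torch.ones_like(logits_t))

    # Conditional alignment: the discriminator sees features fused with class predictions.
    cond_s = torch.cat([feat_s, prob_s], dim=1)
    cond_t = torch.cat([feat_t, prob_t], dim=1)
    logits_s = d_conditional(grad_reverse(cond_s))
    logits_t = d_conditional(grad_reverse(cond_t))
    loss_conditional = bce(logits_s, torch.zeros_like(logits_s)) + \
                       bce(logits_t, torch.ones_like(logits_t))

    return loss_marginal + loss_conditional
```

In a full detector, feat_s and feat_t would be pooled region or image features and prob_s and prob_t the corresponding class predictions; the JADF objective would additionally weight these terms by the class-wise transferability metric, which is omitted here.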
Related papers
- Reducing Source-Private Bias in Extreme Universal Domain Adaptation [11.875619863954238]
Universal Domain Adaptation (UniDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
We show that state-of-the-art methods struggle when the source domain has significantly more non-overlapping classes than overlapping ones.
We propose using self-supervised learning to preserve the structure of the target data.
arXiv Detail & Related papers (2024-10-15T04:51:37Z)
- DATR: Unsupervised Domain Adaptive Detection Transformer with Dataset-Level Adaptation and Prototypical Alignment [7.768332621617199]
We introduce a strong DETR-based detector named Domain Adaptive detection TRansformer (DATR) for unsupervised domain adaptation of object detection.
Our proposed DATR incorporates a mean-teacher based self-training framework, utilizing pseudo-labels generated by the teacher model to further mitigate domain bias.
Experiments demonstrate superior performance and generalization capabilities of our proposed DATR in multiple domain adaptation scenarios.
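The mean-teacher self-training component referenced above typically works as sketched below: the teacher is an exponential moving average (EMA) of the student, and only confident teacher detections are kept as pseudo-labels. The momentum, score threshold, and torchvision-style detector interface are illustrative assumptions, not DATR's actual settings.

```python
import torch

@torch.no_grad()
def update_teacher(teacher, student, momentum=0.999):
    """Exponential moving average of student weights into the teacher."""
    for p_t, p_s in zip(teacher.parameters(), student.parameters()):
        p_t.data.mul_(momentum).add_(p_s.data, alpha=1.0 - momentum)

@torch.no_grad()
def make_pseudo_labels(teacher, target_images, score_threshold=0.8):
    """Keep only confident teacher detections on unlabeled target images."""
    teacher.eval()
    outputs = teacher(target_images)  # assumed torchvision-style output: dicts with 'boxes', 'labels', 'scores'
    pseudo = []
    for out in outputs:
        keep = out["scores"] >= score_threshold
        pseudo.append({"boxes": out["boxes"][keep], "labels": out["labels"][keep]})
    return pseudo
```

The student is then trained on these filtered detections as if they were ground truth, and update_teacher is called after each optimizer step.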
arXiv Detail & Related papers (2024-05-20T03:48:45Z)
- Making the Best of Both Worlds: A Domain-Oriented Transformer for Unsupervised Domain Adaptation [31.150256154504696]
Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier.
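Taken at face value, this amounts to maintaining two separate embedding spaces instead of one shared space, each with its own classifier. The sketch below shows that structure; the layer sizes and the simple averaging of the two heads for target prediction are illustrative assumptions, not the paper's exact design.

```python
import torch.nn as nn

class DomainOrientedHeads(nn.Module):
    """Two domain-oriented embedding spaces, each with its own classifier (illustrative)."""
    def __init__(self, feat_dim, num_classes, emb_dim=256):
        super().__init__()
        self.source_proj = nn.Linear(feat_dim, emb_dim)  # source-oriented space
        self.target_proj = nn.Linear(feat_dim, emb_dim)  # target-oriented space
        self.source_cls = nn.Linear(emb_dim, num_classes)
        self.target_cls = nn.Linear(emb_dim, num_classes)

    def forward(self, feats):
        logits_src_space = self.source_cls(self.source_proj(feats))
        logits_tgt_space = self.target_cls(self.target_proj(feats))
        # One simple way to combine the two heads for target prediction.
        return (logits_src_space + logits_tgt_space) / 2
```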
arXiv Detail & Related papers (2022-08-02T01:38:37Z)
- Decompose to Adapt: Cross-domain Object Detection via Feature Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
Our DDF method outperforms state-of-the-art methods on four benchmark UDA object detection tasks, demonstrating its effectiveness and wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z)
- CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation [1.2691047660244335]
Unsupervised Domain Adaptation (UDA) aims to align the labeled source distribution with the unlabeled target distribution to obtain domain invariant predictive models.
We propose a Contrastive Learning framework for semi-supervised Domain Adaptation (CLDA) that attempts to bridge the intra-domain gap.
CLDA achieves state-of-the-art results on all evaluated benchmark datasets.
arXiv Detail & Related papers (2021-06-30T20:23:19Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
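A common way to instantiate such cross-domain contrastive alignment is an InfoNCE-style loss in which target features are attracted to source features of the same (pseudo-)class and repelled from the rest. The pairing rule, temperature, and normalization below are assumptions for illustration, not necessarily this paper's exact loss.

```python
import torch
import torch.nn.functional as F

def cross_domain_infonce(feat_src, feat_tgt, labels_src, pseudo_tgt, temperature=0.07):
    """InfoNCE-style loss: each target feature is attracted to same-class source
    features and repelled from the others (illustrative pairing rule)."""
    z_s = F.normalize(feat_src, dim=1)
    z_t = F.normalize(feat_tgt, dim=1)
    logits = z_t @ z_s.t() / temperature  # (N_t, N_s) scaled cosine similarities
    pos_mask = (pseudo_tgt.unsqueeze(1) == labels_src.unsqueeze(0)).float()

    log_prob = F.log_softmax(logits, dim=1)
    pos_per_row = pos_mask.sum(dim=1).clamp(min=1.0)
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_per_row
    valid = pos_mask.sum(dim=1) > 0  # skip target samples with no same-class source sample
    return loss[valid].mean()
```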
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion, called ILA-DA, for source-to-target transfer during adaptation.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
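The pair-extraction step described above can be approximated by thresholding a cross-domain affinity matrix; the cosine affinity and the two thresholds below are illustrative assumptions rather than ILA-DA's actual criterion.

```python
import torch
import torch.nn.functional as F

def extract_pairs(feat_src, feat_tgt, sim_threshold=0.7, dissim_threshold=0.3):
    """Pick similar and dissimilar source-target index pairs from a cosine
    affinity matrix (illustrative thresholds)."""
    z_s = F.normalize(feat_src, dim=1)
    z_t = F.normalize(feat_tgt, dim=1)
    affinity = z_t @ z_s.t()  # (N_t, N_s)
    similar = (affinity >= sim_threshold).nonzero(as_tuple=False)       # (target_idx, source_idx)
    dissimilar = (affinity <= dissim_threshold).nonzero(as_tuple=False)
    return similar, dissimilar
```

The similar pairs would then be pulled together and the dissimilar pairs pushed apart by the multi-sample contrastive loss.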
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent domain adaptation methods extract effective features by incorporating pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)