D3T: Distinctive Dual-Domain Teacher Zigzagging Across RGB-Thermal Gap for Domain-Adaptive Object Detection
- URL: http://arxiv.org/abs/2403.09359v1
- Date: Thu, 14 Mar 2024 13:05:43 GMT
- Title: D3T: Distinctive Dual-Domain Teacher Zigzagging Across RGB-Thermal Gap for Domain-Adaptive Object Detection
- Authors: Dinh Phat Do, Taehoon Kim, Jaemin Na, Jiwon Kim, Keonho Lee, Kyunghwan Cho, Wonjun Hwang
- Abstract summary: Domain adaptation for object detection typically entails transferring knowledge from one visible domain to another visible domain.
We propose a Distinctive Dual-Domain Teacher (D3T) framework that employs distinct training paradigms for each domain.
We validate our method through newly designed experimental protocols with well-known thermal datasets.
- Score: 15.071470389431672
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Domain adaptation for object detection typically entails transferring knowledge from one visible domain to another visible domain. However, there are limited studies on adapting from the visible to the thermal domain, because the domain gap between the visible and thermal domains is much larger than expected, and traditional domain adaptation cannot successfully facilitate learning in this situation. To overcome this challenge, we propose a Distinctive Dual-Domain Teacher (D3T) framework that employs distinct training paradigms for each domain. Specifically, we segregate the source and target training sets for building dual teachers and successively apply an exponential moving average from the student model to the individual teachers of each domain. The framework further incorporates a zigzag learning method between the dual teachers, facilitating a gradual transition from the visible to the thermal domain during training. We validate the superiority of our method through newly designed experimental protocols with well-known thermal datasets, i.e., FLIR and KAIST. Source code is available at https://github.com/EdwardDo69/D3T.
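The abstract names two mechanisms concrete enough to sketch: per-domain teachers updated by exponential moving average (EMA) from a shared student, and a zigzag alternation between those teachers. Below is a minimal PyTorch sketch of that structure; the `TinyDetector` stub, the alternation schedule, the MSE pseudo-label loss, and all hyperparameters are illustrative assumptions, not the authors' released code (see the repository linked above for that).

```python
import copy

import torch
import torch.nn as nn

class TinyDetector(nn.Module):
    """Toy stand-in for the real detection model (an assumption for brevity)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 4, 1))

    def forward(self, x):
        return self.net(x)

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, alpha: float = 0.999):
    """Teacher weights track the student as an exponential moving average."""
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(alpha).add_(s_p, alpha=1.0 - alpha)

student = TinyDetector()
rgb_teacher = copy.deepcopy(student)      # teacher for the visible domain
thermal_teacher = copy.deepcopy(student)  # teacher for the thermal domain
optimizer = torch.optim.SGD(student.parameters(), lr=0.01)

for step in range(100):
    # Zigzag alternation (the exact schedule is an assumption): supervision
    # switches between domains, leaning toward thermal as training progresses.
    use_thermal = step % 2 == 1 or step >= 50
    teacher = thermal_teacher if use_thermal else rgb_teacher
    images = torch.randn(2, 3, 32, 32)        # stand-in batch for that domain
    pseudo_labels = teacher(images).detach()  # teacher supervises the student
    loss = nn.functional.mse_loss(student(images), pseudo_labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)  # only the active domain's teacher is refreshed
```

The schedule here is the simplest plausible one, strict alternation early and thermal-only supervision later, matching the abstract's "gradual transition from the visible to thermal domains".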
Related papers
- Direct Distillation between Different Domains [97.39470334253163]
We propose a new one-stage method dubbed "Direct Distillation between Different Domains" (4Ds).
We first design a learnable adapter based on the Fourier transform to separate the domain-invariant knowledge from the domain-specific knowledge.
We then build a fusion-activation mechanism to transfer the valuable domain-invariant knowledge to the student network (a sketch of the Fourier-based separation follows this entry).
arXiv Detail & Related papers (2024-01-12T02:48:51Z)
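The Fourier-based separation can be illustrated with a short sketch. Everything below (the `FourierAdapter` name, the sigmoid frequency gate, the residual split) is one reading of the general idea under stated assumptions, not the 4Ds implementation:

```python
import torch
import torch.nn as nn

class FourierAdapter(nn.Module):
    """Hypothetical adapter: a learnable mask over frequency bins splits a
    feature map into domain-invariant and domain-specific parts."""
    def __init__(self, channels: int, height: int, width: int):
        super().__init__()
        # One learnable gate per rFFT frequency bin, shared across the batch.
        self.gate = nn.Parameter(torch.zeros(channels, height, width // 2 + 1))

    def forward(self, feat: torch.Tensor):
        spec = torch.fft.rfft2(feat, norm="ortho")        # to frequency domain
        mask = torch.sigmoid(self.gate)                   # soft split in [0, 1]
        invariant = torch.fft.irfft2(spec * mask, s=feat.shape[-2:], norm="ortho")
        specific = feat - invariant                       # the residual part
        return invariant, specific

adapter = FourierAdapter(channels=8, height=16, width=16)
feat = torch.randn(2, 8, 16, 16)
invariant, specific = adapter(feat)
print(invariant.shape, specific.shape)  # torch.Size([2, 8, 16, 16]) twice
```

The entry's fusion-activation mechanism, which routes the invariant part to the student, is not sketched here.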
- Domain-Scalable Unpaired Image Translation via Latent Space Anchoring [88.7642967393508]
Unpaired image-to-image translation (UNIT) aims to map images between two visual domains without paired training data.
We propose a new domain-scalable UNIT method, termed as latent space anchoring.
Our method anchors images of different domains to the same latent space of frozen GANs by learning lightweight encoder and regressor models.
In the inference phase, the learned encoders and decoders of different domains can be arbitrarily combined to translate images between any two domains without fine-tuning (a minimal sketch follows this entry).
arXiv Detail & Related papers (2023-06-26T17:50:02Z)
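A minimal sketch of the anchoring idea, assuming a frozen pretrained generator and a lightweight trainable encoder per domain; the toy MLP generator and all shapes are placeholders rather than the paper's models:

```python
import torch
import torch.nn as nn

latent_dim = 64

# Toy stand-in for a frozen, pretrained GAN generator.
generator = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                          nn.Linear(256, 3 * 32 * 32))
generator.requires_grad_(False)

# Lightweight per-domain encoder that anchors images into the shared latent space.
encoder_a = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, latent_dim))
optimizer = torch.optim.Adam(encoder_a.parameters(), lr=1e-3)

images_a = torch.randn(4, 3, 32, 32)           # domain-A batch
latents = encoder_a(images_a)                  # anchor into the latent space
recon = generator(latents).view(4, 3, 32, 32)  # decode with the frozen GAN
loss = nn.functional.mse_loss(recon, images_a)
optimizer.zero_grad()
loss.backward()   # gradients reach only the encoder; the GAN stays frozen
optimizer.step()
```

At inference, the encoder trained for one domain can be paired with another domain's decoder head, which is the fine-tuning-free combinability the entry describes.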
- Meta-DMoE: Adapting to Domain Shift by Meta-Distillation from Mixture-of-Experts [33.21435044949033]
Most existing methods perform training on multiple source domains using a single model.
We propose a novel framework for unsupervised test-time adaptation, formulated as a knowledge distillation process (see the distillation sketch after this entry).
arXiv Detail & Related papers (2022-10-08T02:28:10Z)
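The test-time distillation loop can be sketched as follows. One hedge up front: Meta-DMoE meta-learns how expert knowledge is aggregated, whereas this sketch simply averages expert logits; all names and shapes are illustrative:

```python
import torch
import torch.nn as nn

# One pretrained expert per source domain (toy linear models here).
experts = [nn.Linear(16, 4) for _ in range(3)]
for expert in experts:
    expert.requires_grad_(False)  # experts stay fixed at test time

student = nn.Linear(16, 4)
optimizer = torch.optim.SGD(student.parameters(), lr=0.01)

unlabeled_batch = torch.randn(8, 16)  # unlabeled data from the shifted target

# Aggregate expert predictions (a plain mean; the paper meta-learns this step),
# then distill the aggregate into the student.
with torch.no_grad():
    teacher_logits = torch.stack([e(unlabeled_batch) for e in experts]).mean(0)

loss = nn.functional.kl_div(student(unlabeled_batch).log_softmax(-1),
                            teacher_logits.softmax(-1), reduction="batchmean")
optimizer.zero_grad()
loss.backward()
optimizer.step()
```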
- ML-BPM: Multi-teacher Learning with Bidirectional Photometric Mixing for Open Compound Domain Adaptation in Semantic Segmentation [78.19743899703052]
Open compound domain adaptation (OCDA) considers the target domain as a compound of multiple unknown homogeneous subdomains.
We introduce a multi-teacher framework with bidirectional photometric mixing to adapt to every target subdomain.
We conduct adaptive distillation to learn a student model and apply consistency regularization to improve the student's generalization (a distillation sketch follows this entry).
arXiv Detail & Related papers (2022-07-19T03:30:48Z)
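A compact sketch of multi-teacher distillation plus consistency regularization, assuming uniform teacher weighting and a generic photometric jitter in place of the paper's bidirectional photometric mixing and adaptive weighting:

```python
import torch
import torch.nn as nn

# One teacher per target subdomain (toy 1x1-conv segmentation heads).
teachers = [nn.Conv2d(3, 5, 1) for _ in range(3)]
student = nn.Conv2d(3, 5, 1)

images = torch.randn(2, 3, 32, 32)
jittered = images + 0.1 * torch.randn_like(images)  # photometric-style perturbation

with torch.no_grad():
    # Uniform average of teacher logits; ML-BPM adapts this weighting.
    teacher_logits = torch.stack([t(images) for t in teachers]).mean(0)

# Distillation: match the aggregated teachers on the clean view.
distill = nn.functional.mse_loss(student(images), teacher_logits)
# Consistency: the student should agree with itself across the perturbation.
consistency = nn.functional.mse_loss(student(jittered), student(images).detach())
(distill + 0.1 * consistency).backward()
```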
- Co-Teaching for Unsupervised Domain Adaptation and Expansion [12.455364571022576]
Unsupervised Domain Adaptation (UDA) essentially trades a model's performance on a source domain for improving its performance on a target domain.
Unsupervised Domain Expansion (UDE) tries to adapt the model for the target domain as UDA does, while maintaining its source-domain performance.
In both the UDA and UDE settings, a model tailored to a given domain, be it the source or the target, is assumed to handle samples from that domain well.
We exploit this finding and accordingly propose Co-Teaching (CT).
arXiv Detail & Related papers (2022-04-04T02:34:26Z)
- An Unsupervised Domain Adaptive Approach for Multimodal 2D Object Detection in Adverse Weather Conditions [5.217255784808035]
We propose an unsupervised domain adaptation framework to bridge the domain gap between source and target domains.
We use a data augmentation scheme that simulates weather distortions to add domain confusion and prevent overfitting on the source data (see the augmentation sketch after this entry).
Experiments performed on the DENSE dataset show that our method can substantially alleviate the domain gap.
arXiv Detail & Related papers (2022-03-07T18:10:40Z)
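A toy version of weather-distortion augmentation; the fog blend and speckle noise below are generic stand-ins for whatever distortion model the paper actually uses:

```python
import torch

def simulate_weather(images: torch.Tensor) -> torch.Tensor:
    """Apply a random fog blend and sparse speckle noise to a batch in [0, 1]."""
    # Fog: per-image blend toward white, a crude atmospheric-scattering model.
    fog = torch.rand(images.size(0), 1, 1, 1)
    foggy = (1 - fog) * images + fog
    # Rain/snow: sparse bright speckles over roughly 1% of pixels.
    speckle = (torch.rand_like(images) > 0.99).float()
    return (foggy + speckle).clamp(0.0, 1.0)

source_batch = torch.rand(4, 3, 64, 64)
augmented = simulate_weather(source_batch)  # train on both views for domain confusion
```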
- Student Become Decathlon Master in Retinal Vessel Segmentation via Dual-teacher Multi-target Domain Adaptation [1.121358474059223]
We propose RVms, a novel unsupervised multi-target domain adaptation approach to segment retinal vessels (RVs) from multimodal and multicenter retinal images.
RVms is found to be very close to the target-trained Oracle in terms of segmenting the RVs, largely outperforming other state-of-the-art methods.
arXiv Detail & Related papers (2022-03-07T02:20:14Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA efficiently reduces the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Dual-Teacher++: Exploiting Intra-domain and Inter-domain Knowledge with Reliable Transfer for Cardiac Segmentation [69.09432302497116]
We propose a cutting-edge semi-supervised domain adaptation framework, namely Dual-Teacher++.
We design novel dual teacher models, including an inter-domain teacher model to explore cross-modality priors from the source domain (e.g., MR) and an intra-domain teacher model to investigate the knowledge beneath the unlabeled target domain.
In this way, the student model can obtain reliable dual-domain knowledge and yield improved performance on target-domain data (a combination sketch follows this entry).
arXiv Detail & Related papers (2021-01-07T05:17:38Z)
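How the two teachers' pseudo-labels might be combined can be sketched as below; the confidence-based weighting is a placeholder for the paper's reliable-transfer gating, and all models are toy stand-ins:

```python
import torch
import torch.nn as nn

# Toy stand-ins: inter-domain teacher (source priors, e.g., MR) and
# intra-domain teacher (trained on unlabeled target data).
inter_teacher = nn.Conv2d(1, 2, 1)
intra_teacher = nn.Conv2d(1, 2, 1)
student = nn.Conv2d(1, 2, 1)

target_images = torch.randn(2, 1, 32, 32)

with torch.no_grad():
    inter_logits = inter_teacher(target_images)
    intra_logits = intra_teacher(target_images)
    # Placeholder reliability: per-pixel max softmax probability of each teacher.
    inter_conf = inter_logits.softmax(1).amax(1, keepdim=True)
    intra_conf = intra_logits.softmax(1).amax(1, keepdim=True)
    w = inter_conf / (inter_conf + intra_conf)
    combined = w * inter_logits + (1 - w) * intra_logits  # dual-domain target

loss = nn.functional.mse_loss(student(target_images), combined)
loss.backward()
```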
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore domain-wise convolutional channel activation for deep DA networks (an attention sketch follows this entry).
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
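Domain-conditioned channel attention can be read as an SE-style gate with a separate excitation branch per domain; the sketch below is that reading under stated assumptions, not DCAN's released code:

```python
import torch
import torch.nn as nn

class DomainConditionedAttention(nn.Module):
    """SE-style channel gate with one excitation branch per domain
    (0 = source, 1 = target)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        hidden = channels // reduction
        self.branches = nn.ModuleList([
            nn.Sequential(nn.Linear(channels, hidden), nn.ReLU(),
                          nn.Linear(hidden, channels), nn.Sigmoid())
            for _ in range(2)
        ])

    def forward(self, feat: torch.Tensor, domain: int) -> torch.Tensor:
        squeezed = feat.mean(dim=(2, 3))  # squeeze: global average pool per channel
        gates = self.branches[domain](squeezed)[:, :, None, None]
        return feat * gates               # excite domain-specific channels

attn = DomainConditionedAttention(channels=8)
feat = torch.randn(2, 8, 16, 16)
print(attn(feat, domain=0).shape)  # torch.Size([2, 8, 16, 16])
```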
This list is automatically generated from the titles and abstracts of the papers on this site.