Towards Online Domain Adaptive Object Detection
- URL: http://arxiv.org/abs/2204.05289v1
- Date: Mon, 11 Apr 2022 17:47:22 GMT
- Title: Towards Online Domain Adaptive Object Detection
- Authors: Vibashan VS, Poojan Oza and Vishal M. Patel
- Abstract summary: Existing object detection models assume both the training and test data are sampled from the same source domain.
We propose a novel unified adaptation framework that adapts and improves generalization on the target domain in online settings.
- Score: 79.89082006155135
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Existing object detection models assume both the training and test data are
sampled from the same source domain. This assumption does not hold true when
these detectors are deployed in real-world applications, where they encounter
new visual domains. Unsupervised Domain Adaptation (UDA) methods are generally
employed to mitigate the adverse effects caused by domain shift. Existing UDA
methods operate in an offline manner where the model is first adapted towards
the target domain and then deployed in real-world applications. However, this
offline adaptation strategy is not suitable for real-world applications as the
model frequently encounters new domain shifts. Hence, it becomes critical to
develop a feasible UDA method that generalizes to these domain shifts
encountered during deployment time in a continuous online manner. To this end,
we propose a novel unified adaptation framework that adapts and improves
generalization on the target domain in online settings. In particular, we
introduce MemXformer - a cross-attention transformer-based memory module where
items in the memory take advantage of domain shifts and record prototypical
patterns of the target distribution. Further, MemXformer produces strong
positive and negative pairs to guide a novel contrastive loss, which enhances
target specific representation learning. Experiments on diverse detection
benchmarks show that the proposed strategy can produce state-of-the-art
performance in both online and offline settings. To the best of our knowledge,
this is the first work to address online and offline adaptation settings for
object detection. Code at https://github.com/Vibashan/online-od
Related papers
- Progressive Conservative Adaptation for Evolving Target Domains [76.9274842289221]
Conventional domain adaptation typically transfers knowledge from a source domain to a stationary target domain.
Restoring and adapting to such target data results in escalating computational and resource consumption over time.
We propose a simple yet effective approach, termed progressive conservative adaptation (PCAda)
arXiv Detail & Related papers (2024-02-07T04:11:25Z)
- Unsupervised Domain Adaptation for Anatomical Landmark Detection [5.070344284426738]
We propose a novel framework for anatomical landmark detection under the setting of unsupervised domain adaptation (UDA)
The framework leverages self-training and domain adversarial learning to address the domain gap during adaptation.
Our experiments on cephalometric and lung landmark detection show the effectiveness of the method, which reduces the domain gap by a large margin and outperforms other UDA methods consistently.
arXiv Detail & Related papers (2023-08-25T10:22:13Z)
- ViDA: Homeostatic Visual Domain Adapter for Continual Test Time Adaptation [48.039156140237615]
A Continual Test-Time Adaptation task is proposed to adapt the pre-trained model to continually changing target domains.
We design a Visual Domain Adapter (ViDA) for CTTA, explicitly handling both domain-specific and domain-shared knowledge.
Our proposed method achieves state-of-the-art performance in both classification and segmentation CTTA tasks.
arXiv Detail & Related papers (2023-06-07T11:18:53Z)
- One-Shot Domain Adaptive and Generalizable Semantic Segmentation with Class-Aware Cross-Domain Transformers [96.51828911883456]
Unsupervised sim-to-real domain adaptation (UDA) for semantic segmentation aims to improve the real-world test performance of a model trained on simulated data.
Traditional UDA often assumes that there are abundant unlabeled real-world data samples available during training for the adaptation.
We explore the one-shot unsupervised sim-to-real domain adaptation (OSUDA) and generalization problem, where only one real-world data sample is available.
arXiv Detail & Related papers (2022-12-14T15:54:15Z)
- Instance Relation Graph Guided Source-Free Domain Adaptive Object Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model to the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z)
- Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z)
- Unsupervised Domain Adaptive 3D Detection with Multi-Level Consistency [90.71745178767203]
Deep learning-based 3D object detection has achieved unprecedented success with the advent of large-scale autonomous driving datasets.
Existing 3D domain adaptive detection methods often assume prior access to the target domain annotations, which is rarely feasible in the real world.
We study a more realistic setting, unsupervised 3D domain adaptive detection, which only utilizes source domain annotations.
arXiv Detail & Related papers (2021-07-23T17:19:23Z)
- Domain Adaptive YOLO for One-Stage Cross-Domain Detection [4.596221278839825]
Domain Adaptive YOLO (DA-YOLO) is proposed to improve cross-domain performance for one-stage detectors.
We evaluate our proposed method on popular datasets such as Cityscapes, KITTI, and SIM10K.
arXiv Detail & Related papers (2021-06-26T04:17:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.