Decoupled Adaptation for Cross-Domain Object Detection
- URL: http://arxiv.org/abs/2110.02578v1
- Date: Wed, 6 Oct 2021 08:43:59 GMT
- Title: Decoupled Adaptation for Cross-Domain Object Detection
- Authors: Junguang Jiang, Baixu Chen, Jianmin Wang, Mingsheng Long
- Abstract summary: Cross-domain object detection is more challenging than object classification.
D-adapt achieves state-of-the-art results on four cross-domain object detection tasks.
- Score: 69.5852335091519
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cross-domain object detection is more challenging than object classification
since multiple objects exist in an image and the location of each object is
unknown in the unlabeled target domain. As a result, when we adapt features of
different objects to enhance the transferability of the detector, the features
of the foreground and the background are easily confused, which may hurt
the discriminability of the detector. Moreover, previous methods focused on
category adaptation but ignored another important part for object detection,
i.e., the adaptation on bounding box regression. To this end, we propose
D-adapt, namely Decoupled Adaptation, to decouple the adversarial adaptation
and the training of the detector. We also fill the gap of regression
domain adaptation in object detection by introducing a bounding box adaptor.
Experiments show that D-adapt achieves state-of-the-art results on four
cross-domain object detection tasks, and in particular yields 17% and 21% relative
improvement on the benchmark datasets Clipart1k and Comic2k.
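To make the decoupling idea concrete, below is a minimal sketch of such a pipeline in PyTorch. It is not the authors' implementation: the module names (CategoryAdaptor, BBoxAdaptor), feature dimensions, and losses are illustrative assumptions. The key point it shows is that adversarial category alignment and bounding box refinement run on cropped proposals outside the detector, and only their pseudo labels feed back into detector training.

```python
# Hedged sketch of a decoupled adaptation pipeline; names and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients in the backward pass."""
    @staticmethod
    def forward(ctx, x, alpha):
        ctx.alpha = alpha
        return x.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.alpha * grad_output, None


class CategoryAdaptor(nn.Module):
    """Classifies cropped proposals and adversarially aligns their features,
    trained separately from the detector (the 'decoupled' part)."""
    def __init__(self, feat_dim=256, num_classes=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU())
        self.classifier = nn.Linear(256, num_classes)
        self.domain_disc = nn.Linear(256, 1)  # source vs. target discriminator

    def forward(self, proposal_feats, alpha=1.0):
        f = self.encoder(proposal_feats)
        cls_logits = self.classifier(f)
        dom_logits = self.domain_disc(GradReverse.apply(f, alpha))
        return cls_logits, dom_logits


class BBoxAdaptor(nn.Module):
    """Predicts box offsets for target-domain proposals (regression adaptation)."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.regressor = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(), nn.Linear(256, 4))  # (dx, dy, dw, dh)

    def forward(self, proposal_feats):
        return self.regressor(proposal_feats)


# Toy decoupled step: the adaptors only see proposal features cropped from detector
# outputs, and only their pseudo labels flow back to detector training, so the
# adversarial alignment never perturbs the detector's own features.
cat_adaptor, box_adaptor = CategoryAdaptor(), BBoxAdaptor()
src_feats = torch.randn(8, 256)  # features of labeled source proposals
tgt_feats = torch.randn(8, 256)  # features of unlabeled target proposals

cls_src, dom_src = cat_adaptor(src_feats)
cls_tgt, dom_tgt = cat_adaptor(tgt_feats)
domain_labels = torch.cat([torch.zeros(8), torch.ones(8)])  # source = 0, target = 1
domain_loss = F.binary_cross_entropy_with_logits(
    torch.cat([dom_src, dom_tgt]).squeeze(1), domain_labels)

pseudo_categories = cls_tgt.argmax(dim=1)    # category pseudo labels for target proposals
pseudo_box_offsets = box_adaptor(tgt_feats)  # refined box offsets for target proposals
# pseudo_categories / pseudo_box_offsets would then supervise the detector on target data.
```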
Related papers
- AcroFOD: An Adaptive Method for Cross-domain Few-shot Object Detection [59.10314662986463]
Cross-domain few-shot object detection aims to adapt object detectors to the target domain using only a few annotated target samples.
The proposed method achieves state-of-the-art performance on multiple benchmarks.
arXiv Detail & Related papers (2022-09-22T10:23:40Z)
- AWADA: Attention-Weighted Adversarial Domain Adaptation for Object Detection [0.0]
AWADA is an Attention-Weighted Adversarial Domain Adaptation framework that creates a feedback loop between style transformation and the detection task.
We show that AWADA reaches state-of-the-art unsupervised domain adaptation object detection performance in the commonly used benchmarks for tasks such as synthetic-to-real, adverse weather and cross-camera adaptation.
arXiv Detail & Related papers (2022-08-31T07:20:25Z)
- End-to-End Instance Edge Detection [29.650295133113183]
Edge detection has long been an important problem in the field of computer vision.
Previous works have explored category-agnostic or category-aware edge detection.
In this paper, we explore edge detection in the context of object instances.
arXiv Detail & Related papers (2022-04-06T15:32:21Z)
- Frequency Spectrum Augmentation Consistency for Domain Adaptive Object Detection [107.52026281057343]
We introduce a Frequency Spectrum Augmentation Consistency (FSAC) framework with four different low-frequency filter operations.
In the first stage, we utilize all the original and augmented source data to train an object detector.
In the second stage, augmented source and target data with pseudo labels are used for self-training to enforce prediction consistency.
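As one concrete example of what a low-frequency operation could look like, the snippet below swaps the low-frequency amplitude spectrum of a source image with that of a target image (an FDA-style operation); this is only an assumed illustration, and the actual filters used by FSAC may differ.

```python
# Hedged sketch of a low-frequency amplitude swap between a source and a target image.
import numpy as np


def low_freq_swap(src_img, tgt_img, beta=0.05):
    """Replace the central (low-frequency) amplitude of src with that of tgt."""
    src_fft = np.fft.fft2(src_img, axes=(0, 1))
    tgt_fft = np.fft.fft2(tgt_img, axes=(0, 1))
    src_amp, src_phase = np.abs(src_fft), np.angle(src_fft)
    tgt_amp = np.abs(tgt_fft)

    # Shift so low frequencies sit in the centre, then swap a small central square.
    src_amp = np.fft.fftshift(src_amp, axes=(0, 1))
    tgt_amp = np.fft.fftshift(tgt_amp, axes=(0, 1))
    h, w = src_img.shape[:2]
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    src_amp[ch - b:ch + b, cw - b:cw + b] = tgt_amp[ch - b:ch + b, cw - b:cw + b]
    src_amp = np.fft.ifftshift(src_amp, axes=(0, 1))

    # Recombine the swapped amplitude with the original phase and go back to image space.
    mixed = src_amp * np.exp(1j * src_phase)
    out = np.real(np.fft.ifft2(mixed, axes=(0, 1)))
    return np.clip(out, 0, 255).astype(np.uint8)


# Toy usage with random images standing in for source/target samples.
src = np.random.randint(0, 256, (256, 256, 3)).astype(np.float64)
tgt = np.random.randint(0, 256, (256, 256, 3)).astype(np.float64)
augmented = low_freq_swap(src, tgt)
```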
arXiv Detail & Related papers (2021-12-16T04:07:01Z)
- AdaCon: Adaptive Context-Aware Object Detection for Resource-Constrained Embedded Devices [2.5345835184316536]
Convolutional Neural Networks achieve state-of-the-art accuracy in object detection tasks.
They have large computational and energy requirements that challenge their deployment on resource-constrained edge devices.
In this paper, we leverage the prior knowledge about the probabilities that different object categories can occur jointly to increase the efficiency of object detection models.
Our experiments on the COCO dataset show that our adaptive object detection model achieves up to a 45% reduction in energy consumption and up to a 27% reduction in latency, with a small loss in average precision (AP).
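A rough sketch of how such co-occurrence priors could be used is given below: it estimates a category co-occurrence matrix from annotations and clusters frequently co-occurring categories, so that a lighter specialized branch could be selected per input. The function names, toy data, and the choice of spectral clustering (via scikit-learn) are illustrative assumptions, not AdaCon's actual method.

```python
# Hedged sketch: cluster object categories by how often they co-occur in images.
import numpy as np
from sklearn.cluster import SpectralClustering


def cooccurrence_matrix(image_labels, num_classes):
    """image_labels: list of sets of category ids present in each image."""
    co = np.zeros((num_classes, num_classes))
    for labels in image_labels:
        for a in labels:
            for b in labels:
                if a != b:
                    co[a, b] += 1
    return co


# Toy annotations: category ids present in each training image.
annotations = [{0, 1}, {0, 1, 2}, {3, 4}, {3, 4, 5}, {1, 2}, {4, 5}]
co = cooccurrence_matrix(annotations, num_classes=6)

# Group categories that co-occur often; each cluster could back a smaller,
# specialized detection branch selected at run time.
clusters = SpectralClustering(n_clusters=2, affinity="precomputed",
                              random_state=0).fit_predict(co + 1e-6)
print(clusters)  # e.g. [0 0 0 1 1 1]
```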
arXiv Detail & Related papers (2021-08-16T01:21:55Z)
- Unsupervised Domain Adaption of Object Detectors: A Survey [87.08473838767235]
Recent advances in deep learning have led to the development of accurate and efficient models for various computer vision applications.
Learning highly accurate models relies on the availability of datasets with a large number of annotated images.
As a result, model performance drops drastically when evaluated on label-scarce datasets with visually distinct images.
arXiv Detail & Related papers (2021-05-27T23:34:06Z)
- Multi-Target Domain Adaptation via Unsupervised Domain Classification for Weather Invariant Object Detection [1.773576418078547]
The performance of an object detector significantly degrades if the weather of the training images is different from that of test images.
We propose a novel unsupervised domain classification method which can be used to generalize single-target domain adaptation methods to multi-target domains.
We conduct experiments on the Cityscapes dataset and its synthetic variants, i.e., foggy, rainy, and night.
arXiv Detail & Related papers (2021-03-25T16:59:35Z)
- Slender Object Detection: Diagnoses and Improvements [74.40792217534]
In this paper, we are concerned with the detection of a particular type of objects with extreme aspect ratios, namely slender objects.
For a classical object detection method, a drastic drop of 18.9% mAP on COCO is observed, if solely evaluated on slender objects.
arXiv Detail & Related papers (2020-11-17T09:39:42Z)
- Bi-Dimensional Feature Alignment for Cross-Domain Object Detection [71.85594342357815]
We propose a novel unsupervised cross-domain detection model.
It exploits the annotated data in a source domain to train an object detector for a different target domain.
The proposed model mitigates the cross-domain representation divergence for object detection.
arXiv Detail & Related papers (2020-11-14T03:03:11Z)