Source-free Domain Adaptive Object Detection in Remote Sensing Images
- URL: http://arxiv.org/abs/2401.17916v1
- Date: Wed, 31 Jan 2024 15:32:44 GMT
- Title: Source-free Domain Adaptive Object Detection in Remote Sensing Images
- Authors: Weixing Liu, Jun Liu, Xin Su, Han Nie, Bin Luo
- Abstract summary: We propose a source-free object detection (SFOD) setting for RS images.
It aims to perform target domain adaptation using only the source pre-trained model.
Our method does not require access to source domain RS images.
- Score: 11.19538606490404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent studies have used unsupervised domain adaptive object detection
(UDAOD) methods to bridge the domain gap in remote sensing (RS) images.
However, UDAOD methods typically assume that the source domain data can be
accessed during the domain adaptation process. This setting is often
impractical in the real world due to RS data privacy and transmission
difficulty. To address this challenge, we propose a practical source-free
object detection (SFOD) setting for RS images, which aims to perform target
domain adaptation using only the source pre-trained model. We propose a new
SFOD method for RS images consisting of two parts: perturbed domain generation
and alignment. The proposed multilevel perturbation constructs the perturbed
domain in a simple yet efficient form by perturbing the domain-variant features
at the image level and feature level according to the color and style bias. The
proposed multilevel alignment calculates feature and label consistency between
the perturbed domain and the target domain across the teacher-student network,
and introduces feature prototype distillation to mitigate the noise of
pseudo-labels. By requiring the detector to be consistent in the perturbed
domain and the target domain, the detector is forced to focus on
domain-invariant features. Extensive results of three synthetic-to-real
experiments and three cross-sensor experiments have validated the effectiveness
of our method, which does not require access to source domain RS images.
Furthermore, experiments on computer vision datasets show that our method can
be extended to other fields as well. Our code will be available at:
https://weixliu.github.io/ .
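As a rough illustration of the image-level half of this idea, the following PyTorch-style sketch pairs a simple color/brightness perturbation with an EMA teacher update and a feature-consistency loss. The function names (perturb_image, ema_update, consistency_loss) and the specific jitter are illustrative assumptions, not the authors' released code.
```python
# Hedged sketch: image-level perturbation plus teacher-student consistency,
# in the spirit of the "perturbed domain" idea (not the authors' implementation).
import torch
import torch.nn.functional as F

def perturb_image(img):
    """Illustrative color/style perturbation of a (B, 3, H, W) tensor in [0, 1]."""
    brightness = 1.0 + 0.2 * (torch.rand(img.size(0), 1, 1, 1) - 0.5)  # per-image gain
    channel_bias = 0.05 * (torch.rand(img.size(0), 3, 1, 1) - 0.5)     # color-cast bias
    return (img * brightness + channel_bias).clamp(0.0, 1.0)

@torch.no_grad()
def ema_update(teacher, student, momentum=0.999):
    """Exponential moving average of student weights into the teacher."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)

def consistency_loss(student_feat, teacher_feat):
    """Feature consistency between the perturbed (student) and target (teacher) views."""
    return F.mse_loss(student_feat, teacher_feat.detach())
```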
Related papers
- FIT: Frequency-based Image Translation for Domain Adaptive Object Detection [8.635264598464355]
We propose a novel Frequency-based Image Translation (FIT) framework for domain adaptive object detection (DAOD).
First, by keeping domain-invariant frequency components and swapping domain-specific ones, we conduct image translation to reduce domain shift at the input level.
Second, hierarchical adversarial feature learning is utilized to further mitigate the domain gap at the feature level.
arXiv Detail & Related papers (2023-03-07T07:30:08Z)
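The FIT entry above hinges on keeping domain-invariant frequency components while swapping domain-specific ones; a common concrete realization of that idea is a low-frequency amplitude swap in the Fourier domain. The NumPy sketch below shows that generic recipe (the band size beta is an assumed hyperparameter), not FIT's actual translation module.
```python
# Illustrative low-frequency amplitude swap between a source and a target image
# (a generic stand-in for "swapping domain-specific frequency components").
import numpy as np

def low_freq_swap(src, tgt, beta=0.05):
    """src, tgt: float arrays of shape (H, W, C); beta sets the swapped band size."""
    fft_src = np.fft.fftshift(np.fft.fft2(src, axes=(0, 1)), axes=(0, 1))
    fft_tgt = np.fft.fftshift(np.fft.fft2(tgt, axes=(0, 1)), axes=(0, 1))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_tgt = np.abs(fft_tgt)

    h, w = src.shape[:2]
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    # Replace the low-frequency amplitude (style) while keeping the phase (content).
    amp_src[ch - b:ch + b, cw - b:cw + b] = amp_tgt[ch - b:ch + b, cw - b:cw + b]

    fft_mix = amp_src * np.exp(1j * pha_src)
    out = np.fft.ifft2(np.fft.ifftshift(fft_mix, axes=(0, 1)), axes=(0, 1))
    return np.real(out)
```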
- Cross-Modality Domain Adaptation for Freespace Detection: A Simple yet Effective Baseline [21.197212665408262]
Freespace detection aims at classifying each pixel of the image captured by the camera as drivable or non-drivable.
We develop a cross-modality domain adaptation framework which exploits both RGB images and surface normal maps generated from depth images.
To better bridge the domain gap between source domain (synthetic data) and target domain (real-world data), we also propose a Selective Feature Alignment (SFA) module.
arXiv Detail & Related papers (2022-10-06T15:31:49Z)
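The freespace-detection entry above relies on surface normal maps generated from depth images. One simple, generic way to obtain such a map is from local depth gradients, as sketched below; this is an illustrative approximation, not the paper's normal-generation pipeline.
```python
# Illustrative surface-normal map from a depth image via local depth gradients
# (a generic approximation, not the paper's actual normal-generation step).
import numpy as np

def depth_to_normals(depth):
    """depth: (H, W) float array; returns unit normals of shape (H, W, 3)."""
    dz_dv, dz_du = np.gradient(depth)                      # row-wise and column-wise gradients
    normals = np.dstack((-dz_du, -dz_dv, np.ones_like(depth)))
    norm = np.linalg.norm(normals, axis=2, keepdims=True)
    return normals / np.clip(norm, 1e-6, None)             # normalize to unit length
```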
- Instance Relation Graph Guided Source-Free Domain Adaptive Object Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model for the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z)
- An Unsupervised Domain Adaptive Approach for Multimodal 2D Object Detection in Adverse Weather Conditions [5.217255784808035]
We propose an unsupervised domain adaptation framework to bridge the domain gap between source and target domains.
We use a data augmentation scheme that simulates weather distortions to add domain confusion and prevent overfitting on the source data.
Experiments performed on the DENSE dataset show that our method can substantially alleviate the domain gap.
arXiv Detail & Related papers (2022-03-07T18:10:40Z)
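For the adverse-weather entry above, a typical way to simulate weather distortions for augmentation is a fog model based on atmospheric scattering. The snippet below is a minimal, assumed version of such an augmentation (the beta and airlight parameters and the depth fallback are illustrative), not the paper's own scheme.
```python
# Illustrative fog-style distortion using a simple atmospheric scattering model
# (an assumed stand-in for weather-simulation augmentation, not the paper's code).
import numpy as np

def add_fog(img, depth=None, beta=1.0, airlight=0.9):
    """img: (H, W, 3) float array in [0, 1]; depth: optional (H, W) relative depth."""
    if depth is None:
        # Without depth, fall back to a vertical gradient as a crude depth proxy.
        h, w = img.shape[:2]
        depth = np.tile(np.linspace(1.0, 0.2, h)[:, None], (1, w))
    t = np.exp(-beta * depth)[..., None]          # transmission map
    return img * t + airlight * (1.0 - t)         # scattering model: J*t + A*(1 - t)
```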
- Decompose to Adapt: Cross-domain Object Detection via Feature Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
By outperforming state-of-the-art methods on four benchmark UDA object detection tasks, our DDF method is demonstrated to be effective with wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z)
- Frequency Spectrum Augmentation Consistency for Domain Adaptive Object Detection [107.52026281057343]
We introduce a Frequency Spectrum Augmentation Consistency (FSAC) framework with four different low-frequency filter operations.
In the first stage, we utilize all the original and augmented source data to train an object detector.
In the second stage, augmented source and target data with pseudo labels are adopted to perform the self-training for prediction consistency.
arXiv Detail & Related papers (2021-12-16T04:07:01Z)
- Unsupervised Domain Adaptive 3D Detection with Multi-Level Consistency [90.71745178767203]
Deep learning-based 3D object detection has achieved unprecedented success with the advent of large-scale autonomous driving datasets.
Existing 3D domain adaptive detection methods often assume prior access to the target domain annotations, which is rarely feasible in the real world.
We study a more realistic setting, unsupervised 3D domain adaptive detection, which only utilizes source domain annotations.
arXiv Detail & Related papers (2021-07-23T17:19:23Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
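The domain-adversarial training mentioned in the AFAN entry above is usually built from a gradient reversal layer feeding a small domain classifier. The PyTorch sketch below shows that standard building block, not AFAN's specific architecture.
```python
# Illustrative gradient-reversal domain classifier, the usual building block for
# domain-adversarial training (not the AFAN implementation itself).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back into the feature extractor.
        return -ctx.lambd * grad_output, None

class DomainClassifier(nn.Module):
    """Predicts source vs. target from backbone features; gradients are reversed."""
    def __init__(self, in_dim=256, lambd=1.0):
        super().__init__()
        self.lambd = lambd
        self.head = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, feat):
        feat = GradReverse.apply(feat, self.lambd)
        return self.head(feat)
```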
- Multi-Target Domain Adaptation via Unsupervised Domain Classification for Weather Invariant Object Detection [1.773576418078547]
The performance of an object detector significantly degrades if the weather of the training images is different from that of test images.
We propose a novel unsupervised domain classification method which can be used to generalize single-target domain adaptation methods to multi-target domains.
We conduct experiments on the Cityscapes dataset and its synthetic variants, i.e., foggy, rainy, and night.
arXiv Detail & Related papers (2021-03-25T16:59:35Z)
- Domain Contrast for Domain Adaptive Object Detection [79.77438747622135]
Domain Contrast (DC) is an approach inspired by contrastive learning for training domain adaptive detectors.
DC can be applied at either image level or region level, consistently improving detectors' transferability and discriminability.
arXiv Detail & Related papers (2020-06-26T08:45:36Z)
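For the Domain Contrast entry above, a contrastive objective of this kind is often written as an InfoNCE loss over paired feature batches (at image or region level); the sketch below is that generic formulation, with an assumed temperature of 0.07, not the paper's exact loss.
```python
# Illustrative InfoNCE-style contrastive loss over paired feature batches,
# a generic stand-in for contrastive domain-adaptation objectives.
import torch
import torch.nn.functional as F

def info_nce(query, positive, temperature=0.07):
    """query, positive: (N, D) feature batches; row i of each forms a positive pair."""
    q = F.normalize(query, dim=1)
    p = F.normalize(positive, dim=1)
    logits = q @ p.t() / temperature                      # (N, N) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)     # diagonal entries are positives
    return F.cross_entropy(logits, labels)
```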