iFAN: Image-Instance Full Alignment Networks for Adaptive Object
Detection
- URL: http://arxiv.org/abs/2003.04132v1
- Date: Mon, 9 Mar 2020 13:27:06 GMT
- Title: iFAN: Image-Instance Full Alignment Networks for Adaptive Object
Detection
- Authors: Chenfan Zhuang, Xintong Han, Weilin Huang, Matthew R. Scott
- Abstract summary: iFAN aims to precisely align feature distributions on both image and instance levels.
It outperforms state-of-the-art methods with a boost of 10%+ AP over the source-only baseline.
- Score: 48.83883375118966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training an object detector on a data-rich domain and applying it to a
data-poor one with limited performance drop is highly attractive in industry,
because it saves huge annotation cost. Recent research on unsupervised domain
adaptive object detection has verified that aligning data distributions between
source and target images through adversarial learning is very useful. The key
questions are when, where, and how to apply it for best results. We propose
Image-Instance Full Alignment Networks (iFAN) to tackle this problem by
precisely aligning feature distributions on both image and instance levels: 1)
Image-level alignment: multi-scale features are roughly aligned by training
adversarial domain classifiers in a hierarchically-nested fashion. 2) Full
instance-level alignment: deep semantic information and elaborate instance
representations are fully exploited to establish a strong relationship among
categories and domains. Establishing these correlations is formulated as a
metric learning problem by carefully constructing instance pairs.
The above adaptations can be integrated into an object detector (e.g.
Faster RCNN), resulting in an end-to-end trainable framework where multiple
alignments work collaboratively in a coarse-to-fine manner. In two domain
adaptation tasks: synthetic-to-real (SIM10K->Cityscapes) and normal-to-foggy
weather (Cityscapes->Foggy Cityscapes), iFAN outperforms the state-of-the-art
methods with a boost of 10%+ AP over the source-only baseline.
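The full instance-level alignment described above formulates cross-domain correlation as a metric learning problem over carefully constructed instance pairs. A minimal numpy sketch of that pair-based idea is below, assuming a simple contrastive formulation: same-category pairs (including cross-domain ones) are pulled together and different-category pairs are pushed apart by a margin. The function name, the plain contrastive loss, and the exhaustive pairing are illustrative stand-ins; the actual iFAN loss and pair-construction strategy differ in detail.

```python
import numpy as np

def contrastive_instance_loss(feats, cats, margin=1.0):
    """Illustrative sketch (not the exact iFAN loss): build all instance
    pairs, minimize distance for same-category pairs and apply a hinge
    at `margin` for different-category pairs, averaged over pairs."""
    n = len(feats)
    total, count = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(feats[i] - feats[j])
            if cats[i] == cats[j]:
                # positive pair: penalize squared distance
                total += d ** 2
            else:
                # negative pair: penalize only if closer than the margin
                total += max(0.0, margin - d) ** 2
            count += 1
    return total / count
```

In the full framework this term would be computed on RoI features from both domains, so that instances of the same category are aligned across domains while categories stay separated.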
Related papers
- DATR: Unsupervised Domain Adaptive Detection Transformer with Dataset-Level Adaptation and Prototypical Alignment [7.768332621617199]
We introduce a strong DETR-based detector named Domain Adaptive detection TRansformer (DATR) for unsupervised domain adaptation of object detection.
Our proposed DATR incorporates a mean-teacher based self-training framework, utilizing pseudo-labels generated by the teacher model to further mitigate domain bias.
Experiments demonstrate superior performance and generalization capabilities of our proposed DATR in multiple domain adaptation scenarios.
arXiv Detail & Related papers (2024-05-20T03:48:45Z) - Improving Human-Object Interaction Detection via Virtual Image Learning [68.56682347374422]
Human-Object Interaction (HOI) detection aims to understand the interactions between humans and objects.
In this paper, we propose to alleviate the impact of such an unbalanced distribution via Virtual Image Learning (VIL).
A novel label-to-image approach, Multiple Steps Image Creation (MUSIC), is proposed to create a high-quality dataset that has a consistent distribution with real images.
arXiv Detail & Related papers (2023-08-04T10:28:48Z) - Seeking Similarities over Differences: Similarity-based Domain Alignment
for Adaptive Object Detection [86.98573522894961]
We propose a framework that generalizes the components commonly used by Unsupervised Domain Adaptation (UDA) algorithms for detection.
Specifically, we propose a novel UDA algorithm, ViSGA, that leverages the best design choices and introduces a simple but effective method to aggregate features at instance-level.
We show that both similarity-based grouping and adversarial training allow our model to focus on coarsely aligning feature groups, without being forced to match all instances across loosely aligned domains.
arXiv Detail & Related papers (2021-10-04T13:09:56Z) - AFAN: Augmented Feature Alignment Network for Cross-Domain Object
Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z) - PixMatch: Unsupervised Domain Adaptation via Pixelwise Consistency
Training [4.336877104987131]
Unsupervised domain adaptation is a promising technique for semantic segmentation.
We present a novel framework for unsupervised domain adaptation based on the notion of target-domain consistency training.
Our approach is simpler, easier to implement, and more memory-efficient during training.
arXiv Detail & Related papers (2021-05-17T19:36:28Z) - Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining
and Consistency [93.89773386634717]
Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain.
We show that in the presence of a few target labels, simple techniques like self-supervision (via rotation prediction) and consistency regularization can be effective without any adversarial alignment to learn a good target classifier.
Our Pretraining and Consistency (PAC) approach can achieve state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets.
arXiv Detail & Related papers (2021-01-29T18:40:17Z) - Cross-domain Detection via Graph-induced Prototype Alignment [114.8952035552862]
We propose a Graph-induced Prototype Alignment (GPA) framework to seek category-level domain alignment.
In addition, in order to alleviate the negative effect of class-imbalance on domain adaptation, we design a Class-reweighted Contrastive Loss.
Our approach outperforms existing methods by a remarkable margin.
arXiv Detail & Related papers (2020-03-28T17:46:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.