Attention-based Domain Adaptation for Single Stage Detectors
- URL: http://arxiv.org/abs/2106.07283v1
- Date: Mon, 14 Jun 2021 10:30:44 GMT
- Title: Attention-based Domain Adaptation for Single Stage Detectors
- Authors: Vidit and Mathieu Salzmann
- Abstract summary: We introduce an attention mechanism that lets us identify the important regions on which adaptation should focus.
Our approach is generic and can be integrated into any single-stage detector.
For an equivalent single-stage architecture, our method outperforms the state-of-the-art domain adaptation technique.
- Score: 75.88557558238841
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While domain adaptation has been used to improve the performance of object
detectors when the training and test data follow different distributions,
previous work has mostly focused on two-stage detectors. This is because their
use of region proposals makes it possible to perform local adaptation, which
has been shown to significantly improve the adaptation effectiveness. Here, by
contrast, we target single-stage architectures, which are better suited to
resource-constrained detection than two-stage ones but do not provide region
proposals. To nonetheless benefit from the strength of local adaptation, we
introduce an attention mechanism that lets us identify the important regions on
which adaptation should focus. Our approach is generic and can be integrated
into any single-stage detector. We demonstrate this on standard benchmark
datasets by applying it to both SSD and YOLO. Furthermore, for an equivalent
single-stage architecture, our method outperforms the state-of-the-art domain
adaptation technique even though it was designed specifically for this
particular detector.
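The abstract does not give the exact loss, so the following is only a toy NumPy sketch of the general idea: use an attention map over feature-map locations so that high-attention regions dominate a source/target feature-alignment term, mimicking the local adaptation that region proposals give two-stage detectors. All names, shapes, and the squared-L2 alignment term are illustrative assumptions, not the paper's method.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_weighted_alignment(src_feats, tgt_feats, src_scores, tgt_scores):
    """Toy attention-weighted feature-alignment loss.

    src_feats/tgt_feats: (N, C) per-location features from a single-stage
    detector's feature map; src_scores/tgt_scores: (N,) attention logits
    (e.g. objectness-like scores). Locations with high attention dominate
    the alignment between the two domains.
    """
    a_src = softmax(src_scores)                        # attention over source locations
    a_tgt = softmax(tgt_scores)                        # attention over target locations
    mu_src = (a_src[:, None] * src_feats).sum(axis=0)  # attended source feature
    mu_tgt = (a_tgt[:, None] * tgt_feats).sum(axis=0)  # attended target feature
    return float(np.square(mu_src - mu_tgt).sum())     # squared L2 gap to minimize
```

In practice the alignment term would typically be a domain-adversarial discriminator rather than this simple squared distance, but the attention weighting plays the same role either way.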
Related papers
- Improving Single Domain-Generalized Object Detection: A Focus on Diversification and Alignment [17.485775402656127]
A base detector can outperform existing methods for single domain generalization by a good margin.
We introduce a method to align detections from multiple views, considering both classification and localization outputs.
Our approach is detector-agnostic and can be seamlessly applied to both single-stage and two-stage detectors.
arXiv Detail & Related papers (2024-05-23T12:29:25Z)
- Point-Level Region Contrast for Object Detection Pre-Training [147.47349344401806]
We present point-level region contrast, a self-supervised pre-training approach for the task of object detection.
Our approach performs contrastive learning by directly sampling individual point pairs from different regions.
Compared to an aggregated representation per region, our approach is more robust to the change in input region quality.
arXiv Detail & Related papers (2022-02-09T18:56:41Z)
- Domain Adaptive Semantic Segmentation with Regional Contrastive Consistency Regularization [19.279884432843822]
We propose a novel and fully end-to-end trainable approach, called regional contrastive consistency regularization (RCCR) for domain adaptive semantic segmentation.
Our core idea is to pull the similar regional features extracted from the same location of different images to be closer, and meanwhile push the features from the different locations of the two images to be separated.
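The pull/push idea above can be sketched as a region-level InfoNCE objective: features from the same location of two views attract, features from different locations repel. This is a minimal NumPy illustration under assumed shapes and an assumed temperature, not RCCR's actual implementation.

```python
import numpy as np

def regional_contrastive_loss(feats_a, feats_b, tau=0.1):
    """Toy region-level InfoNCE loss.

    feats_a, feats_b: (N, C) L2-normalized regional features from the same N
    locations of two views of an image. Location i in feats_a should match
    location i in feats_b (positive pair) and repel all other locations.
    """
    sim = feats_a @ feats_b.T / tau                    # (N, N) similarity logits
    sim = sim - sim.max(axis=1, keepdims=True)         # stabilize the softmax
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))          # NLL of the matching location
```

Perfectly aligned features yield a near-zero loss, while mismatched locations are penalized heavily, which is exactly the pull-together/push-apart behavior described above.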
arXiv Detail & Related papers (2021-10-11T11:45:00Z)
- Domain Adaptive YOLO for One-Stage Cross-Domain Detection [4.596221278839825]
Domain Adaptive YOLO (DA-YOLO) is proposed to improve cross-domain performance for one-stage detectors.
We evaluate our proposed method on popular datasets such as Cityscapes, KITTI, and SIM10K.
arXiv Detail & Related papers (2021-06-26T04:17:42Z)
- Enhancing Object Detection for Autonomous Driving by Optimizing Anchor Generation and Addressing Class Imbalance [0.0]
This study presents an enhanced 2D object detector based on Faster R-CNN that is better suited for the context of autonomous vehicles.
The proposed modifications over the Faster R-CNN do not increase computational cost and can easily be extended to optimize other anchor-based detection frameworks.
arXiv Detail & Related papers (2021-04-08T16:58:31Z)
- On Evolving Attention Towards Domain Adaptation [110.57454902557767]
This paper proposes EvoADA: a novel framework to evolve the attention configuration for a given UDA task without human intervention.
Experiments on various kinds of cross-domain benchmarks, i.e., Office-31, Office-Home, CUB-Paintings, and Duke-Market-1501, reveal that the proposed EvoADA consistently boosts multiple state-of-the-art domain adaptation approaches.
arXiv Detail & Related papers (2021-03-25T01:50:28Z)
- Unsupervised Domain Adaptation for Spatio-Temporal Action Localization [69.12982544509427]
Spatio-temporal action localization is an important problem in computer vision.
We propose an end-to-end unsupervised domain adaptation algorithm.
We show that significant performance gain can be achieved when spatial and temporal features are adapted separately or jointly.
arXiv Detail & Related papers (2020-10-19T04:25:10Z)
- Collaborative Training between Region Proposal Localization and Classification for Domain Adaptive Object Detection [121.28769542994664]
Domain adaptation for object detection tries to adapt the detector from labeled datasets to unlabeled ones for better performance.
In this paper, we are the first to reveal that the region proposal network (RPN) and region proposal classifier (RPC) demonstrate significantly different transferability when facing a large domain gap.
arXiv Detail & Related papers (2020-09-17T07:39:52Z)
- Cross-domain Object Detection through Coarse-to-Fine Feature Adaptation [62.29076080124199]
This paper proposes a novel coarse-to-fine feature adaptation approach to cross-domain object detection.
At the coarse-grained stage, foreground regions are extracted by adopting the attention mechanism, and aligned according to their marginal distributions.
At the fine-grained stage, we conduct conditional distribution alignment of foregrounds by minimizing the distance of global prototypes with the same category but from different domains.
arXiv Detail & Related papers (2020-03-23T13:40:06Z)
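The fine-grained step of the coarse-to-fine entry above, aligning same-category prototypes across domains, can be sketched as follows. This is a hedged NumPy illustration: the per-class mean prototype, the plain L2 distance, and the assumption that every class appears in both domains are all simplifications, not the paper's exact formulation.

```python
import numpy as np

def prototype_alignment_loss(src_feats, src_labels, tgt_feats, tgt_labels, n_classes):
    """Toy class-prototype alignment across domains.

    src_feats/tgt_feats: (N, C) foreground features; src_labels/tgt_labels:
    (N,) integer class labels. Assumes every class 0..n_classes-1 has at
    least one sample in each domain. The loss sums, per class, the distance
    between the source and target class prototypes (mean features).
    """
    loss = 0.0
    for c in range(n_classes):
        p_src = src_feats[src_labels == c].mean(axis=0)  # source prototype of class c
        p_tgt = tgt_feats[tgt_labels == c].mean(axis=0)  # target prototype of class c
        loss += float(np.linalg.norm(p_src - p_tgt))     # pull prototypes together
    return loss
```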
This list is automatically generated from the titles and abstracts of the papers in this site.