AWADA: Attention-Weighted Adversarial Domain Adaptation for Object Detection
- URL: http://arxiv.org/abs/2208.14662v1
- Date: Wed, 31 Aug 2022 07:20:25 GMT
- Title: AWADA: Attention-Weighted Adversarial Domain Adaptation for Object Detection
- Authors: Maximilian Menke, Thomas Wenzel, Andreas Schwung
- Abstract summary: AWADA is an Attention-Weighted Adversarial Domain Adaptation framework that creates a feedback loop between the style-transformation and detection tasks.
We show that AWADA reaches state-of-the-art unsupervised domain adaptation object detection performance in the commonly used benchmarks for tasks such as synthetic-to-real, adverse weather and cross-camera adaptation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Object detection networks have reached an impressive performance level, yet a
lack of suitable data in specific applications often limits it in practice.
Typically, additional data sources are utilized to support the training task.
In these, however, domain gaps between different data sources pose a challenge
in deep learning. GAN-based image-to-image style-transfer is commonly applied
to shrink the domain gap, but is unstable and decoupled from the object
detection task. We propose AWADA, an Attention-Weighted Adversarial Domain
Adaptation framework for creating a feedback loop between style-transformation
and detection task. By constructing foreground object attention maps from
object detector proposals, we focus the transformation on foreground object
regions and stabilize style-transfer training. In extensive experiments and
ablation studies, we show that AWADA reaches state-of-the-art unsupervised
domain adaptation object detection performance in the commonly used benchmarks
for tasks such as synthetic-to-real, adverse weather and cross-camera
adaptation.
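The abstract describes building foreground attention maps from object detector proposals and using them to focus (i.e., re-weight) the adversarial style-transfer training on object regions. A minimal NumPy sketch of that idea follows; the function names, the score-weighted box accumulation, and the per-pixel discriminator loss are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def proposal_attention_map(boxes, scores, h, w):
    """Accumulate detector proposals into a foreground attention map.

    boxes:  iterable of (x1, y1, x2, y2) proposal boxes in pixel coordinates
    scores: per-proposal objectness scores used as weights (assumption)
    Returns an (h, w) map normalized to [0, 1].
    """
    att = np.zeros((h, w), dtype=np.float64)
    for (x1, y1, x2, y2), s in zip(boxes, scores):
        att[y1:y2, x1:x2] += s          # paint each proposal, weighted by score
    if att.max() > 0:
        att /= att.max()                # strongest foreground region -> weight 1
    return att

def attention_weighted_adv_loss(disc_logits, domain_label, att):
    """Per-pixel binary cross-entropy of a patch discriminator,
    re-weighted by the foreground attention map (hypothetical loss shape)."""
    p = 1.0 / (1.0 + np.exp(-disc_logits))                    # sigmoid
    bce = -(domain_label * np.log(p + 1e-8)
            + (1 - domain_label) * np.log(1 - p + 1e-8))
    # Weighted average: pixels inside proposals dominate the adversarial signal.
    return float((att * bce).sum() / (att.sum() + 1e-8))
```

In this sketch, background pixels with zero attention contribute nothing to the adversarial loss, which is one way a proposal-driven feedback loop could steer style transfer toward object regions.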
Related papers
- Domain Generalization of 3D Object Detection by Density-Resampling [14.510085711178217]
Point-cloud-based 3D object detection suffers from performance degradation when encountering data with novel domain gaps.
We propose an SDG method to improve the generalizability of 3D object detection to unseen target domains.
Our work introduces a novel data augmentation method and contributes a new multi-task learning strategy in the methodology.
arXiv Detail & Related papers (2023-11-17T20:01:29Z)
- Progressive Domain Adaptation with Contrastive Learning for Object Detection in the Satellite Imagery [0.0]
State-of-the-art object detection methods largely fail to identify small and dense objects.
We propose a small object detection pipeline that improves the feature extraction process.
We show we can alleviate the degradation of object identification in previously unseen datasets.
arXiv Detail & Related papers (2022-09-06T15:16:35Z)
- Instance Relation Graph Guided Source-Free Domain Adaptive Object Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model for the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z)
- Decoupled Adaptation for Cross-Domain Object Detection [69.5852335091519]
Cross-domain object detection is more challenging than object classification.
D-adapt achieves state-of-the-art results on four cross-domain object detection tasks.
arXiv Detail & Related papers (2021-10-06T08:43:59Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Unsupervised Domain Adaption of Object Detectors: A Survey [87.08473838767235]
Recent advances in deep learning have led to the development of accurate and efficient models for various computer vision applications.
Learning highly accurate models relies on the availability of datasets with a large number of annotated images.
Due to this, model performance drops drastically when evaluated on label-scarce datasets having visually distinct images.
arXiv Detail & Related papers (2021-05-27T23:34:06Z)
- Robust Object Detection via Instance-Level Temporal Cycle Confusion [89.1027433760578]
We study the effectiveness of auxiliary self-supervised tasks to improve the out-of-distribution generalization of object detectors.
Inspired by the principle of maximum entropy, we introduce a novel self-supervised task, instance-level temporal cycle confusion (CycConf).
For each object, the task is to find the most different object proposals in the adjacent frame in a video and then cycle back to itself for self-supervision.
arXiv Detail & Related papers (2021-04-16T21:35:08Z)
- Multi-Target Domain Adaptation via Unsupervised Domain Classification for Weather Invariant Object Detection [1.773576418078547]
The performance of an object detector significantly degrades if the weather of the training images is different from that of test images.
We propose a novel unsupervised domain classification method which can be used to generalize single-target domain adaptation methods to multi-target domains.
We conduct experiments on the Cityscapes dataset and its synthetic variants, i.e. foggy, rainy, and night.
arXiv Detail & Related papers (2021-03-25T16:59:35Z)
- Unsupervised Domain Adaptation for Spatio-Temporal Action Localization [69.12982544509427]
Spatio-temporal action localization is an important problem in computer vision.
We propose an end-to-end unsupervised domain adaptation algorithm.
We show that significant performance gain can be achieved when spatial and temporal features are adapted separately or jointly.
arXiv Detail & Related papers (2020-10-19T04:25:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.