Dual-Curriculum Teacher for Domain-Inconsistent Object Detection in
Autonomous Driving
- URL: http://arxiv.org/abs/2210.08748v1
- Date: Mon, 17 Oct 2022 05:00:27 GMT
- Title: Dual-Curriculum Teacher for Domain-Inconsistent Object Detection in
Autonomous Driving
- Authors: Longhui Yu, Yifan Zhang, Lanqing Hong, Fei Chen, Zhenguo Li
- Abstract summary: In autonomous driving, data are usually collected from different scenarios, such as different weather conditions or different times of day.
It involves two kinds of distribution shifts among different domains, including (1) data distribution discrepancy, and (2) class distribution shifts.
We propose Dual-Curriculum Teacher (DucTeacher) to address this problem.
- Score: 43.573192013344055
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Object detection for autonomous vehicles has received increasing attention in
recent years, where labeled data are often expensive while unlabeled data can
be collected readily, calling for research on semi-supervised learning for this
area. Existing semi-supervised object detection (SSOD) methods usually assume
that the labeled and unlabeled data come from the same data distribution. In
autonomous driving, however, data are usually collected from different
scenarios, such as different weather conditions or different times of day.
Motivated by this, we study a novel but challenging domain inconsistent SSOD
problem. It involves two kinds of distribution shifts among different domains,
including (1) data distribution discrepancy, and (2) class distribution shifts,
which cause existing SSOD methods to produce inaccurate pseudo-labels and hurt
model performance. To address this problem, we propose a novel method, namely
Dual-Curriculum Teacher (DucTeacher). Specifically, DucTeacher consists of two
curriculums: (1) a domain evolving curriculum, which learns from the data
progressively to handle the data distribution discrepancy by estimating the
similarity between domains, and (2) a distribution matching curriculum, which
estimates the class distribution of each unlabeled domain to handle class
distribution shifts. In this way, DucTeacher can calibrate biased pseudo-labels
and handle the domain-inconsistent SSOD problem effectively. DucTeacher shows
its advantages on SODA10M, the largest public semi-supervised autonomous
driving dataset, and COCO, a widely used SSOD benchmark. Experiments show that
DucTeacher achieves new state-of-the-art performance on SODA10M with 2.2 mAP
improvement and on COCO with 0.8 mAP improvement.
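The abstract describes the distribution matching curriculum only at a high level. As a rough illustration of the underlying idea, a teacher's per-class pseudo-label scores can be re-weighted toward the estimated class distribution of the unlabeled domain before thresholding. All function names and the exact re-weighting rule below are assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def calibrate_pseudo_labels(teacher_probs, est_target_dist, source_dist,
                            temperature=1.0):
    """Re-weight per-class teacher probabilities so pseudo-labels better
    match the estimated class distribution of the unlabeled domain.
    Hypothetical sketch; the re-weighting rule is an assumption."""
    ratio = (est_target_dist / (source_dist + 1e-8)) ** temperature
    calibrated = teacher_probs * ratio               # boost under-represented classes
    return calibrated / calibrated.sum(axis=1, keepdims=True)  # renormalize per box

# usage: two candidate boxes over three classes
probs = np.array([[0.5, 0.3, 0.2],
                  [0.4, 0.4, 0.2]])
target_dist = np.array([0.2, 0.5, 0.3])  # estimated unlabeled-domain class frequencies
source_dist = np.array([0.5, 0.3, 0.2])  # labeled-domain class frequencies
cal = calibrate_pseudo_labels(probs, target_dist, source_dist)
```

Classes that are more frequent in the target estimate than in the labeled data receive a higher score, which is one simple way a biased teacher could be nudged toward the target class distribution.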
Related papers
- Dynamic Retraining-Updating Mean Teacher for Source-Free Object Detection [8.334498654271371]
Unsupervised domain adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
This study focuses on source-free object detection (SFOD), which adapts a source-trained detector to an unlabeled target domain without using labeled source data.
arXiv Detail & Related papers (2024-07-23T14:12:57Z) - Tackling Long-Tailed Category Distribution Under Domain Shifts [50.21255304847395]
Existing approaches cannot handle the scenario where a long-tailed class distribution and domain shifts exist simultaneously.
We designed three novel core functional blocks: Distribution Calibrated Classification Loss, Visual-Semantic Mapping, and Semantic-Similarity Guided Augmentation.
Two new datasets were proposed for this problem, named AWA2-LTS and ImageNet-LTS.
arXiv Detail & Related papers (2022-07-20T19:07:46Z) - Extending the WILDS Benchmark for Unsupervised Adaptation [186.90399201508953]
We present the WILDS 2.0 update, which extends 8 of the 10 datasets in the WILDS benchmark of distribution shifts to include curated unlabeled data.
These datasets span a wide range of applications (from histology to wildlife conservation), tasks (classification, regression, and detection), and modalities.
We systematically benchmark state-of-the-art methods that leverage unlabeled data, including domain-invariant, self-training, and self-supervised methods.
arXiv Detail & Related papers (2021-12-09T18:32:38Z) - Cross-Region Domain Adaptation for Class-level Alignment [32.586107376036075]
We propose a method that applies adversarial training to align two feature distributions in the target domain.
It uses a self-training framework to split the image into two regions, which form two distributions to align in the feature space.
We term this approach cross-region adaptation (CRA) to distinguish from the previous methods of aligning different domain distributions.
arXiv Detail & Related papers (2021-09-14T04:13:35Z) - Unsupervised domain adaptation via double classifiers based on high
confidence pseudo label [8.132250810529873]
Unsupervised domain adaptation (UDA) aims to solve the problem of knowledge transfer from labeled source domain to unlabeled target domain.
Many domain adaptation (DA) methods use class centroids to align the local distributions of different domains, that is, to align the same classes across domains.
This work rethinks what alignment between different domains means, and studies how to achieve true alignment between them.
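The centroid-based alignment mentioned in this summary can be sketched as a per-class distance between feature centroids of the two domains. The function name and averaging scheme below are illustrative assumptions, not this paper's method:

```python
import numpy as np

def centroid_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo,
                            num_classes):
    """Squared distance between per-class feature centroids of the source
    and target domains, averaged over classes present in both.
    Illustrative sketch of centroid-based DA alignment."""
    total, matched = 0.0, 0
    for c in range(num_classes):
        s = src_feats[src_labels == c]
        t = tgt_feats[tgt_pseudo == c]
        if len(s) and len(t):                # skip classes missing in a domain
            total += np.sum((s.mean(axis=0) - t.mean(axis=0)) ** 2)
            matched += 1
    return total / max(matched, 1)
```

When the two domains have identical per-class centroids the loss is zero, which is the alignment condition such methods optimize toward; target labels are typically replaced by pseudo-labels, which is where noisy pseudo-labels can misalign classes.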
arXiv Detail & Related papers (2021-05-11T00:51:31Z) - Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z) - Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency of target data under transformations.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes coinciding with either source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z) - Deep Co-Training with Task Decomposition for Semi-Supervised Domain
Adaptation [80.55236691733506]
Semi-supervised domain adaptation (SSDA) aims to adapt models trained from a labeled source domain to a different but related target domain.
We propose to explicitly decompose the SSDA task into two sub-tasks: a semi-supervised learning (SSL) task in the target domain and an unsupervised domain adaptation (UDA) task across domains.
arXiv Detail & Related papers (2020-07-24T17:57:54Z)
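The decomposition described in the last summary can be sketched as a weighted sum of an SSL term on target data and a cross-domain UDA term. The linear-MMD-style alignment and all names below are illustrative assumptions, not the paper's actual losses:

```python
import numpy as np

def ssl_pseudo_label_loss(student_probs, teacher_probs, conf_thresh=0.9):
    """Cross-entropy against confident teacher pseudo-labels on target data."""
    conf = teacher_probs.max(axis=1)
    pseudo = teacher_probs.argmax(axis=1)
    mask = conf >= conf_thresh                       # keep only confident boxes
    ce = -np.log(student_probs[np.arange(len(pseudo)), pseudo] + 1e-8)
    return float((ce * mask).sum() / max(mask.sum(), 1))

def domain_alignment_loss(src_feats, tgt_feats):
    """Linear-MMD-style penalty: distance between domain feature means."""
    return float(np.sum((src_feats.mean(axis=0) - tgt_feats.mean(axis=0)) ** 2))

def ssda_objective(student_probs, teacher_probs, src_feats, tgt_feats, lam=0.1):
    """Decomposed SSDA objective: SSL sub-task (target) + UDA sub-task (cross-domain)."""
    return (ssl_pseudo_label_loss(student_probs, teacher_probs)
            + lam * domain_alignment_loss(src_feats, tgt_feats))
```

Splitting the objective this way lets each sub-task use its own established machinery (pseudo-labeling for SSL, feature alignment for UDA), which is the spirit of the decomposition the summary describes.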
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.