Domain Adaptation based Object Detection for Autonomous Driving in Foggy and Rainy Weather
- URL: http://arxiv.org/abs/2307.09676v4
- Date: Tue, 20 Aug 2024 22:36:09 GMT
- Title: Domain Adaptation based Object Detection for Autonomous Driving in Foggy and Rainy Weather
- Authors: Jinlong Li, Runsheng Xu, Xinyu Liu, Jin Ma, Baolu Li, Qin Zou, Jiaqi Ma, Hongkai Yu
- Abstract summary: Due to the domain gap, a detection model trained under clear weather may not perform well in foggy and rainy conditions.
To bridge the domain gap and improve the performance of object detection in foggy and rainy weather, this paper presents a novel framework for domain-adaptive object detection.
- Score: 44.711384869027775
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Object detection methods for autonomous driving that rely on supervised learning typically assume a consistent feature distribution between the training and testing data; this assumption may fail under different weather conditions. Due to the domain gap, a detection model trained under clear weather may not perform well in foggy and rainy conditions. Overcoming detection bottlenecks in foggy and rainy weather is a real challenge for autonomous vehicles deployed in the wild. To bridge the domain gap and improve the performance of object detection in foggy and rainy weather, this paper presents a novel framework for domain-adaptive object detection. The adaptations at both the image level and object level are intended to minimize the differences in image style and object appearance between domains. Furthermore, to improve the model's performance on challenging examples, we introduce a novel adversarial gradient reversal layer that conducts adversarial mining on difficult instances in addition to domain adaptation. Additionally, we suggest generating an auxiliary domain through data augmentation to enforce a new domain-level metric regularization. Experimental findings on public benchmarks show a substantial improvement in object detection for foggy and rainy driving scenarios.
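The adversarial gradient reversal layer mentioned in the abstract builds on the standard gradient-reversal idea from adversarial domain adaptation: features pass through unchanged in the forward pass, while the gradient from a domain classifier is negated (and scaled) on the way back, so the detector backbone is pushed toward domain-invariant features. The paper's adversarial-mining extension is not reproduced here; the snippet below is only a minimal PyTorch sketch of a plain gradient reversal layer paired with an image-level domain classifier, with the classifier architecture and the scaling factor `lambda_` chosen purely for illustration.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negates (and scales) gradients in the backward pass."""

    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse the gradient so the backbone learns to fool the domain classifier.
        return -ctx.lambda_ * grad_output, None


class ImageLevelDomainClassifier(nn.Module):
    """Predicts source vs. target domain from backbone feature maps (illustrative architecture)."""

    def __init__(self, in_channels: int, lambda_: float = 1.0):
        super().__init__()
        self.lambda_ = lambda_
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(256, 1),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        reversed_features = GradReverse.apply(features, self.lambda_)
        return self.head(reversed_features)  # one domain logit per image


# Usage sketch: backbone features from clear-weather (source) and foggy/rainy (target) images
# feed the classifier; minimizing this loss adapts the backbone adversarially.
if __name__ == "__main__":
    clf = ImageLevelDomainClassifier(in_channels=512)
    src_feat = torch.randn(2, 512, 32, 32)   # source-domain feature maps
    tgt_feat = torch.randn(2, 512, 32, 32)   # target-domain feature maps
    logits = torch.cat([clf(src_feat), clf(tgt_feat)])
    labels = torch.cat([torch.zeros(2, 1), torch.ones(2, 1)])  # 0 = source, 1 = target
    loss = nn.functional.binary_cross_entropy_with_logits(logits, labels)
    loss.backward()  # gradients flowing back to the features are reversed
```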
Related papers
- Enhancing Lidar-based Object Detection in Adverse Weather using Offset Sequences in Time [1.1725016312484975]
Lidar-based object detection is significantly affected by adverse weather conditions such as rain and fog.
Our research provides a comprehensive study of effective methods for mitigating the effects of adverse weather on the reliability of lidar-based object detection.
arXiv Detail & Related papers (2024-01-17T08:31:58Z)
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the safety requirements of driving systems, no solution had previously been proposed for adapting MOT to domain shift at test time.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- DA-RAW: Domain Adaptive Object Detection for Real-World Adverse Weather Conditions [2.048226951354646]
We present an unsupervised domain adaptation framework for object detection in adverse weather conditions.
Our method resolves the style gap by concentrating on style-related information of high-level features.
Using self-supervised contrastive learning, our framework then reduces the weather gap and acquires instance features that are robust to weather corruption (an illustrative contrastive-loss sketch is included after this list).
arXiv Detail & Related papers (2023-09-15T04:37:28Z)
- Domain Adaptive Object Detection for Autonomous Driving under Foggy Weather [25.964194141706923]
This paper proposes a novel domain adaptive object detection framework for autonomous driving under foggy weather.
Our method leverages both image-level and object-level adaptation to diminish the domain discrepancy in image style and object appearance.
Experimental results on public benchmarks show the effectiveness and accuracy of the proposed method.
arXiv Detail & Related papers (2022-10-27T05:09:10Z)
- Unsupervised Foggy Scene Understanding via Self Spatial-Temporal Label Diffusion [51.11295961195151]
We exploit the characteristics of the foggy image sequence of driving scenes to densify the confident pseudo labels.
Based on two properties of the sequential image data, local spatial similarity and adjacent temporal correspondence, we propose a novel Target-Domain driven pseudo label Diffusion scheme.
Our scheme helps the adaptive model achieve 51.92% and 53.84% mean intersection-over-union (mIoU) on two publicly available natural foggy datasets.
arXiv Detail & Related papers (2022-06-10T05:16:50Z)
- Cycle and Semantic Consistent Adversarial Domain Adaptation for Reducing Simulation-to-Real Domain Shift in LiDAR Bird's Eye View [110.83289076967895]
We present a BEV domain adaptation method based on CycleGAN that uses prior semantic classification in order to preserve the information of small objects of interest during the domain adaptation process.
The quality of the generated BEVs has been evaluated using a state-of-the-art 3D object detection framework on the KITTI 3D Object Detection Benchmark.
arXiv Detail & Related papers (2021-04-22T12:47:37Z)
- Multi-Target Domain Adaptation via Unsupervised Domain Classification for Weather Invariant Object Detection [1.773576418078547]
The performance of an object detector significantly degrades if the weather of the training images is different from that of test images.
We propose a novel unsupervised domain classification method which can be used to generalize single-target domain adaptation methods to multi-target domains.
We conduct experiments on the Cityscapes dataset and its synthetic variants, i.e., foggy, rainy, and night.
arXiv Detail & Related papers (2021-03-25T16:59:35Z)
- Unsupervised Domain Adaptation for Spatio-Temporal Action Localization [69.12982544509427]
Spatio-temporal action localization is an important problem in computer vision.
We propose an end-to-end unsupervised domain adaptation algorithm.
We show that significant performance gain can be achieved when spatial and temporal features are adapted separately or jointly.
arXiv Detail & Related papers (2020-10-19T04:25:10Z)
- Object Detection Under Rainy Conditions for Autonomous Vehicles: A Review of State-of-the-Art and Emerging Techniques [5.33024001730262]
This paper presents a tutorial on state-of-the-art techniques for mitigating the influence of rainy conditions on an autonomous vehicle's ability to detect objects.
We survey and analyze the performance of object detection methods trained and tested on visual data captured under clear and rainy conditions.
arXiv Detail & Related papers (2020-06-30T02:05:10Z)
- Cross-domain Object Detection through Coarse-to-Fine Feature Adaptation [62.29076080124199]
This paper proposes a novel coarse-to-fine feature adaptation approach to cross-domain object detection.
At the coarse-grained stage, foreground regions are extracted by adopting the attention mechanism, and aligned according to their marginal distributions.
At the fine-grained stage, we conduct conditional distribution alignment of foregrounds by minimizing the distance between global prototypes of the same category from different domains (an illustrative prototype-alignment sketch is included after this list).
arXiv Detail & Related papers (2020-03-23T13:40:06Z)
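For the coarse-to-fine adaptation entry above (Cross-domain Object Detection through Coarse-to-Fine Feature Adaptation), the fine-grained step can be illustrated with a class-prototype alignment loss: per-class foreground prototypes are computed in each domain and pulled together. The snippet below is a hedged, minimal sketch of that general idea, not the authors' implementation; the feature shapes, the squared-L2 distance, and the helper name `prototype_alignment_loss` are illustrative assumptions.

```python
import torch


def class_prototypes(features: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Average per-class foreground features into one prototype per class.

    features: (N, D) foreground/RoI feature vectors; labels: (N,) class ids.
    Returns (num_classes, D); classes absent from the batch get a zero prototype.
    """
    d = features.size(1)
    protos = torch.zeros(num_classes, d, device=features.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)
    return protos


def prototype_alignment_loss(src_feats, src_labels, tgt_feats, tgt_labels, num_classes: int) -> torch.Tensor:
    """Squared-L2 distance between same-class prototypes from the two domains (illustrative)."""
    src_protos = class_prototypes(src_feats, src_labels, num_classes)
    # In practice the target labels would typically be pseudo-labels.
    tgt_protos = class_prototypes(tgt_feats, tgt_labels, num_classes)
    # Only compare classes that appear in both domains in this batch.
    present = (src_protos.abs().sum(dim=1) > 0) & (tgt_protos.abs().sum(dim=1) > 0)
    if not present.any():
        return src_feats.new_zeros(())
    return ((src_protos[present] - tgt_protos[present]) ** 2).sum(dim=1).mean()


# Usage sketch with random foreground features from a source (clear) and target (adverse-weather) batch.
if __name__ == "__main__":
    src_f, src_y = torch.randn(8, 256), torch.randint(0, 5, (8,))
    tgt_f, tgt_y = torch.randn(6, 256), torch.randint(0, 5, (6,))
    print(prototype_alignment_loss(src_f, src_y, tgt_f, tgt_y, num_classes=5))
```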
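The DA-RAW entry above reduces the weather gap with self-supervised contrastive learning on instance features. As a rough illustration of that idea, the snippet below sketches a standard InfoNCE-style loss that pulls together embeddings of the same instance under clean and weather-corrupted views and pushes apart other instances in the batch; the batch construction, temperature value, and function name are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn.functional as F


def instance_contrastive_loss(feat_clean: torch.Tensor,
                              feat_weather: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style loss: each clean-view instance embedding should match its
    weather-corrupted counterpart and repel the other instances in the batch.

    feat_clean, feat_weather: (N, D) embeddings of the same N instances under
    clean and augmented/adverse-weather views.
    """
    z1 = F.normalize(feat_clean, dim=1)
    z2 = F.normalize(feat_weather, dim=1)
    logits = z1 @ z2.t() / temperature                      # (N, N) similarity logits
    targets = torch.arange(z1.size(0), device=z1.device)    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


# Usage sketch: embeddings of the same detected instances under two weather views.
if __name__ == "__main__":
    clean = torch.randn(16, 128)
    foggy = torch.randn(16, 128)
    print(instance_contrastive_loss(clean, foggy))
```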