V2X-DGW: Domain Generalization for Multi-agent Perception under Adverse Weather Conditions
- URL: http://arxiv.org/abs/2403.11371v5
- Date: Tue, 24 Sep 2024 15:57:10 GMT
- Title: V2X-DGW: Domain Generalization for Multi-agent Perception under Adverse Weather Conditions
- Authors: Baolu Li, Jinlong Li, Xinyu Liu, Runsheng Xu, Zhengzhong Tu, Jiacheng Guo, Xiaopeng Li, Hongkai Yu
- Abstract summary: We propose a Domain Generalization based approach, named V2X-DGW, for LiDAR-based 3D object detection in multi-agent perception systems under adverse weather conditions.
Our research aims not only to maintain favorable multi-agent performance in clean weather but also to improve performance in unseen adverse weather conditions by learning only on clean weather data.
- Score: 36.33595322964018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current LiDAR-based Vehicle-to-Everything (V2X) multi-agent perception systems have shown significant success on 3D object detection. While these models perform well in the clean weather they are trained on, they struggle in unseen adverse weather conditions because of the domain gap. In this paper, we propose a Domain Generalization based approach, named V2X-DGW, for LiDAR-based 3D object detection in multi-agent perception systems under adverse weather conditions. Our research aims not only to maintain favorable multi-agent performance in clean weather but also to improve performance in unseen adverse weather conditions by learning only on clean weather data. To realize Domain Generalization, we first introduce the Adaptive Weather Augmentation (AWA) to mimic unseen adverse weather conditions, and then propose two alignments for generalizable representation learning: Trust-region Weather-invariant Alignment (TWA) and Agent-aware Contrastive Alignment (ACA). To evaluate this research, we add fog, rain, and snow conditions to two public multi-agent datasets using physics-based models, resulting in two new datasets: OPV2V-w and V2XSet-w. Extensive experiments demonstrate that our V2X-DGW achieves significant improvements under unseen adverse weather conditions.
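The abstract does not spell out how AWA perturbs the point clouds; as a rough illustration only, a generic adverse-weather-style LiDAR augmentation (the `weather_augment` helper and its parameters are hypothetical, not the paper's AWA) might drop returns, jitter coordinates, and attenuate intensity:

```python
import numpy as np

def weather_augment(points, drop_prob=0.1, jitter_std=0.02, atten=0.9, rng=None):
    """Generic adverse-weather-style augmentation for a LiDAR scan.

    points: (N, 4) array of (x, y, z, intensity).
    Randomly drops returns (lost echoes), jitters coordinates
    (backscatter/range noise), and attenuates intensity (weaker reflections).
    """
    rng = np.random.default_rng(rng)
    keep = rng.random(len(points)) > drop_prob                   # lost returns
    pts = points[keep].copy()
    pts[:, :3] += rng.normal(0.0, jitter_std, pts[:, :3].shape)  # range noise
    pts[:, 3] *= atten                                           # attenuation
    return pts

scan = np.random.default_rng(1).random((1000, 4)).astype(np.float32)
aug = weather_augment(scan, rng=0)
```

During training, such perturbed copies of clean-weather scans would stand in for unseen weather domains.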
Related papers
- UMDATrack: Unified Multi-Domain Adaptive Tracking Under Adverse Weather Conditions [73.71632291123008]
We propose UMDATrack, which maintains high-quality target state prediction under various adverse weather conditions.
Our code is available at https://github.com/Z-Z188/UMDATrack.
arXiv Detail & Related papers (2025-07-01T10:43:57Z)
- Robust Single Object Tracking in LiDAR Point Clouds under Adverse Weather Conditions [4.133835011820212]
3D single object tracking in LiDAR point clouds is a critical task for outdoor perception.
Despite the impressive performance of current 3DSOT methods, evaluating them only on clean datasets does not fully reflect their robustness.
One of the main obstacles is the lack of adverse weather benchmarks for the evaluation of 3DSOT.
arXiv Detail & Related papers (2025-01-13T08:44:35Z)
- MonoWAD: Weather-Adaptive Diffusion Model for Robust Monocular 3D Object Detection [22.277210748714378]
Existing methods mainly focus on performing 3D detection in ideal weather conditions, characterized by scenarios with clear and optimal visibility.
We introduce MonoWAD, a novel weather-robust monocular 3D object detector with a weather-adaptive diffusion model.
Experiments under various weather conditions demonstrate that MonoWAD achieves weather-robust monocular 3D object detection.
arXiv Detail & Related papers (2024-07-23T12:58:49Z)
- UniMix: Towards Domain Adaptive and Generalizable LiDAR Semantic Segmentation in Adverse Weather [55.95708988160047]
LiDAR semantic segmentation (LSS) is a critical task in autonomous driving.
Prior LSS methods are developed and evaluated on datasets from the same domain under clear weather.
We propose UniMix, a universal method that enhances the adaptability and generalizability of LSS models.
arXiv Detail & Related papers (2024-04-08T02:02:15Z)
- Genuine Knowledge from Practice: Diffusion Test-Time Adaptation for Video Adverse Weather Removal [53.15046196592023]
We introduce test-time adaptation into adverse weather removal in videos.
We propose the first framework that integrates test-time adaptation into the iterative diffusion reverse process.
arXiv Detail & Related papers (2024-03-12T14:21:30Z)
- DI-V2X: Learning Domain-Invariant Representation for Vehicle-Infrastructure Collaborative 3D Object Detection [78.09431523221458]
DI-V2X aims to learn Domain-Invariant representations through a new distillation framework.
DI-V2X comprises three essential components: a domain-mixing instance augmentation (DMA) module, a progressive domain-invariant distillation (PDD) module, and a domain-adaptive fusion (DAF) module.
arXiv Detail & Related papers (2023-12-25T14:40:46Z)
- DA-RAW: Domain Adaptive Object Detection for Real-World Adverse Weather Conditions [2.048226951354646]
We present an unsupervised domain adaptation framework for object detection in adverse weather conditions.
Our method resolves the style gap by concentrating on style-related information of high-level features.
Using self-supervised contrastive learning, our framework then reduces the weather gap and acquires instance features that are robust to weather corruption.
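DA-RAW's exact loss is not given in this summary; a standard InfoNCE-style contrastive objective of the kind such self-supervised frameworks build on can be sketched as follows (the `info_nce` helper and its parameters are illustrative assumptions, e.g. pairing features from a clean view with features from a weather-corrupted view):

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.1):
    """Standard InfoNCE loss: each anchor's positive is the matching row
    in `positives`; all other rows serve as negatives.
    anchors, positives: (N, D) feature matrices."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))             # positives on the diagonal

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))
loss_matched = info_nce(feats, feats, temperature=0.05)      # aligned pairs
loss_mismatched = info_nce(feats, feats[::-1], temperature=0.05)
```

Minimizing such a loss pulls clean and corrupted views of the same instance together while pushing different instances apart, which is the sense in which the learned features become robust to weather corruption.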
arXiv Detail & Related papers (2023-09-15T04:37:28Z)
- Gradient-based Maximally Interfered Retrieval for Domain Incremental 3D Object Detection [7.448224178732052]
We propose Gradient-based Maximally Interfered Retrieval (GMIR) for 3D object detection in all weather conditions.
GMIR retrieves samples from the previous domain dataset whose gradient vectors show maximal interference with the gradient vector of the current update.
Our 3D object detection experiments on the SeeingThroughFog (STF) dataset show that GMIR not only overcomes forgetting but also offers competitive performance.
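The retrieval criterion just described can be sketched directly: score each replay sample's gradient by its dot product with the current update's gradient and keep the most negative (most interfering) ones. The helper below is an illustrative reconstruction, not the authors' implementation:

```python
import numpy as np

def max_interference_indices(replay_grads, current_grad, k=2):
    """Pick the k replay samples whose gradients interfere most with the
    current update, i.e. have the most negative dot product with it.
    replay_grads: (M, D) per-sample gradients from the previous domain.
    current_grad: (D,) gradient of the current (new-domain) update."""
    interference = replay_grads @ current_grad  # negative => conflicting
    return np.argsort(interference)[:k]         # most negative first

replay_grads = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0]])
current_grad = np.array([1.0, 0.0])
idx = max_interference_indices(replay_grads, current_grad, k=2)
```

Replaying exactly the samples the current update would most damage is what lets such a scheme counteract forgetting of the earlier domain.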
arXiv Detail & Related papers (2023-04-27T18:35:20Z)
- Domain Adaptive Object Detection for Autonomous Driving under Foggy Weather [25.964194141706923]
This paper proposes a novel domain adaptive object detection framework for autonomous driving under foggy weather.
Our method leverages both image-level and object-level adaptation to diminish the domain discrepancy in image style and object appearance.
Experimental results on public benchmarks show the effectiveness and accuracy of the proposed method.
arXiv Detail & Related papers (2022-10-27T05:09:10Z)
- An Unsupervised Domain Adaptive Approach for Multimodal 2D Object Detection in Adverse Weather Conditions [5.217255784808035]
We propose an unsupervised domain adaptation framework to bridge the domain gap between source and target domains.
We use a data augmentation scheme that simulates weather distortions to add domain confusion and prevent overfitting on the source data.
Experiments performed on the DENSE dataset show that our method can substantially alleviate the domain gap.
arXiv Detail & Related papers (2022-03-07T18:10:40Z)
- Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather [92.84066576636914]
This work addresses the challenging task of LiDAR-based 3D object detection in foggy weather.
We tackle this problem by simulating physically accurate fog into clear-weather scenes.
We are the first to provide strong 3D object detection baselines on the Seeing Through Fog dataset.
arXiv Detail & Related papers (2021-08-11T14:37:54Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
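Physics-based simulators of this kind typically build on Beer-Lambert attenuation, where received power decays exponentially with the two-way path through the scattering medium; returns that fall below the receiver's noise floor are lost, which is how the SNR drop manifests. The sketch below illustrates that general idea only and is not LISA's actual model (the `attenuate_returns` helper and its parameter values are assumptions):

```python
import numpy as np

def attenuate_returns(points, intensity, alpha=0.06, noise_floor=0.01):
    """Beer-Lambert-style fog attenuation for LiDAR returns.
    points: (N, 3) xyz; intensity: (N,) received power in [0, 1].
    alpha: extinction coefficient [1/m]."""
    r = np.linalg.norm(points, axis=1)          # range to each return
    att = intensity * np.exp(-2.0 * alpha * r)  # two-way attenuation
    keep = att > noise_floor                    # below the floor => lost echo
    return points[keep], att[keep]

pts = np.array([[1.0, 0.0, 0.0], [100.0, 0.0, 0.0]])
inten = np.array([1.0, 1.0])
kept_pts, kept_int = attenuate_returns(pts, inten)  # distant return is lost
```

The exponential range dependence is why distant objects vanish first in fog while nearby ones merely dim.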
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- Robustness of Object Detectors in Degrading Weather Conditions [7.91378990016322]
State-of-the-art object detection systems for autonomous driving achieve promising results in clear weather conditions.
These systems need to work in degrading weather conditions, such as rain, fog and snow.
Most approaches evaluate only on the KITTI dataset, which consists only of clear weather scenes.
arXiv Detail & Related papers (2021-06-16T13:56:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.