REHEARSE-3D: A Multi-modal Emulated Rain Dataset for 3D Point Cloud De-raining
- URL: http://arxiv.org/abs/2504.21699v1
- Date: Wed, 30 Apr 2025 14:43:38 GMT
- Title: REHEARSE-3D: A Multi-modal Emulated Rain Dataset for 3D Point Cloud De-raining
- Authors: Abu Mohammed Raisuddin, Jesper Holmblad, Hamed Haghighi, Yuri Poledna, Maikol Funk Drechsler, Valentina Donzella, Eren Erdal Aksoy
- Abstract summary: We release a new, large-scale, multi-modal emulated rain dataset, REHEARSE-3D, to promote research advancements in 3D point cloud de-raining. First, it is the largest point-wise annotated dataset, and second, it is the only one with high-resolution LiDAR data enriched with 4D Radar point clouds. We benchmark raindrop detection and removal in fused LiDAR and 4D Radar point clouds.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sensor degradation poses a significant challenge in autonomous driving. During heavy rainfall, the interference from raindrops can adversely affect the quality of LiDAR point clouds, resulting in, for instance, inaccurate point measurements. This, in turn, can potentially lead to safety concerns if autonomous driving systems are not weather-aware, i.e., if they are unable to discern such changes. In this study, we release a new, large-scale, multi-modal emulated rain dataset, REHEARSE-3D, to promote research advancements in 3D point cloud de-raining. Distinct from the most relevant competitors, our dataset is unique in several respects. First, it is the largest point-wise annotated dataset, and second, it is the only one with high-resolution LiDAR data (LiDAR-256) enriched with 4D Radar point clouds logged in both daytime and nighttime conditions in a controlled weather environment. Furthermore, REHEARSE-3D involves rain-characteristic information, which is of significant value not only for sensor noise modeling but also for analyzing the impact of weather at a point level. Leveraging REHEARSE-3D, we benchmark raindrop detection and removal in fused LiDAR and 4D Radar point clouds. Our comprehensive study further evaluates the performance of various statistical and deep-learning models. Upon publication, the dataset and benchmark models will be made publicly available at: https://sporsho.github.io/REHEARSE3D.
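The abstract mentions benchmarking statistical models for raindrop removal without detailing them. As a purely illustrative sketch of what a simple statistical de-raining filter can look like (this is a common baseline, not the paper's method; the function name and thresholds below are hypothetical), the following implements radius outlier removal with NumPy: rain returns tend to be sparse and isolated, while real surfaces produce dense local neighborhoods.

```python
# Illustrative statistical de-raining baseline (NOT the REHEARSE-3D method):
# radius outlier removal drops points that have fewer than `min_neighbors`
# other points within `radius`.
import numpy as np

def radius_outlier_removal(points: np.ndarray, radius: float = 0.5,
                           min_neighbors: int = 2) -> np.ndarray:
    """Return a boolean mask: True = kept (inlier), False = removed (noise)."""
    # Pairwise distances; fine for small clouds (O(N^2) memory).
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    # Count neighbors within `radius`, excluding the point itself.
    neighbor_counts = (dist < radius).sum(axis=1) - 1
    return neighbor_counts >= min_neighbors

# Dense cluster (a "surface") plus two isolated points ("raindrops").
cloud = np.array([
    [0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.1, 0.1, 0.0],
    [5.0, 5.0, 5.0],   # isolated -> removed
    [-4.0, 2.0, 7.0],  # isolated -> removed
])
mask = radius_outlier_removal(cloud, radius=0.5, min_neighbors=2)
print(mask.tolist())  # first four True, last two False
```

The O(N^2) distance matrix is only for clarity; a practical implementation would use a k-d tree for neighbor queries on full-resolution LiDAR scans.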
Related papers
- RobuRCDet: Enhancing Robustness of Radar-Camera Fusion in Bird's Eye View for 3D Object Detection [68.99784784185019]
Poor lighting or adverse weather conditions degrade camera performance.
Radar suffers from noise and positional ambiguity.
We propose RobuRCDet, a robust object detection model in BEV.
arXiv Detail & Related papers (2025-02-18T17:17:38Z)
- Robust Single Object Tracking in LiDAR Point Clouds under Adverse Weather Conditions [4.133835011820212]
3D single object tracking in LiDAR point clouds is a critical task for outdoor perception. Despite the impressive performance of current 3DSOT methods, evaluating them on clean datasets inadequately reflects their comprehensive performance. One of the main obstacles is the lack of adverse weather benchmarks for the evaluation of 3DSOT.
arXiv Detail & Related papers (2025-01-13T08:44:35Z)
- TripleMixer: A 3D Point Cloud Denoising Model for Adverse Weather [6.752848431431841]
Real-world adverse weather conditions, such as rain, fog, and snow, introduce significant noise and interference.
Existing datasets often suffer from limited weather diversity and small dataset sizes.
We propose a novel point cloud denoising model, TripleMixer, comprising three mixer layers.
arXiv Detail & Related papers (2024-08-25T10:45:52Z)
- Sunshine to Rainstorm: Cross-Weather Knowledge Distillation for Robust 3D Object Detection [26.278415287992964]
Previous research has attempted to address this by simulating the noise from rain to improve the robustness of detection models.
We propose a novel rain simulation method, termed DRET, that unifies Dynamics and Rainy Environment Theory.
We also present a Sunny-to-Rainy Knowledge Distillation approach to enhance 3D detection under rainy conditions.
arXiv Detail & Related papers (2024-02-28T17:21:02Z)
- Challenges of YOLO Series for Object Detection in Extremely Heavy Rain: CALRA Simulator based Synthetic Evaluation Dataset [0.0]
Object detection by diverse sensors (e.g., LiDAR, radar, and camera) should be prioritized for autonomous vehicles.
These sensors must detect objects accurately and quickly in diverse weather conditions, but they struggle to detect objects consistently in bad weather with rain, snow, or fog.
In this study, based on experimentally obtained raindrop data from precipitation conditions, we constructed a novel dataset that can test diverse network models under various precipitation conditions.
arXiv Detail & Related papers (2023-12-13T08:45:57Z)
- Ithaca365: Dataset and Driving Perception under Repeated and Challenging Weather Conditions [0.0]
We present a new dataset to enable robust autonomous driving via a novel data collection process.
The dataset includes images and point clouds from cameras and LiDAR sensors, along with high-precision GPS/INS.
We demonstrate the uniqueness of this dataset by analyzing the performance of baselines in amodal segmentation of road and objects.
arXiv Detail & Related papers (2022-08-01T22:55:32Z)
- LiDAR Snowfall Simulation for Robust 3D Object Detection [116.10039516404743]
We propose a physically based method to simulate the effect of snowfall on real clear-weather LiDAR point clouds.
Our method samples snow particles in 2D space for each LiDAR line and uses the induced geometry to modify the measurement for each LiDAR beam.
We use our simulation to generate partially synthetic snowy LiDAR data and leverage these data for training 3D object detection models that are robust to snowfall.
arXiv Detail & Related papers (2022-03-28T21:48:26Z)
- 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D Object Detection [111.32054128362427]
In safety-critical settings, robustness on out-of-distribution and long-tail samples is fundamental to circumvent dangerous issues.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share the open-source CrashD: a synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z)
- Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather [92.84066576636914]
This work addresses the challenging task of LiDAR-based 3D object detection in foggy weather.
We tackle this problem by simulating physically accurate fog into clear-weather scenes.
We are the first to provide strong 3D object detection baselines on the Seeing Through Fog dataset.
arXiv Detail & Related papers (2021-08-11T14:37:54Z)
- Lidar Light Scattering Augmentation (LISA): Physics-based Simulation of Adverse Weather Conditions for 3D Object Detection [60.89616629421904]
Lidar-based object detectors are critical parts of the 3D perception pipeline in autonomous navigation systems such as self-driving cars.
They are sensitive to adverse weather conditions such as rain, snow, and fog due to reduced signal-to-noise ratio (SNR) and signal-to-background ratio (SBR).
arXiv Detail & Related papers (2021-07-14T21:10:47Z)
- PC-DAN: Point Cloud based Deep Affinity Network for 3D Multi-Object Tracking (Accepted as an extended abstract in JRDB-ACT Workshop at CVPR21) [68.12101204123422]
A point cloud is a dense compilation of spatial data in 3D coordinates.
We propose a PointNet-based approach for 3D Multi-Object Tracking (MOT).
arXiv Detail & Related papers (2021-06-03T05:36:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.