Raindrop GS: A Benchmark for 3D Gaussian Splatting under Raindrop Conditions
- URL: http://arxiv.org/abs/2510.17719v1
- Date: Mon, 20 Oct 2025 16:36:15 GMT
- Title: Raindrop GS: A Benchmark for 3D Gaussian Splatting under Raindrop Conditions
- Authors: Zhiqiang Teng, Beibei Lin, Tingting Chen, Zifeng Yuan, Xuanyi Li, Xuanyu Zhang, Shunli Zhang
- Abstract summary: RaindropGS is a benchmark designed to evaluate the full 3DGS pipeline, from unconstrained, raindrop-corrupted images to clear 3DGS reconstructions. First, we collect a real-world raindrop reconstruction dataset in which each scene contains three aligned image sets. Through comprehensive experiments and analyses, we reveal critical insights into the performance limitations of existing 3DGS methods on unconstrained raindrop images.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Gaussian Splatting (3DGS) under raindrop conditions suffers from severe occlusions and optical distortions caused by raindrop contamination on the camera lens, substantially degrading reconstruction quality. Existing benchmarks typically evaluate 3DGS using synthetic raindrop images with known camera poses (constrained images), assuming ideal conditions. However, in real-world scenarios, raindrops often interfere with accurate camera pose estimation and point cloud initialization. Moreover, a significant domain gap between synthetic and real raindrops further impairs generalization. To tackle these issues, we introduce RaindropGS, a comprehensive benchmark designed to evaluate the full 3DGS pipeline, from unconstrained, raindrop-corrupted images to clear 3DGS reconstructions. The benchmark pipeline consists of three parts: data preparation, data processing, and raindrop-aware 3DGS evaluation, covering the types of raindrop interference, camera pose estimation and point cloud initialization, single-image rain removal comparison, and 3D Gaussian training comparison. We first collect a real-world raindrop reconstruction dataset in which each scene contains three aligned image sets: raindrop-focused, background-focused, and rain-free ground truth, enabling a comprehensive evaluation of reconstruction quality under different focus conditions. Through comprehensive experiments and analyses, we reveal critical insights into the performance limitations of existing 3DGS methods on unconstrained raindrop images and the varying impact of different pipeline components: the effect of camera focus position on 3DGS reconstruction performance, and the interference caused by inaccurate pose and point cloud initialization. These insights establish clear directions for developing more robust 3DGS methods under raindrop conditions.
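The raindrop-aware evaluation described above ultimately scores rendered views against the rain-free ground-truth set. As an illustration of how such per-view scoring is commonly computed (a minimal sketch with synthetic data, not the benchmark's actual evaluation code), a PSNR helper might look like:

```python
import numpy as np

def psnr(rendered, reference, max_val=1.0):
    """Peak signal-to-noise ratio (dB) between a rendered view and its
    rain-free reference image, both assumed to lie in [0, max_val]."""
    mse = np.mean((rendered.astype(np.float64) - reference.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

# Toy check: an identical render scores infinitely high; adding noise
# to the render lowers the score.
rng = np.random.default_rng(0)
gt = rng.random((32, 32, 3))
noisy = np.clip(gt + 0.05 * rng.standard_normal(gt.shape), 0.0, 1.0)
print(psnr(gt, gt))      # inf
print(psnr(noisy, gt))   # finite, around 26 dB for this noise level
```

Benchmarks of this kind typically report PSNR alongside SSIM and LPIPS, averaged over held-out views per scene.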
Related papers
- Rethinking Rainy 3D Scene Reconstruction via Perspective Transforming and Brightness Tuning [8.552623283033695]
Existing datasets often overlook two critical characteristics of real rainy 3D scenes. We construct a new dataset named OmniRain3D that incorporates perspective heterogeneity and brightness dynamicity. We propose an end-to-end reconstruction framework named REVR-GSNet, achieving high-fidelity reconstruction of clean 3D scenes from rain-degraded inputs.
arXiv Detail & Related papers (2025-11-10T05:57:55Z)
- WeatherGS: 3D Scene Reconstruction in Adverse Weather Conditions via Gaussian Splatting [5.240297013713328]
WeatherGS is a 3DGS-based framework for reconstructing clear scenes from multi-view images captured under different weather conditions. We propose a dense-to-sparse preprocessing strategy that sequentially removes dense particles with an Atmospheric Effect Filter. Finally, we train a set of 3D Gaussians on the processed images, using the generated masks to exclude occluded areas.
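Excluding occluded areas via masks, as described above, typically reduces to a masked photometric loss during Gaussian optimization. A minimal NumPy sketch (function name and shapes are illustrative, not WeatherGS's actual code):

```python
import numpy as np

def masked_l1_loss(rendered, target, mask):
    """Mean absolute error computed only over pixels where mask == 1,
    so weather-occluded regions contribute nothing to the loss."""
    valid = mask.astype(bool)
    if not valid.any():
        return 0.0
    return float(np.mean(np.abs(rendered[valid] - target[valid])))

# With a full mask every pixel counts; zeroing the mask over a corrupted
# region removes its error from the objective entirely.
rendered = np.ones((4, 4))
target = np.zeros((4, 4))
print(masked_l1_loss(rendered, target, np.ones((4, 4))))   # 1.0
print(masked_l1_loss(rendered, target, np.zeros((4, 4))))  # 0.0
```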
arXiv Detail & Related papers (2024-12-25T10:16:57Z)
- PF3plat: Pose-Free Feed-Forward 3D Gaussian Splatting [54.7468067660037]
PF3plat sets a new state of the art across all benchmarks, supported by comprehensive ablation studies validating our design choices. Our framework capitalizes on the fast speed, scalability, and high-quality 3D reconstruction and view synthesis capabilities of 3DGS.
arXiv Detail & Related papers (2024-10-29T15:28:15Z)
- DeRainGS: Gaussian Splatting for Enhanced Scene Reconstruction in Rainy Environments [4.86090922870914]
This study introduces the novel task of 3D Reconstruction in Rainy Environments (3DRRE).
To benchmark this task, we construct the HydroViews dataset that comprises a diverse collection of both synthesized and real-world scene images.
We propose DeRainGS, the first 3DGS method tailored for reconstruction in adverse rainy environments.
arXiv Detail & Related papers (2024-08-21T11:39:18Z)
- RainyScape: Unsupervised Rainy Scene Reconstruction using Decoupled Neural Rendering [50.14860376758962]
We propose RainyScape, an unsupervised framework for reconstructing clean scenes from a collection of multi-view rainy images.
Based on the spectral bias property of neural networks, we first optimize the neural rendering pipeline to obtain a low-frequency scene representation.
We jointly optimize the two modules, driven by the proposed adaptive direction-sensitive gradient-based reconstruction loss.
arXiv Detail & Related papers (2024-04-17T14:07:22Z)
- Sunshine to Rainstorm: Cross-Weather Knowledge Distillation for Robust 3D Object Detection [26.278415287992964]
Previous research has attempted to address this by simulating the noise from rain to improve the robustness of detection models.
We propose a novel rain simulation method, termed DRET, that unifies Dynamics and Rainy Environment Theory.
We also present a Sunny-to-Rainy Knowledge Distillation approach to enhance 3D detection under rainy conditions.
arXiv Detail & Related papers (2024-02-28T17:21:02Z)
- MonoTDP: Twin Depth Perception for Monocular 3D Object Detection in Adverse Scenes [49.21187418886508]
This paper proposes a monocular 3D detection model designed to perceive twin depth in adverse scenes, termed MonoTDP.
We first introduce an adaptive learning strategy to aid the model in handling uncontrollable weather conditions, significantly resisting degradation caused by various degrading factors.
Then, to address the depth/content loss in adverse regions, we propose a novel twin depth perception module that simultaneously estimates scene and object depth.
arXiv Detail & Related papers (2023-05-18T13:42:02Z)
- Semi-MoreGAN: A New Semi-supervised Generative Adversarial Network for Mixture of Rain Removal [18.04268933542476]
We propose a new SEMI-supervised Mixture Of rain REmoval Generative Adversarial Network (Semi-MoreGAN).
Semi-MoreGAN consists of four key modules: (i) a novel attentional depth prediction network to provide precise depth estimation; (ii) a context feature prediction network composed of several well-designed detailed residual blocks to produce detailed image context features; (iii) a pyramid depth-guided non-local network to effectively integrate the image context with the depth information and produce the final rain-free images; and (iv) a comprehensive semi-supervised loss function.
arXiv Detail & Related papers (2022-04-28T11:35:26Z)
- RCDNet: An Interpretable Rain Convolutional Dictionary Network for Single Image Deraining [49.99207211126791]
We build a novel deep architecture called the rain convolutional dictionary network (RCDNet).
RCDNet embeds the intrinsic priors of rain streaks and has clear interpretability.
By end-to-end training such an interpretable network, all involved rain kernels and proximal operators can be automatically extracted.
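The dictionary model underlying such networks expresses a rainy observation as a clean background plus a small set of rain kernels convolved with sparse rain maps, O = B + Σ_k C_k ⊗ M_k. A hedged NumPy sketch of that forward (generative) model follows; the naive convolution and made-up kernel/map values are illustrative, not the authors' implementation:

```python
import numpy as np

def conv2d_same(image, kernel):
    """Naive 'same'-size 2D sliding-window product (cross-correlation form,
    which is fine for a sketch; not optimized)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))
    out = np.zeros_like(image, dtype=np.float64)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def rainy_image(background, kernels, rain_maps):
    """O = B + sum_k C_k (*) M_k: background plus rain layers synthesized
    from a small dictionary of rain kernels and sparse rain maps."""
    rain = sum(conv2d_same(m, c) for c, m in zip(kernels, rain_maps))
    return background + rain

# A delta kernel simply copies the rain map onto the background.
bg = np.zeros((5, 5))
kernel = np.zeros((3, 3)); kernel[1, 1] = 1.0
rain_map = np.zeros((5, 5)); rain_map[2, 2] = 1.0
print(rainy_image(bg, [kernel], [rain_map])[2, 2])  # 1.0
```

In RCDNet the kernels and the proximal operators acting on the maps are learned end-to-end rather than fixed as here.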
arXiv Detail & Related papers (2021-07-14T16:08:11Z)
- From Rain Generation to Rain Removal [67.71728610434698]
We build a full Bayesian generative model for rainy images in which the rain layer is parameterized as a generator.
We employ the variational inference framework to approximate the expected statistical distribution of rainy images.
Comprehensive experiments substantiate that the proposed model can faithfully extract the complex rain distribution.
arXiv Detail & Related papers (2020-08-08T18:56:51Z)
- Structural Residual Learning for Single Image Rain Removal [48.87977695398587]
This study proposes a new network architecture that enforces the output residual of the network to possess intrinsic rain structures.
Such a structural residual setting guarantees that the rain layer extracted by the network complies with prior knowledge of general rain streaks.
arXiv Detail & Related papers (2020-05-19T05:52:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.