WeatherGS: 3D Scene Reconstruction in Adverse Weather Conditions via Gaussian Splatting
- URL: http://arxiv.org/abs/2412.18862v3
- Date: Wed, 12 Feb 2025 03:13:48 GMT
- Title: WeatherGS: 3D Scene Reconstruction in Adverse Weather Conditions via Gaussian Splatting
- Authors: Chenghao Qian, Yuhu Guo, Wenjing Li, Gustav Markkula
- Abstract summary: WeatherGS is a 3DGS-based framework for reconstructing clear scenes from multi-view images captured under different weather conditions. We propose a dense-to-sparse preprocess strategy, which sequentially removes the dense particles with an Atmospheric Effect Filter and then extracts relatively sparse occlusion masks with a Lens Effect Detector. Finally, we train a set of 3D Gaussians on the processed images and generated masks, which exclude occluded areas.
- Score: 5.240297013713328
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D Gaussian Splatting (3DGS) has gained significant attention for 3D scene reconstruction, but still struggles with complex outdoor environments, especially under adverse weather. This is because 3DGS treats the artifacts caused by adverse weather as part of the scene and reconstructs them directly, largely reducing the clarity of the reconstructed scene. To address this challenge, we propose WeatherGS, a 3DGS-based framework for reconstructing clear scenes from multi-view images captured under different weather conditions. Specifically, we explicitly categorize multi-weather artifacts into dense particles and lens occlusions, which have very different characteristics: the former are caused by snowflakes and raindrops in the air, while the latter are caused by precipitation on the camera lens. In light of this, we propose a dense-to-sparse preprocess strategy, which sequentially removes the dense particles with an Atmospheric Effect Filter (AEF) and then extracts relatively sparse occlusion masks with a Lens Effect Detector (LED). Finally, we train a set of 3D Gaussians on the processed images and generated masks, which exclude occluded areas, and accurately recover the underlying clear scene via Gaussian splatting. We construct a diverse and challenging benchmark to facilitate the evaluation of 3D reconstruction under complex weather scenarios. Extensive experiments on this benchmark demonstrate that WeatherGS consistently produces high-quality, clean scenes across various weather scenarios, outperforming existing state-of-the-art methods. See project page: https://jumponthemoon.github.io/weather-gs.
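The dense-to-sparse preprocess described above can be sketched in a few lines. Note that the abstract does not specify how the AEF or LED work internally, so the two functions below are hypothetical stand-ins (simple median-based thresholding on synthetic images), illustrating only the sequential structure of the pipeline, not the paper's actual models.

```python
# Minimal sketch of the dense-to-sparse preprocess: AEF first, LED second,
# yielding (cleaned image, occlusion mask) pairs for masked 3DGS training.
# AEF/LED internals here are hypothetical placeholders, not the paper's method.
import numpy as np

def atmospheric_effect_filter(image: np.ndarray) -> np.ndarray:
    """Stand-in AEF: suppress bright, small-scale 'particle' pixels
    by clipping outliers far above the global median back toward it."""
    median = np.median(image)
    return np.where(image > median + 0.3, median, image)

def lens_effect_detector(image: np.ndarray) -> np.ndarray:
    """Stand-in LED: mark remaining sparse dark occlusions (True = occluded)."""
    return image < np.median(image) - 0.3

def dense_to_sparse_preprocess(views: list) -> list:
    """Sequentially apply AEF then LED to each multi-view image."""
    processed = []
    for img in views:
        cleaned = atmospheric_effect_filter(img)  # remove dense particles
        mask = lens_effect_detector(cleaned)      # extract sparse occlusions
        processed.append((cleaned, mask))
    return processed

# Toy example: a flat scene with one bright "snowflake" and one dark lens drop.
img = np.full((8, 8), 0.5)
img[2, 2] = 1.0   # dense airborne particle
img[5, 5] = 0.0   # lens occlusion
(cleaned, mask), = dense_to_sparse_preprocess([img])
print(cleaned[2, 2])   # → 0.5 (particle suppressed to background)
print(int(mask.sum())) # → 1 (only the occluded pixel is masked)
```

In the full method, the resulting masks would be used to exclude occluded pixels from the 3D Gaussian training loss, so the splatting step reconstructs only the underlying clear scene.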
Related papers
- Dark-EvGS: Event Camera as an Eye for Radiance Field in the Dark [51.68144172958247]
We propose Dark-EvGS, the first event-assisted 3DGS framework that enables the reconstruction of bright frames from arbitrary viewpoints. Our method outperforms existing methods on radiance field reconstruction under challenging low-light conditions.
arXiv Detail & Related papers (2025-07-16T05:54:33Z) - WeatherEdit: Controllable Weather Editing with 4D Gaussian Field [5.240297013713328]
We present WeatherEdit, a novel weather editing pipeline for generating realistic weather effects in 3D scenes. Our approach is structured into two key components: weather background editing and weather particle construction. Experiments on multiple driving datasets demonstrate that WeatherEdit can generate diverse weather effects with controllable condition severity.
arXiv Detail & Related papers (2025-05-26T19:10:47Z) - DehazeGS: Seeing Through Fog with 3D Gaussian Splatting [17.119969983512533]
We introduce DehazeGS, a method capable of decomposing and rendering a fog-free background from participating media.
Experiments on both synthetic and real-world foggy datasets demonstrate that DehazeGS achieves state-of-the-art performance.
arXiv Detail & Related papers (2025-01-07T09:47:46Z) - Gaussian Splashing: Direct Volumetric Rendering Underwater [6.2122699483618]
We present a new method that takes only a few minutes for reconstruction and renders novel underwater scenes at 140 FPS. Named Gaussian Splashing, our method unifies the strengths and speed of 3DGS with an image formation model for capturing scattering. It reveals distant scene details with far greater clarity than other methods, dramatically improving reconstructed and rendered images.
arXiv Detail & Related papers (2024-11-29T10:04:38Z) - Gaussian Scenes: Pose-Free Sparse-View Scene Reconstruction using Depth-Enhanced Diffusion Priors [5.407319151576265]
We introduce a generative approach for pose-free (without camera parameters) reconstruction of 360 scenes from a sparse set of 2D images.
We propose an image-to-image generative model designed to inpaint missing details and remove artifacts in novel view renders and depth maps of a 3D scene.
arXiv Detail & Related papers (2024-11-24T19:34:58Z) - EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z) - DeRainGS: Gaussian Splatting for Enhanced Scene Reconstruction in Rainy Environments [4.86090922870914]
This study introduces the novel task of 3D Reconstruction in Rainy Environments (3DRRE).
To benchmark this task, we construct the HydroViews dataset that comprises a diverse collection of both synthesized and real-world scene images.
We propose DeRainGS, the first 3DGS method tailored for reconstruction in adverse rainy environments.
arXiv Detail & Related papers (2024-08-21T11:39:18Z) - Event3DGS: Event-Based 3D Gaussian Splatting for High-Speed Robot Egomotion [54.197343533492486]
Event3DGS can reconstruct high-fidelity 3D structure and appearance under high-speed egomotion.
Experiments on multiple synthetic and real-world datasets demonstrate the superiority of Event3DGS compared with existing event-based dense 3D scene reconstruction frameworks.
Our framework also allows one to incorporate a few motion-blurred frame-based measurements into the reconstruction process to further improve appearance fidelity without loss of structural accuracy.
arXiv Detail & Related papers (2024-06-05T06:06:03Z) - EvaGaussians: Event Stream Assisted Gaussian Splatting from Blurry Images [36.91327728871551]
3D Gaussian Splatting (3D-GS) has demonstrated exceptional capabilities in 3D scene reconstruction and novel view synthesis. We introduce Event Stream Assisted Gaussian Splatting (EvaGaussians), a novel approach that integrates event streams captured by an event camera to assist in reconstructing high-quality 3D-GS from blurry images.
arXiv Detail & Related papers (2024-05-29T04:59:27Z) - Sp2360: Sparse-view 360 Scene Reconstruction using Cascaded 2D Diffusion Priors [51.36238367193988]
We tackle sparse-view reconstruction of a 360 3D scene using priors from latent diffusion models (LDM).
We present SparseSplat360, a method that employs a cascade of in-painting and artifact removal models to fill in missing details and clean novel views.
Our method generates entire 360 scenes from as few as 9 input views, with a high degree of foreground and background detail.
arXiv Detail & Related papers (2024-05-26T11:01:39Z) - VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction [59.40711222096875]
We present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting.
Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets.
arXiv Detail & Related papers (2024-02-27T11:40:50Z) - ScatterNeRF: Seeing Through Fog with Physically-Based Inverse Neural Rendering [83.75284107397003]
We introduce ScatterNeRF, a neural rendering method which renders scenes and decomposes the fog-free background.
We propose a disentangled representation for the scattering volume and the scene objects, and learn the scene reconstruction with physics-inspired losses.
We validate our method by capturing multi-view In-the-Wild data and controlled captures in a large-scale fog chamber.
arXiv Detail & Related papers (2023-05-03T13:24:06Z) - GAUDI: A Neural Architect for Immersive 3D Scene Generation [67.97817314857917]
GAUDI is a generative model capable of capturing the distribution of complex and realistic 3D scenes that can be rendered immersively from a moving camera.
We show that GAUDI obtains state-of-the-art performance in the unconditional generative setting across multiple datasets.
arXiv Detail & Related papers (2022-07-27T19:10:32Z) - Urban Radiance Fields [77.43604458481637]
We perform 3D reconstruction and novel view synthesis from data captured by scanning platforms commonly deployed for world mapping in urban outdoor environments.
Our approach extends Neural Radiance Fields, which has been demonstrated to synthesize realistic novel images for small scenes in controlled settings.
Each of these three extensions provides significant performance improvements in experiments on Street View data.
arXiv Detail & Related papers (2021-11-29T15:58:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.