UW-3DGS: Underwater 3D Reconstruction with Physics-Aware Gaussian Splatting
- URL: http://arxiv.org/abs/2508.06169v1
- Date: Fri, 08 Aug 2025 09:36:32 GMT
- Title: UW-3DGS: Underwater 3D Reconstruction with Physics-Aware Gaussian Splatting
- Authors: Wenpeng Xing, Jie Chen, Zaifeng Yang, Changting Lin, Jianfeng Dong, Chaochao Chen, Xun Zhou, Meng Han
- Abstract summary: We introduce UW-3DGS, a novel framework adapting 3D Gaussian Splatting (3DGS) for robust underwater reconstruction. Key innovations include (1) a plug-and-play learnable underwater image formation module using voxel-based regression for spatially varying attenuation and backscatter, and (2) a Physics-Aware Uncertainty Pruning (PAUP) branch that removes noisy floating Gaussians via uncertainty scoring. Experiments on SeaThru-NeRF and UWBundle datasets show superior performance, achieving PSNR of 27.604, SSIM of 0.868, and LPIPS of 0.104 on SeaThru-NeRF, with a ~65% reduction in floating artifacts.
- Score: 31.813166209083303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underwater 3D scene reconstruction faces severe challenges from light absorption, scattering, and turbidity, which degrade geometry and color fidelity in traditional methods like Neural Radiance Fields (NeRF). While NeRF extensions such as SeaThru-NeRF incorporate physics-based models, their MLP reliance limits efficiency and spatial resolution in hazy environments. We introduce UW-3DGS, a novel framework adapting 3D Gaussian Splatting (3DGS) for robust underwater reconstruction. Key innovations include: (1) a plug-and-play learnable underwater image formation module using voxel-based regression for spatially varying attenuation and backscatter; and (2) a Physics-Aware Uncertainty Pruning (PAUP) branch that adaptively removes noisy floating Gaussians via uncertainty scoring, ensuring artifact-free geometry. The pipeline operates in training and rendering stages. During training, noisy Gaussians are optimized end-to-end with underwater parameters, guided by PAUP pruning and scattering modeling. In rendering, refined Gaussians produce clean Unattenuated Radiance Images (URIs) free from media effects, while learned physics enable realistic Underwater Images (UWIs) with accurate light transport. Experiments on SeaThru-NeRF and UWBundle datasets show superior performance, achieving PSNR of 27.604, SSIM of 0.868, and LPIPS of 0.104 on SeaThru-NeRF, with ~65% reduction in floating artifacts.
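The learnable image formation module in innovation (1) follows the standard underwater attenuation-plus-backscatter model used by SeaThru-style methods: the clean Unattenuated Radiance Image (URI) is dimmed exponentially with range while veiling light accumulates. A minimal NumPy sketch of that forward model (all function names, parameter values, and coefficients below are illustrative assumptions, not the paper's actual voxel-based module):

```python
import numpy as np

def underwater_image(uri, depth, beta_a, beta_b, b_inf):
    """Apply a simple attenuation + backscatter model to a clean
    Unattenuated Radiance Image (URI), producing an Underwater Image (UWI).

    uri    : (H, W, 3) clean radiance image in [0, 1]
    depth  : (H, W) per-pixel range from the camera, in meters
    beta_a : (3,) per-channel attenuation coefficient (1/m)
    beta_b : (3,) per-channel backscatter coefficient (1/m)
    b_inf  : (3,) veiling light, i.e. backscatter at infinite range
    """
    z = depth[..., None]                        # (H, W, 1) for broadcasting
    direct = uri * np.exp(-beta_a * z)          # exponentially attenuated signal
    backscatter = b_inf * (1.0 - np.exp(-beta_b * z))  # accumulated veiling light
    return direct + backscatter

# Toy example: a mid-gray image viewed through 5 m of water.
uri = np.full((4, 4, 3), 0.5)
depth = np.full((4, 4), 5.0)
beta_a = np.array([0.45, 0.12, 0.08])  # red attenuates fastest underwater
beta_b = np.array([0.30, 0.10, 0.07])
b_inf = np.array([0.05, 0.25, 0.35])   # blue-green veiling light
uwi = underwater_image(uri, depth, beta_a, beta_b, b_inf)
```

With these illustrative coefficients the red channel of the output is much darker than the blue channel, reproducing the familiar blue-green cast; the paper's module learns spatially varying versions of these parameters via voxel-based regression rather than using scene-wide constants.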
Related papers
- Enhancing Underwater Light Field Images via Global Geometry-aware Diffusion Process [93.00033672476206]
GeoDiff-LF is a novel diffusion-based framework built upon SD-Turbo to enhance underwater 4-D LF imaging. By integrating diffusion priors and LF geometry, GeoDiff-LF effectively mitigates color distortion in underwater scenes.
arXiv Detail & Related papers (2026-01-29T02:27:22Z) - WaterClear-GS: Optical-Aware Gaussian Splatting for Underwater Reconstruction and Restoration [11.520966034974697]
We introduce WaterClear-GS, the first pure 3DGS-based framework that integrates underwater optical properties into Gaussian primitives. Our method employs a dual-branch optimization strategy to ensure underwater photometric consistency while naturally recovering water-free appearances. Experiments on standard benchmarks and our newly collected dataset demonstrate that WaterClear-GS achieves outstanding performance on both novel view synthesis (NVS) and underwater image restoration tasks.
arXiv Detail & Related papers (2026-01-27T16:14:34Z) - 3D-UIR: 3D Gaussian for Underwater 3D Scene Reconstruction via Physics Based Appearance-Medium Decoupling [30.985414238960466]
3D Gaussian Splatting (3DGS) offers real-time rendering capabilities, but struggles with underwater inhomogeneous environments. We propose a physics-based framework that disentangles object appearance from water medium effects. Our approach achieves both high-quality novel view synthesis and physically accurate scene restoration.
arXiv Detail & Related papers (2025-05-27T14:19:30Z) - TUGS: Physics-based Compact Representation of Underwater Scenes by Tensorized Gaussian [6.819210285113731]
Tensorized Underwater Gaussian Splatting (TUGS) effectively addresses the challenge of modeling the complex interactions between objects and water media. Compared to other NeRF-based and GS-based methods designed for underwater scenes, TUGS renders high-quality underwater images with faster rendering speeds and less memory usage.
arXiv Detail & Related papers (2025-05-12T07:09:35Z) - AquaGS: Fast Underwater Scene Reconstruction with SfM-Free Gaussian Splatting [4.0317256978754505]
We introduce AquaGS, an SfM-free underwater scene reconstruction model based on the SeaThru algorithm. Our model can complete high-precision reconstruction in 30 seconds with only 3 image inputs.
arXiv Detail & Related papers (2025-05-03T12:05:57Z) - EVolSplat: Efficient Volume-based Gaussian Splatting for Urban View Synthesis [61.1662426227688]
Existing NeRF and 3DGS-based methods show promising results in achieving photorealistic renderings but require slow, per-scene optimization. We introduce EVolSplat, an efficient 3D Gaussian Splatting model for urban scenes that works in a feed-forward manner.
arXiv Detail & Related papers (2025-03-26T02:47:27Z) - UW-GS: Distractor-Aware 3D Gaussian Splatting for Enhanced Underwater Scene Reconstruction [15.624536266709633]
3D Gaussian splatting (3DGS) offers the capability to achieve real-time, high-quality 3D scene rendering. However, 3DGS assumes that the scene is in a clear medium and struggles to generate satisfactory representations in underwater scenes. We introduce a novel Gaussian Splatting-based method, UW-GS, designed specifically for underwater applications.
arXiv Detail & Related papers (2024-10-02T13:08:56Z) - StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z) - Dynamic Mesh-Aware Radiance Fields [75.59025151369308]
This paper designs a two-way coupling between mesh and NeRF during rendering and simulation.
We show that a hybrid system approach outperforms alternatives in visual realism for mesh insertion.
arXiv Detail & Related papers (2023-09-08T20:18:18Z) - DehazeNeRF: Multiple Image Haze Removal and 3D Shape Reconstruction using Neural Radiance Fields [56.30120727729177]
We introduce DehazeNeRF as a framework that robustly operates in hazy conditions.
We demonstrate successful multi-view haze removal, novel view synthesis, and 3D shape reconstruction where existing approaches fail.
arXiv Detail & Related papers (2023-03-20T18:03:32Z) - NerfDiff: Single-image View Synthesis with NeRF-guided Distillation from 3D-aware Diffusion [107.67277084886929]
Novel view synthesis from a single image requires inferring occluded regions of objects and scenes whilst simultaneously maintaining semantic and physical consistency with the input.
We propose NerfDiff, which addresses this issue by distilling the knowledge of a 3D-aware conditional diffusion model (CDM) into NeRF through synthesizing and refining a set of virtual views at test time.
We further propose a novel NeRF-guided distillation algorithm that simultaneously generates 3D consistent virtual views from the CDM samples, and finetunes the NeRF based on the improved virtual views.
arXiv Detail & Related papers (2023-02-20T17:12:00Z) - AligNeRF: High-Fidelity Neural Radiance Fields via Alignment-Aware Training [100.33713282611448]
We conduct the first pilot study on training NeRF with high-resolution data.
We propose the corresponding solutions, including marrying the multilayer perceptron with convolutional layers.
Our approach is nearly cost-free, introducing no obvious training or testing overhead.
arXiv Detail & Related papers (2022-11-17T17:22:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.