DerainNeRF: 3D Scene Estimation with Adhesive Waterdrop Removal
- URL: http://arxiv.org/abs/2403.20013v1
- Date: Fri, 29 Mar 2024 06:58:57 GMT
- Title: DerainNeRF: 3D Scene Estimation with Adhesive Waterdrop Removal
- Authors: Yunhao Li, Jing Wu, Lingzhe Zhao, Peidong Liu
- Abstract summary: We propose a method to reconstruct a clear 3D scene implicitly from multi-view images degraded by waterdrops.
Our method exploits an attention network to predict the locations of waterdrops and then trains a Neural Radiance Field to recover the 3D scene implicitly.
By leveraging the strong scene representation capabilities of NeRF, our method can render high-quality novel-view images with waterdrops removed.
- Score: 12.099886168325012
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When capturing images through glass during rainy or snowy weather, the resulting images often contain waterdrops adhered to the glass surface, and these waterdrops significantly degrade image quality and the performance of many computer vision algorithms. To tackle these limitations, we propose a method to reconstruct a clear 3D scene implicitly from multi-view images degraded by waterdrops. Our method exploits an attention network to predict the locations of waterdrops and then trains a Neural Radiance Field to recover the 3D scene implicitly. By leveraging the strong scene representation capabilities of NeRF, our method can render high-quality novel-view images with waterdrops removed. Extensive experimental results on both synthetic and real datasets show that our method generates clear 3D scenes and outperforms existing state-of-the-art (SOTA) adhesive waterdrop removal methods.
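The pipeline is easy to picture: an attention network flags waterdrop pixels, and those pixels are excluded from NeRF's photometric loss so the field is supervised only by clean observations. Below is a minimal sketch of such a masked loss, assuming hypothetical `attention_net` and `nerf` modules; it illustrates the idea rather than the authors' actual implementation.
```python
import torch

def masked_nerf_loss(nerf, attention_net, rays_o, rays_d, target_rgb, image):
    """Photometric loss that ignores pixels covered by detected waterdrops.

    Sketch only: `nerf` and `attention_net` are hypothetical modules
    standing in for the paper's components.
    """
    # Predict a per-pixel waterdrop probability map for the input view.
    drop_prob = attention_net(image)                  # (H, W), values in [0, 1]
    # Keep only pixels deemed drop-free (0.5 threshold is illustrative).
    clean_mask = (drop_prob < 0.5).float().view(-1)   # 1 = pixel usable for supervision

    # Render ray colors with the radiance field.
    pred_rgb = nerf(rays_o, rays_d)                   # (H*W, 3)

    # Supervise only on rays that do not hit a waterdrop,
    # so the recovered scene is free of drop artifacts.
    per_ray = ((pred_rgb - target_rgb.view(-1, 3)) ** 2).sum(-1)
    return (per_ray * clean_mask).sum() / clean_mask.sum().clamp(min=1.0)
```
Because NeRF aggregates evidence across viewpoints, a region masked out in one image is usually observed cleanly from another, which is what lets novel views be rendered with the drops removed.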
Related papers
- SeaSplat: Representing Underwater Scenes with 3D Gaussian Splatting and a Physically Grounded Image Formation Model [11.57677379828992]
We introduce SeaSplat, a method to enable real-time rendering of underwater scenes leveraging recent advances in 3D radiance fields.
We apply SeaSplat to real-world scenes from the SeaThru-NeRF dataset, collected by an underwater vehicle in the US Virgin Islands.
We show that modeling underwater image formation helps the method learn scene structure, yielding better depth maps, and that our additions preserve the significant computational advantages of the 3D Gaussian representation.
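The "physically grounded image formation model" in this line of underwater work typically follows the SeaThru formulation, where the camera sees attenuated scene radiance plus range-dependent backscatter. A hedged sketch of that standard model (nothing here is taken from SeaSplat's code):
```python
import numpy as np

def underwater_image_formation(J, z, beta_d, beta_b, B_inf):
    """SeaThru-style model: I = J * exp(-beta_d * z) + B_inf * (1 - exp(-beta_b * z)).

    J      : (H, W, 3) restored (in-air) scene radiance
    z      : (H, W)    per-pixel range to the scene
    beta_d : (3,)      per-channel direct attenuation coefficients
    beta_b : (3,)      per-channel backscatter coefficients
    B_inf  : (3,)      veiling light at infinite range
    Parameter semantics are the standard ones; values would be learned.
    """
    z = z[..., None]                                   # broadcast over color channels
    direct = J * np.exp(-beta_d * z)                   # attenuated scene signal
    backscatter = B_inf * (1.0 - np.exp(-beta_b * z))  # water-scattered veiling light
    return direct + backscatter
```
Composing such a model with colors rendered from the Gaussian representation means that fitting the observed images constrains both the water medium and the scene geometry, which is consistent with the reported improvement in depth maps.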
arXiv Detail & Related papers (2024-09-25T20:45:19Z)
- Denoising Diffusion via Image-Based Rendering [54.20828696348574]
We introduce the first diffusion model able to perform fast, detailed reconstruction and generation of real-world 3D scenes.
First, we introduce a new neural scene representation, IB-planes, that can efficiently and accurately represent large 3D scenes.
Second, we propose a denoising-diffusion framework to learn a prior over this novel 3D scene representation, using only 2D images.
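One way to picture learning a 3D prior from 2D images only: corrupt the input views with diffusion noise, lift them to the scene representation, render, and penalize the renders against the clean images. The sketch below is a heavily simplified illustration with hypothetical `encode_ib_planes` and `render_view` helpers, not the paper's training procedure.
```python
import torch

def diffusion_ibr_step(encode_ib_planes, render_view, images, poses, t, alphas_cumprod):
    """One simplified denoising step supervised purely in image space.

    `encode_ib_planes` and `render_view` are hypothetical stand-ins for
    the paper's scene encoder and differentiable renderer.
    """
    # Corrupt the input views according to the diffusion noise schedule.
    noise = torch.randn_like(images)
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    noisy = a_t.sqrt() * images + (1 - a_t).sqrt() * noise

    # Lift the noisy observations to a 3D scene representation (IB-planes).
    planes = encode_ib_planes(noisy, poses, t)

    # Render the representation back to each view and supervise with the
    # clean images, so the 3D prior is learned from 2D data only.
    renders = torch.stack([render_view(planes, p) for p in poses])
    return ((renders - images) ** 2).mean()
```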
arXiv Detail & Related papers (2024-02-05T19:00:45Z)
- Sat2Scene: 3D Urban Scene Generation from Satellite Images with Diffusion [77.34078223594686]
We propose a novel architecture for direct 3D scene generation by introducing diffusion models into 3D sparse representations and combining them with neural rendering techniques.
Specifically, our approach first generates texture colors at the point level for a given geometry using a 3D diffusion model, then transforms them into a scene representation in a feed-forward manner.
Experiments on two city-scale datasets show that our model generates photo-realistic street-view image sequences and cross-view urban scenes from satellite imagery.
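The first stage, diffusing RGB values over the points of a fixed geometry, can be illustrated with a standard DDIM-style sampling loop; `color_denoiser` is a hypothetical stand-in for the paper's 3D diffusion model.
```python
import torch

@torch.no_grad()
def sample_point_colors(color_denoiser, points, alphas_cumprod):
    """DDIM-style reverse diffusion over per-point RGB on fixed geometry.

    `color_denoiser(colors, points, t)` is a hypothetical noise-prediction
    network; the schedule `alphas_cumprod` is a 1-D tensor of cumulative
    alpha products, as in standard diffusion samplers.
    """
    colors = torch.randn(points.shape[0], 3)           # start from Gaussian noise
    steps = len(alphas_cumprod)
    for t in reversed(range(steps)):
        a_t = alphas_cumprod[t]
        eps = color_denoiser(colors, points, t)        # predicted noise at step t
        x0 = (colors - (1 - a_t).sqrt() * eps) / a_t.sqrt()  # estimate clean colors
        a_prev = alphas_cumprod[t - 1] if t > 0 else torch.tensor(1.0)
        colors = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps  # step to t-1
    return colors.clamp(-1.0, 1.0)
```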
arXiv Detail & Related papers (2024-01-19T16:15:37Z)
- WaterHE-NeRF: Water-ray Tracing Neural Radiance Fields for Underwater Scene Reconstruction [6.036702530679703]
We develop a new water-ray tracing field based on Retinex theory that precisely encodes color, density, and illuminance attenuation in three-dimensional space.
WaterHE-NeRF, through its illuminance attenuation mechanism, generates both degraded and clear multi-view images.
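The Retinex-inspired factorization means the field explains each underwater pixel as a clean color multiplied by an illuminance attenuation, so both a degraded and a clean image can be rendered from the same model. A minimal sketch of such a loss, with all names and the regularization weight being our assumptions:
```python
import torch

def retinex_decomposition_loss(clean_rgb, attenuation, captured_rgb):
    """Retinex-style decomposition: degraded = clean * attenuation.

    `clean_rgb` and `attenuation` would come from two heads of the
    radiance field (names hypothetical); `captured_rgb` is the real image.
    """
    # Re-synthesize the degraded underwater appearance.
    degraded_rgb = clean_rgb * attenuation
    # Match the captured views with the degraded rendering.
    recon = ((degraded_rgb - captured_rgb) ** 2).mean()
    # Encourage a physically plausible attenuation in [0, 1]
    # (0.1 is an illustrative weight, not from the paper).
    bound = (attenuation.clamp(0, 1) - attenuation).abs().mean()
    return recon + 0.1 * bound
```
Rendering `clean_rgb` alone then yields the restored scene, while the composed product explains the training images.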
arXiv Detail & Related papers (2023-12-12T02:55:14Z)
- Video Waterdrop Removal via Spatio-Temporal Fusion in Driving Scenes [53.16726447796844]
The waterdrops on windshields during driving can cause severe visual obstructions, which may lead to car accidents.
We propose an attention-based framework that fuses the representations from multiple frames to restore visual information occluded by waterdrops.
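The underlying intuition is that a pixel occluded by a drop in one frame is usually visible in adjacent frames, so attention can borrow that evidence across time. A toy sketch using PyTorch's standard multi-head attention (the architecture details are our assumptions, not the paper's):
```python
import torch
import torch.nn as nn

class TemporalFusion(nn.Module):
    """Fuse features from neighboring frames via cross-frame attention.

    A toy stand-in for the paper's spatio-temporal fusion module.
    """
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, center, neighbors):
        # center:    (B, N, C) features of the frame being restored
        # neighbors: (B, T*N, C) features pooled from adjacent frames
        # Occluded positions in `center` attend to clean evidence at
        # corresponding positions in the neighboring frames.
        fused, _ = self.attn(query=center, key=neighbors, value=neighbors)
        return center + fused                          # residual fusion


# Toy usage with random features: one frame, five neighbors' worth of tokens.
fusion = TemporalFusion()
out = fusion(torch.randn(2, 100, 64), torch.randn(2, 500, 64))
```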
arXiv Detail & Related papers (2023-02-12T13:47:26Z)
- Water Simulation and Rendering from a Still Photograph [20.631819299595527]
We propose an approach to simulate and render realistic water animation from a single still input photograph.
Our approach creates realistic results with no user intervention for a wide variety of natural scenes.
arXiv Detail & Related papers (2022-10-05T20:47:44Z)
- PDRF: Progressively Deblurring Radiance Field for Fast and Robust Scene Reconstruction from Blurry Images [75.87721926918874]
We present Progressively Deblurring Radiance Field (PDRF), a novel approach to efficiently reconstructing high-quality radiance fields from blurry images.
We show that PDRF is 15x faster than previous state-of-the-art scene reconstruction methods.
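Deblurring radiance fields of this kind generally model the blur inside the forward rendering: several perturbed rays are rendered per pixel and averaged, so the composite matches the blurry photo while the underlying field stays sharp. A hedged sketch (the jitter model below is our stand-in for PDRF's learned blur modeling):
```python
import torch

def render_blurry_pixel(nerf, ray_o, ray_d, n_samples=5, jitter=1e-3):
    """Simulate blur by averaging renders of jittered rays.

    Sketch of the general "model the blur, supervise on blurry pixels"
    idea; `nerf` is a hypothetical field, and the Gaussian jitter is our
    simplification, not PDRF's learned blur kernels.
    """
    preds = []
    for _ in range(n_samples):
        d = ray_d + jitter * torch.randn_like(ray_d)   # perturb the ray direction
        preds.append(nerf(ray_o, d / d.norm()))        # render the sharp field
    # The average matches the blurry observation, while `nerf` itself
    # remains a sharp scene that can later be rendered without blur.
    return torch.stack(preds).mean(0)
```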
arXiv Detail & Related papers (2022-08-17T03:42:29Z)
- DeepFaceFlow: In-the-wild Dense 3D Facial Motion Estimation [56.56575063461169]
DeepFaceFlow is a robust, fast, and highly accurate framework for estimating 3D non-rigid facial flow.
Our framework was trained and tested on two very large-scale facial video datasets.
Given registered pairs of images, our framework generates 3D flow maps at 60 fps.
arXiv Detail & Related papers (2020-05-14T23:56:48Z)
- 3D Photography using Context-aware Layered Depth Inpainting [50.66235795163143]
We propose a method for converting a single RGB-D input image into a 3D photo.
A learning-based inpainting model synthesizes new local color-and-depth content into the occluded region.
The resulting 3D photos can be efficiently rendered with motion parallax.
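At a high level, such pipelines locate depth discontinuities in the RGB-D input, since those are the edges where a moving viewpoint exposes hidden content, and then inpaint color and depth behind them. A schematic sketch with a hypothetical `inpaint_model`:
```python
import numpy as np

def build_3d_photo_layers(rgb, depth, inpaint_model, threshold=0.05):
    """Inpaint occluded regions behind depth discontinuities.

    `inpaint_model(rgb, depth, mask)` is a hypothetical learned model
    returning (new_rgb, new_depth) for the masked region; the threshold
    value is illustrative.
    """
    # Mark strong depth edges: these are the boundaries where novel
    # viewpoints will reveal previously occluded content.
    dz = np.abs(np.gradient(depth)[0]) + np.abs(np.gradient(depth)[1])
    occlusion_mask = dz > threshold

    # Synthesize color-and-depth content for the hidden region; the
    # result is stored as an extra layer behind the foreground.
    bg_rgb, bg_depth = inpaint_model(rgb, depth, occlusion_mask)
    return [(rgb, depth), (bg_rgb, bg_depth)]          # front and back layers
```
Rendering the layered result with slight camera motion then produces the motion parallax the summary describes.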
arXiv Detail & Related papers (2020-04-09T17:59:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information presented and is not responsible for any consequences arising from its use.