WaterHE-NeRF: Water-ray Tracing Neural Radiance Fields for Underwater
Scene Reconstruction
- URL: http://arxiv.org/abs/2312.06946v2
- Date: Fri, 19 Jan 2024 02:08:07 GMT
- Title: WaterHE-NeRF: Water-ray Tracing Neural Radiance Fields for Underwater
Scene Reconstruction
- Authors: Jingchun Zhou and Tianyu Liang and Dehuan Zhang and Zongxin He
- Abstract summary: We develop a new water-ray tracing field based on Retinex theory that precisely encodes color, density, and illuminance attenuation in three-dimensional space.
WaterHE-NeRF, through its illuminance attenuation mechanism, generates both degraded and clear multi-view images.
- Score: 6.036702530679703
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural Radiance Field (NeRF) technology demonstrates immense potential in
novel viewpoint synthesis tasks, due to its physics-based volumetric rendering
process, which is particularly promising in underwater scenes. Addressing the
limitations of existing underwater NeRF methods in handling light attenuation
caused by the water medium and the lack of real Ground Truth (GT) supervision,
this study proposes WaterHE-NeRF. We develop a new water-ray tracing field based on
Retinex theory that precisely encodes color, density, and illuminance
attenuation in three-dimensional space. WaterHE-NeRF, through its illuminance
attenuation mechanism, generates both degraded and clear multi-view images and
optimizes image restoration by combining reconstruction loss with Wasserstein
distance. Additionally, the use of histogram equalization (HE) as pseudo-GT
enhances the network's accuracy in preserving original details and color
distribution. Extensive experiments on real underwater datasets and synthetic
datasets validate the effectiveness of WaterHE-NeRF. Our code will be made
publicly available.
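As a rough illustration of how the objective described above could be assembled, the sketch below combines a pixel-wise reconstruction loss with a per-channel 1-D Wasserstein distance against a histogram-equalized pseudo-GT. The function names, the per-channel treatment, and the weighting factor `lam` are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def histogram_equalize(channel):
    # Per-channel histogram equalization (values assumed in [0, 1]),
    # used here as the pseudo ground truth.
    hist, bins = np.histogram(channel.ravel(), bins=256, range=(0.0, 1.0))
    cdf = hist.cumsum() / hist.sum()
    return np.interp(channel.ravel(), bins[:-1], cdf).reshape(channel.shape)

def waterhe_style_loss(degraded_render, clear_render, raw, lam=0.1):
    # Reconstruction term: the degraded rendering should match the raw view.
    recon = np.mean((degraded_render - raw) ** 2)
    # Distribution term: the clear rendering should match the HE pseudo-GT
    # in intensity distribution, measured with a 1-D Wasserstein distance.
    pseudo_gt = np.stack([histogram_equalize(raw[..., c]) for c in range(3)], axis=-1)
    w = np.mean([wasserstein_distance(clear_render[..., c].ravel(),
                                      pseudo_gt[..., c].ravel()) for c in range(3)])
    return recon + lam * w
```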
Related papers
- Advanced Underwater Image Quality Enhancement via Hybrid Super-Resolution Convolutional Neural Networks and Multi-Scale Retinex-Based Defogging Techniques [0.0]
The research conducts extensive experiments on real-world underwater datasets to further illustrate the efficacy of the suggested approach.
In real-time underwater applications like marine exploration, underwater robotics, and autonomous underwater vehicles, the combination of deep learning and conventional image processing techniques offers a computationally efficient framework with superior results.
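Multi-scale Retinex, named in the title above, is the classical component of this hybrid pipeline; a minimal single-channel sketch is shown below. The scale values are common defaults, not the paper's settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(channel, sigmas=(15, 80, 250), eps=1e-6):
    # Classic MSR: average, over several Gaussian scales, of
    # log(image) - log(Gaussian-smoothed image). Apply per color channel.
    channel = channel.astype(np.float64) + eps
    out = np.zeros_like(channel)
    for s in sigmas:
        out += (np.log(channel) - np.log(gaussian_filter(channel, sigma=s) + eps)) / len(sigmas)
    return out
```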
arXiv Detail & Related papers (2024-10-18T08:40:26Z)
- Taming Latent Diffusion Model for Neural Radiance Field Inpainting [63.297262813285265]
Neural Radiance Field (NeRF) is a representation for 3D reconstruction from multi-view images.
We propose tempering the diffusion model's stochasticity with per-scene customization and mitigating the textural shift with masked training.
Our framework yields state-of-the-art NeRF inpainting results on various real-world scenes.
arXiv Detail & Related papers (2024-04-15T17:59:57Z)
- Physics-Inspired Synthesized Underwater Image Dataset [9.959844922120528]
PHISWID is a dataset tailored for enhancing underwater image processing through physics-inspired image synthesis.
Our results reveal that even a basic U-Net architecture, when trained with PHISWID, substantially outperforms existing methods in underwater image enhancement.
We intend to release PHISWID publicly, contributing a significant resource to the advancement of underwater imaging technology.
arXiv Detail & Related papers (2024-04-05T10:23:10Z)
- Ternary-Type Opacity and Hybrid Odometry for RGB NeRF-SLAM [58.736472371951955]
We introduce a ternary-type opacity (TT) model, which categorizes points on a ray intersecting a surface into three regions: before, on, and behind the surface.
This enables a more accurate rendering of depth, subsequently improving the performance of image warping techniques.
Our integrated approach of TT and HO achieves state-of-the-art performance on synthetic and real-world datasets.
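A toy interpretation of the ternary-type opacity idea, assuming the surface depth along a ray is known (e.g., from rendered depth), is sketched below; the tolerance band and the use of -1 to mark unsupervised samples are illustrative choices, not the paper's scheme.

```python
import numpy as np

def ternary_opacity_targets(t_samples, surface_depth, band=0.02):
    # Classify samples along a ray into the three regions named above and
    # return target opacities: 0 before the surface (free space), 1 on the
    # surface, and -1 behind it (occluded, left unsupervised here).
    targets = np.full_like(t_samples, -1.0)
    targets[t_samples < surface_depth - band] = 0.0
    targets[np.abs(t_samples - surface_depth) <= band] = 1.0
    return targets
```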
arXiv Detail & Related papers (2023-12-20T18:03:17Z)
- Aleth-NeRF: Illumination Adaptive NeRF with Concealing Field Assumption [65.96818069005145]
We introduce the concept of a "Concealing Field," which assigns transmittance values to the surrounding air to account for illumination effects.
In dark scenarios, we assume that object emissions maintain a standard lighting level but are attenuated as they traverse the air during the rendering process.
We present a comprehensive multi-view dataset captured under challenging illumination conditions for evaluation.
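One way to picture the Concealing Field is as an extra per-sample attenuation applied during standard NeRF alpha compositing; the sketch below is a guess at that mechanism (the `conceal` factors and their cumulative application are assumptions), not Aleth-NeRF's actual formulation.

```python
import numpy as np

def render_with_concealing(sigmas, colors, conceal, deltas):
    # Standard NeRF compositing for one ray, with a hypothetical per-sample
    # concealing factor in [0, 1] that dims radiance as it travels through
    # the air toward the camera.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    dimming = np.cumprod(np.concatenate([[1.0], conceal[:-1]]))
    weights = trans * alphas * dimming
    return (weights[:, None] * colors).sum(axis=0)
```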
arXiv Detail & Related papers (2023-12-14T16:24:09Z)
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and underwater images have evident feature distribution gaps.
Our method, with higher speed and fewer parameters, still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z)
- WaterNeRF: Neural Radiance Fields for Underwater Scenes [6.161668246821327]
We advance the state of the art in neural radiance fields (NeRFs) to enable physics-informed dense depth estimation and color correction.
Our proposed method, WaterNeRF, estimates parameters of a physics-based model for underwater image formation.
We can produce novel views of degraded as well as corrected underwater images, along with dense depth of the scene.
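Given estimated per-channel attenuation and backscatter together with dense depth, color correction amounts to inverting a simplified underwater image formation model; a hedged sketch of that inversion (parameter names and the clipping are illustrative) follows.

```python
import numpy as np

def correct_color(I, depth, beta, B):
    # Invert I = J * exp(-beta * d) + B * (1 - exp(-beta * d)) to recover J,
    # with per-channel attenuation beta and backscatter B assumed estimated.
    t = np.exp(-beta[None, None, :] * depth[..., None])        # H x W x 3
    J = (I - B[None, None, :] * (1.0 - t)) / np.clip(t, 1e-3, None)
    return np.clip(J, 0.0, 1.0)
```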
arXiv Detail & Related papers (2022-09-27T00:53:26Z)
- NeRF-SR: High-Quality Neural Radiance Fields using Super-Sampling [82.99453001445478]
We present NeRF-SR, a solution for high-resolution (HR) novel view synthesis with mostly low-resolution (LR) inputs.
Our method is built upon Neural Radiance Fields (NeRF) that predicts per-point density and color with a multi-layer perceptron.
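The core NeRF building block referenced above (an MLP mapping an encoded 3D point to color and density) can be written very compactly; the layer sizes below are arbitrary stand-ins, not NeRF-SR's architecture.

```python
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=10):
    # Sinusoidal encoding of 3D points at frequencies 2^0 .. 2^(n_freqs-1).
    feats = [x]
    for i in range(n_freqs):
        feats += [torch.sin((2.0 ** i) * x), torch.cos((2.0 ** i) * x)]
    return torch.cat(feats, dim=-1)

# Tiny stand-in MLP: encoded point -> 3 color channels + 1 density value.
nerf_mlp = nn.Sequential(
    nn.Linear(3 + 3 * 2 * 10, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 4),
)

rgb_sigma = nerf_mlp(positional_encoding(torch.rand(1024, 3)))  # (1024, 4)
```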
arXiv Detail & Related papers (2021-12-03T07:33:47Z)
- HybrUR: A Hybrid Physical-Neural Solution for Unsupervised Underwater Image Restoration [18.690940762032568]
We propose a data- and physics-driven unsupervised architecture that learns underwater vision restoration from unpaired underwater-terrestrial images.
We employ the Jaffe-McGlamery degradation theory to design the generation models, and use neural networks to describe the process of underwater degradation.
Our experimental results show that the proposed method is able to perform high-quality restoration for unconstrained underwater images without any supervision.
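The Jaffe-McGlamery-style generator can be reduced, for intuition, to direct transmission plus backscatter; the sketch below drops the forward-scatter blur term and treats beta and B as per-channel constants, which is a simplification rather than the paper's full model.

```python
import numpy as np

def degrade(J, depth, beta, B):
    # Simplified forward degradation: attenuate the clean image J with depth
    # and add a backscatter (veiling) component per color channel.
    t = np.exp(-beta[None, None, :] * depth[..., None])
    return J * t + B[None, None, :] * (1.0 - t)
```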
arXiv Detail & Related papers (2021-07-06T15:00:30Z)
- Underwater Image Restoration via Contrastive Learning and a Real-world Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
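A common way to "maximize mutual information" between corresponding patches of the raw and restored images is an InfoNCE-style contrastive loss; the sketch below is that generic loss, not necessarily the exact formulation used in the paper.

```python
import torch
import torch.nn.functional as F

def info_nce(feat_restored, feat_raw, temperature=0.07):
    # Rows are features of matching patches; positives sit on the diagonal.
    a = F.normalize(feat_restored, dim=1)
    b = F.normalize(feat_raw, dim=1)
    logits = a @ b.t() / temperature
    labels = torch.arange(a.size(0))
    return F.cross_entropy(logits, labels)
```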
arXiv Detail & Related papers (2021-06-20T16:06:26Z)
- Wavelength-based Attributed Deep Neural Network for Underwater Image Restoration [9.378355457555319]
This paper shows that attributing the right receptive field size (context) based on the traversing range of each color channel may lead to a substantial performance gain.
As a second novelty, we have incorporated an attentive skip mechanism to adaptively refine the learned multi-contextual features.
The proposed framework, called Deep WaveNet, is optimized using the traditional pixel-wise and feature-based cost functions.
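The wavelength-dependent receptive-field idea can be illustrated by giving each color channel its own convolution with a different kernel size; the specific kernel sizes below are guesses for illustration, not Deep WaveNet's configuration.

```python
import torch
import torch.nn as nn

class WavelengthBranches(nn.Module):
    # Separate convolution (receptive field) per color channel, reflecting the
    # different underwater traversing ranges of the R, G and B wavelengths.
    def __init__(self):
        super().__init__()
        self.r = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.g = nn.Conv2d(1, 16, kernel_size=5, padding=2)
        self.b = nn.Conv2d(1, 16, kernel_size=7, padding=3)

    def forward(self, x):  # x: (N, 3, H, W) in RGB order
        r, g, b = x[:, 0:1], x[:, 1:2], x[:, 2:3]
        return torch.cat([self.r(r), self.g(g), self.b(b)], dim=1)
```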
arXiv Detail & Related papers (2021-06-15T06:47:51Z)