Underwater Light Field Retention: Neural Rendering for Underwater Imaging
- URL: http://arxiv.org/abs/2203.11006v1
- Date: Mon, 21 Mar 2022 14:22:05 GMT
- Title: Underwater Light Field Retention: Neural Rendering for Underwater Imaging
- Authors: Tian Ye and Sixiang Chen and Yun Liu and Erkang Chen and Yi Ye and
Yuche Li
- Abstract summary: Underwater Image Rendering aims to generate a true-to-life underwater image from a given clean one.
We propose a neural rendering method for underwater imaging, dubbed UWNR (Underwater Neural Rendering).
- Score: 6.22867695581195
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underwater Image Rendering aims to generate a true-to-life underwater image
from a given clean one, which could be applied to various practical
applications such as underwater image enhancement, camera filter, and virtual
gaming. We explore two less-touched but challenging problems in underwater
image rendering: i) how can a single neural network render diverse underwater
scenes? and ii) how can underwater light fields be learned adaptively from
natural exemplars, i.e., realistic underwater images? To this end, we propose a
neural rendering method for underwater imaging, dubbed UWNR (Underwater Neural
Rendering). Specifically, UWNR is a data-driven neural network that implicitly
learns the natural degradation model from authentic underwater images, avoiding
the erroneous biases introduced by hand-crafted imaging models.
Compared with existing underwater image generation methods, UWNR utilizes the
natural light field to simulate the main characteristics of underwater scenes.
It can therefore synthesize a wide variety of underwater images from a single
clean image, guided by various realistic underwater exemplars.
Extensive experiments demonstrate that our approach achieves better visual
effects and quantitative metrics over previous methods. Moreover, we adopt UWNR
to build an open Large Neural Rendering Underwater Dataset containing various
types of water quality, dubbed LNRUD.
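The hand-crafted imaging models that UWNR deliberately avoids are typically variants of the revised atmospheric scattering model, in which a clean image J is attenuated by a per-channel transmission map and mixed with a veiling background light. The sketch below illustrates that baseline degradation model only; the attenuation coefficients and background colour are illustrative assumptions, not values from the paper:

```python
import numpy as np

def render_underwater(clean, depth, beta=(0.40, 0.12, 0.08),
                      background=(0.05, 0.35, 0.45)):
    """Apply a simple hand-crafted underwater degradation model.

    clean:      H x W x 3 float RGB image in [0, 1]
    depth:      H x W scene depth in metres
    beta:       per-channel attenuation coefficients (red attenuates fastest;
                illustrative values)
    background: per-channel veiling light, i.e. the ambient water colour
    """
    beta = np.asarray(beta, dtype=np.float64)
    background = np.asarray(background, dtype=np.float64)
    # Transmission t_c(x) = exp(-beta_c * d(x)), one map per colour channel.
    t = np.exp(-depth[..., None] * beta[None, None, :])
    # I_c(x) = J_c(x) * t_c(x) + B_c * (1 - t_c(x)):
    # attenuated direct signal plus depth-dependent backscatter.
    return clean * t + background * (1.0 - t)
```

Because beta and background are fixed by hand, such a model bakes in a single water type; UWNR instead learns the light field from real exemplars, which is what lets it render many water types from one network.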
Related papers
- Aquatic-GS: A Hybrid 3D Representation for Underwater Scenes [6.549998173302729]
We propose Aquatic-GS, a hybrid 3D representation approach for underwater scenes that effectively represents both the objects and the water medium.
Specifically, we construct a Neural Water Field (NWF) to implicitly model the water parameters, while extending the latest 3D Gaussian Splatting (3DGS) to model the objects explicitly.
Both components are integrated through a physics-based underwater image formation model to represent complex underwater scenes.
arXiv Detail & Related papers (2024-10-31T22:24:56Z)
- UW-SDF: Exploiting Hybrid Geometric Priors for Neural SDF Reconstruction from Underwater Multi-view Monocular Images [63.32490897641344]
We propose a framework for reconstructing target objects from multi-view underwater images based on neural SDF.
We introduce hybrid geometric priors to optimize the reconstruction process, markedly enhancing the quality and efficiency of neural SDF reconstruction.
arXiv Detail & Related papers (2024-10-10T16:33:56Z)
- Enhancing Underwater Imaging with 4-D Light Fields: Dataset and Method [77.80712860663886]
4-D light fields (LFs) can enhance underwater imaging, which is plagued by light absorption, scattering, and other challenges.
We propose a progressive framework for underwater 4-D LF image enhancement and depth estimation.
We construct the first 4-D LF-based underwater image dataset for quantitative evaluation and supervised training of learning-based methods.
arXiv Detail & Related papers (2024-08-30T15:06:45Z)
- Diving into Underwater: Segment Anything Model Guided Underwater Salient Instance Segmentation and A Large-scale Dataset [60.14089302022989]
Underwater vision tasks often suffer from low segmentation accuracy due to the complex underwater circumstances.
We construct the first large-scale underwater salient instance segmentation dataset (USIS10K)
We propose an Underwater Salient Instance architecture based on Segment Anything Model (USIS-SAM) specifically for the underwater domain.
arXiv Detail & Related papers (2024-06-10T06:17:33Z)
- Physics-Inspired Synthesized Underwater Image Dataset [9.959844922120528]
PHISWID is a dataset tailored for enhancing underwater image processing through physics-inspired image synthesis.
Our results reveal that even a basic U-Net architecture, when trained with PHISWID, substantially outperforms existing methods in underwater image enhancement.
We intend to release PHISWID publicly, contributing a significant resource to the advancement of underwater imaging technology.
arXiv Detail & Related papers (2024-04-05T10:23:10Z)
- Physics Informed and Data Driven Simulation of Underwater Images via Residual Learning [5.095097384893417]
In general, underwater images suffer from color distortion and low contrast, because light is attenuated and backscattered as it propagates through water.
Existing simple degradation models (similar to atmospheric image "hazing" effects) are not sufficient to properly represent underwater image degradation.
We propose a deep learning-based architecture to automatically simulate the underwater effects.
arXiv Detail & Related papers (2024-02-07T21:53:28Z)
- Atlantis: Enabling Underwater Depth Estimation with Stable Diffusion [30.122666238416716]
We propose a novel pipeline for generating underwater images using accurate terrestrial depth data.
This approach facilitates the training of supervised models for underwater depth estimation.
We introduce a unique Depth2Underwater ControlNet, trained on specially prepared Underwater, Depth, Text data triplets.
arXiv Detail & Related papers (2023-12-19T08:56:33Z)
- A deep learning approach for marine snow synthesis and removal [55.86191108738564]
This paper proposes a novel method to reduce the marine snow interference using deep learning techniques.
We first synthesize realistic marine snow samples by training a Generative Adversarial Network (GAN) model.
We then train a U-Net model to perform marine snow removal as an image to image translation task.
arXiv Detail & Related papers (2023-11-27T07:19:41Z)
- Medium Transmission Map Matters for Learning to Restore Real-World Underwater Images [3.0980025155565376]
We introduce the medium transmission map as guidance to assist in image enhancement.
The proposed method achieves 22.6 dB on the challenging Test-R90 benchmark while running 30 times faster than existing models.
arXiv Detail & Related papers (2022-03-17T16:13:52Z)
- Underwater Image Restoration via Contrastive Learning and a Real-world Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on an unsupervised image-to-image translation framework.
Our method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z)
- Generating Physically-Consistent Satellite Imagery for Climate Visualizations [53.61991820941501]
We train a generative adversarial network to create synthetic satellite imagery of future flooding and reforestation events.
A pure deep learning-based model can generate flood visualizations but hallucinates floods at locations that were not susceptible to flooding.
We publish our code and dataset for segmentation guided image-to-image translation in Earth observation.
arXiv Detail & Related papers (2021-04-10T15:00:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.