Deep Sea Robotic Imaging Simulator
- URL: http://arxiv.org/abs/2006.15398v3
- Date: Tue, 23 Feb 2021 12:07:18 GMT
- Title: Deep Sea Robotic Imaging Simulator
- Authors: Yifan Song, David Nakath, Mengkun She, Furkan Elibol and Kevin Köser
- Abstract summary: The largest portion of the ocean - the deep sea - still remains mostly unexplored.
Deep sea images are very different from those taken in shallow waters, and this area has not received much attention from the community.
This paper presents a physical model-based image simulation solution, which uses an in-air texture and depth information as inputs.
- Score: 6.2122699483618
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays underwater vision systems are being widely applied in ocean
research. However, the largest portion of the ocean - the deep sea - still
remains mostly unexplored. Only relatively few image sets have been taken from
the deep sea due to the physical limitations caused by technical challenges and
enormous costs. Deep sea images are very different from those taken in
shallow waters, and this area has not received much attention from the community. The
shortage of deep sea images and the corresponding ground truth data for
evaluation and training is becoming a bottleneck for the development of
underwater computer vision methods. Thus, this paper presents a physical
model-based image simulation solution, which uses an in-air texture and depth
information as inputs, to generate underwater image sequences taken by robots
in deep ocean scenarios. In contrast to shallow water conditions, artificial
illumination plays a vital role in deep sea image formation, as it strongly
affects the scene appearance. Our radiometric image formation model considers
both attenuation and scattering effects with co-moving spotlights in the dark.
By detailed analysis and evaluation of the underwater image formation model, we
propose a 3D lookup table structure in combination with a novel rendering
strategy to improve simulation performance. This enables us to integrate an
interactive deep sea robotic vision simulation in the Unmanned Underwater
Vehicles simulator. To inspire further deep sea vision research by the
community, we will release the source code of our deep sea image converter to
the public.
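To make the image formation model described above more concrete, the sketch below renders a deep sea view from an in-air texture and a depth map under a single co-moving spotlight. It is a minimal illustration only: the attenuation coefficients, pinhole intrinsics, light position, and the closed-form backscatter term are assumptions made for this sketch, not the authors' released converter or its parameters.

```python
# Minimal sketch of deep sea image formation with one co-moving spotlight,
# loosely following the attenuation + scattering structure described in the
# abstract. All constants and the geometry below are illustrative assumptions.
import numpy as np

# Per-channel attenuation coefficients [1/m] and backscatter gains (assumed values).
ATTEN = np.array([0.37, 0.044, 0.035])        # R, G, B attenuation
SCATTER_GAIN = np.array([0.02, 0.03, 0.05])   # crude per-channel backscatter strength


def backproject(depth, fx, fy, cx, cy):
    """Lift a metric depth map (H, W) to camera-frame 3D points (H, W, 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth], axis=-1)


def render_deep_sea(texture, depth, light_pos, fx=800.0, fy=800.0):
    """texture: in-air RGB in [0, 1]; depth: metric depth map;
    light_pos: spotlight position in the camera frame (co-moving with the robot)."""
    h, w, _ = texture.shape
    pts = backproject(depth, fx, fy, cx=w / 2.0, cy=h / 2.0)

    d_cam = np.linalg.norm(pts, axis=-1)                # camera-to-point range
    d_light = np.linalg.norm(pts - light_pos, axis=-1)  # light-to-point range

    # Direct signal: inverse-square falloff of the spotlight plus exponential
    # attenuation along both the illumination and the viewing paths.
    falloff = 1.0 / np.maximum(d_light, 1e-3) ** 2
    optical_path = (d_light + d_cam)[..., None] * ATTEN
    direct = texture * falloff[..., None] * np.exp(-optical_path)

    # Backscatter: light scattered back into the camera along the viewing ray.
    # A crude closed-form saturation term stands in here for the per-pixel
    # scattering integral that the paper accelerates with a 3D lookup table.
    backscatter = SCATTER_GAIN * (1.0 - np.exp(-d_cam[..., None] * ATTEN))

    return np.clip(direct + backscatter, 0.0, 1.0)


# Example with synthetic inputs: a flat seafloor texture 2.5 m below the camera
# and a spotlight mounted 0.3 m to the side of the lens.
if __name__ == "__main__":
    tex = np.random.rand(480, 640, 3)
    depth = np.full((480, 640), 2.5)
    img = render_deep_sea(tex, depth, light_pos=np.array([0.3, 0.0, 0.0]))
    print(img.shape, img.min(), img.max())
```

A plausible reading of the abstract's 3D lookup table idea is that it targets exactly the part this sketch glosses over: backscatter varies smoothly with pixel position and scene depth, so it can be precomputed on a coarse (x, y, depth) grid and trilinearly interpolated at render time instead of integrated per pixel, which is what makes interactive rendering inside the UUV simulator feasible.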
Related papers
- UW-SDF: Exploiting Hybrid Geometric Priors for Neural SDF Reconstruction from Underwater Multi-view Monocular Images [63.32490897641344]
We propose a framework for reconstructing target objects from multi-view underwater images based on neural SDF.
We introduce hybrid geometric priors to optimize the reconstruction process, markedly enhancing the quality and efficiency of neural SDF reconstruction.
arXiv Detail & Related papers (2024-10-10T16:33:56Z)
- Enhancing Underwater Imaging with 4-D Light Fields: Dataset and Method [77.80712860663886]
4-D light fields (LFs) enhance underwater imaging plagued by light absorption, scattering, and other challenges.
We propose a progressive framework for underwater 4-D LF image enhancement and depth estimation.
We construct the first 4-D LF-based underwater image dataset for quantitative evaluation and supervised training of learning-based methods.
arXiv Detail & Related papers (2024-08-30T15:06:45Z)
- WaterMono: Teacher-Guided Anomaly Masking and Enhancement Boosting for Robust Underwater Self-Supervised Monocular Depth Estimation [4.909989222186828]
We propose WaterMono, a novel framework for depth estimation and image enhancement.
It incorporates the following key measures: (1) We present a Teacher-Guided Anomaly Mask to identify dynamic regions within the images; (2) We employ depth information combined with the Underwater Image Formation Model to generate enhanced images, which in turn contribute to the depth estimation task; and (3) We utilize a rotated distillation strategy to enhance the model's rotational robustness.
arXiv Detail & Related papers (2024-06-19T08:49:45Z)
- Physics-Inspired Synthesized Underwater Image Dataset [9.959844922120528]
PHISWID is a dataset tailored for enhancing underwater image processing through physics-inspired image synthesis.
Our results reveal that even a basic U-Net architecture, when trained with PHISWID, substantially outperforms existing methods in underwater image enhancement.
We intend to release PHISWID publicly, contributing a significant resource to the advancement of underwater imaging technology.
arXiv Detail & Related papers (2024-04-05T10:23:10Z)
- Atlantis: Enabling Underwater Depth Estimation with Stable Diffusion [30.122666238416716]
We propose a novel pipeline for generating underwater images using accurate terrestrial depth data.
This approach facilitates the training of supervised models for underwater depth estimation.
We introduce a unique Depth2Underwater ControlNet, trained on specially prepared Underwater, Depth, Text data triplets.
arXiv Detail & Related papers (2023-12-19T08:56:33Z)
- A deep learning approach for marine snow synthesis and removal [55.86191108738564]
This paper proposes a novel method to reduce the marine snow interference using deep learning techniques.
We first synthesize realistic marine snow samples by training a Generative Adversarial Network (GAN) model.
We then train a U-Net model to perform marine snow removal as an image-to-image translation task.
arXiv Detail & Related papers (2023-11-27T07:19:41Z)
- Seafloor-Invariant Caustics Removal from Underwater Imagery [0.0]
Caustics are complex physical phenomena resulting from the projection of light rays refracted by the wavy water surface.
In this work, we propose a novel method for correcting the effects of caustics on shallow underwater imagery.
In particular, the developed method employs deep learning architectures in order to classify image pixels into "non-caustics" and "caustics".
arXiv Detail & Related papers (2022-12-20T11:11:02Z)
- WaterNeRF: Neural Radiance Fields for Underwater Scenes [6.161668246821327]
We advance state-of-the-art in neural radiance fields (NeRFs) to enable physics-informed dense depth estimation and color correction.
Our proposed method, WaterNeRF, estimates parameters of a physics-based model for underwater image formation.
We can produce novel views of degraded as well as corrected underwater images, along with dense depth of the scene.
arXiv Detail & Related papers (2022-09-27T00:53:26Z)
- Underwater Image Restoration via Contrastive Learning and a Real-world Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on an unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z)
- Generating Physically-Consistent Satellite Imagery for Climate Visualizations [53.61991820941501]
We train a generative adversarial network to create synthetic satellite imagery of future flooding and reforestation events.
A pure deep learning-based model can generate flood visualizations but hallucinates floods at locations that were not susceptible to flooding.
We publish our code and dataset for segmentation guided image-to-image translation in Earth observation.
arXiv Detail & Related papers (2021-04-10T15:00:15Z)
- Learning Depth With Very Sparse Supervision [57.911425589947314]
This paper explores the idea that perception gets coupled to 3D properties of the world via interaction with the environment.
We train a specialized global-local network architecture with what would be available to a robot interacting with the environment.
Experiments on several datasets show that, when ground truth is available even for just one of the image pixels, the proposed network can learn monocular dense depth estimation up to 22.5% more accurately than state-of-the-art approaches.
arXiv Detail & Related papers (2020-03-02T10:44:13Z)