Deep Sea Robotic Imaging Simulator
- URL: http://arxiv.org/abs/2006.15398v3
- Date: Tue, 23 Feb 2021 12:07:18 GMT
- Title: Deep Sea Robotic Imaging Simulator
- Authors: Yifan Song, David Nakath, Mengkun She, Furkan Elibol and Kevin Köser
- Abstract summary: The largest portion of the ocean - the deep sea - still remains mostly unexplored.
Deep sea images differ markedly from those taken in shallow waters, and this area has received little attention from the community.
This paper presents a physical model-based image simulation solution, which uses an in-air texture and depth information as inputs.
- Score: 6.2122699483618
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays underwater vision systems are being widely applied in ocean
research. However, the largest portion of the ocean - the deep sea - still
remains mostly unexplored. Only relatively few image sets have been taken from
the deep sea due to the physical limitations caused by technical challenges and
enormous costs. Deep sea images differ markedly from those taken in shallow
waters, and this area has received little attention from the community. The
shortage of deep sea images and the corresponding ground truth data for
evaluation and training is becoming a bottleneck for the development of
underwater computer vision methods. Thus, this paper presents a physical
model-based image simulation solution, which uses an in-air texture and depth
information as inputs, to generate underwater image sequences taken by robots
in deep ocean scenarios. Different from shallow water conditions, artificial
illumination plays a vital role in deep sea image formation as it strongly
affects the scene appearance. Our radiometric image formation model considers
both attenuation and scattering effects with co-moving spotlights in the dark.
By detailed analysis and evaluation of the underwater image formation model, we
propose a 3D lookup table structure in combination with a novel rendering
strategy to improve simulation performance. This enables us to integrate an
interactive deep sea robotic vision simulation in the Unmanned Underwater
Vehicles simulator. To inspire further deep sea vision research by the
community, we will release the source code of our deep sea image converter to
the public.
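The radiometric formation model described in the abstract (per-channel attenuation plus scattering under a co-moving spotlight in the dark) can be sketched roughly as follows. This is a minimal illustrative approximation, not the paper's implementation: the function name, attenuation and backscatter coefficients, and the simplified spotlight geometry are all assumptions, and the 3D lookup-table acceleration is omitted.

```python
import numpy as np

def simulate_deep_sea(in_air, depth, light_pos,
                      atten=(0.45, 0.12, 0.08),       # per-channel attenuation [1/m] (assumed values)
                      backscatter=(0.02, 0.05, 0.10)):  # per-channel veiling light (assumed values)
    """Render a crude underwater view of an in-air texture.

    in_air    : (H, W, 3) float array in [0, 1], the in-air texture.
    depth     : (H, W) float array, camera-to-scene distance in metres.
    light_pos : (3,) position of the co-moving spotlight in the camera frame.
    """
    # Simplification: place each scene point on the optical axis at its depth,
    # rather than back-projecting through the camera model.
    pts = np.stack([np.zeros_like(depth), np.zeros_like(depth), depth], axis=-1)
    light_dist = np.linalg.norm(pts - np.asarray(light_pos, dtype=float), axis=-1)

    # Total water path traversed by the light: spotlight -> scene -> camera.
    path = light_dist + depth

    out = np.empty_like(in_air)
    for c in range(3):
        # Beer-Lambert attenuation of the direct signal along the full path.
        direct = in_air[..., c] * np.exp(-atten[c] * path)
        # Backscatter (veiling light) accumulates and saturates with distance.
        veil = backscatter[c] * (1.0 - np.exp(-atten[c] * depth))
        out[..., c] = direct + veil
    return np.clip(out, 0.0, 1.0)
```

Because red attenuates fastest in water, distant scene content in the rendered image shifts toward blue-green while the direct signal fades, which is the qualitative behaviour the paper's model captures with far more care.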
Related papers
- WaterMono: Teacher-Guided Anomaly Masking and Enhancement Boosting for Robust Underwater Self-Supervised Monocular Depth Estimation [4.909989222186828]
We propose WaterMono, a novel framework for depth estimation and image enhancement.
It incorporates the following key measures:
(1) a Teacher-Guided Anomaly Mask to identify dynamic regions within the images;
(2) depth information combined with the Underwater Image Formation Model to generate enhanced images, which in turn contribute to the depth estimation task;
(3) a rotated distillation strategy to enhance the model's rotational robustness.
arXiv Detail & Related papers (2024-06-19T08:49:45Z)
- Physics-Inspired Synthesized Underwater Image Dataset [9.959844922120528]
PHISWID is a dataset tailored for enhancing underwater image processing through physics-inspired image synthesis.
Our results reveal that even a basic U-Net architecture, when trained with PHISWID, substantially outperforms existing methods in underwater image enhancement.
We intend to release PHISWID publicly, contributing a significant resource to the advancement of underwater imaging technology.
arXiv Detail & Related papers (2024-04-05T10:23:10Z)
- Atlantis: Enabling Underwater Depth Estimation with Stable Diffusion [30.122666238416716]
We propose a novel pipeline for generating underwater images using accurate terrestrial depth data.
This approach facilitates the training of supervised models for underwater depth estimation.
We introduce a unique Depth2Underwater ControlNet, trained on specially prepared Underwater, Depth, Text data triplets.
arXiv Detail & Related papers (2023-12-19T08:56:33Z)
- A deep learning approach for marine snow synthesis and removal [55.86191108738564]
This paper proposes a novel method to reduce the marine snow interference using deep learning techniques.
We first synthesize realistic marine snow samples by training a Generative Adversarial Network (GAN) model.
We then train a U-Net model to perform marine snow removal as an image to image translation task.
arXiv Detail & Related papers (2023-11-27T07:19:41Z)
- Ghost on the Shell: An Expressive Representation of General 3D Shapes [97.76840585617907]
Meshes are appealing since they enable fast physics-based rendering with realistic material and lighting.
Recent work on reconstructing and statistically modeling 3D shapes has critiqued meshes as being topologically inflexible.
We parameterize open surfaces by defining a manifold signed distance field on watertight surfaces.
G-Shell achieves state-of-the-art performance on non-watertight mesh reconstruction and generation tasks.
arXiv Detail & Related papers (2023-10-23T17:59:52Z)
- Seafloor-Invariant Caustics Removal from Underwater Imagery [0.0]
Caustics are complex physical phenomena resulting from the projection of light rays refracted by the wavy water surface.
In this work, we propose a novel method for correcting the effects of caustics on shallow underwater imagery.
In particular, the developed method employs deep learning architectures to classify image pixels into "non-caustics" and "caustics".
arXiv Detail & Related papers (2022-12-20T11:11:02Z)
- WaterNeRF: Neural Radiance Fields for Underwater Scenes [6.161668246821327]
We advance state-of-the-art in neural radiance fields (NeRFs) to enable physics-informed dense depth estimation and color correction.
Our proposed method, WaterNeRF, estimates parameters of a physics-based model for underwater image formation.
We can produce novel views of degraded as well as corrected underwater images, along with dense depth of the scene.
arXiv Detail & Related papers (2022-09-27T00:53:26Z)
- Underwater Image Restoration via Contrastive Learning and a Real-world Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z)
- Physically-Consistent Generative Adversarial Networks for Coastal Flood Visualization [60.690929022840685]
We propose the first deep learning pipeline to ensure physical-consistency in synthetic visual satellite imagery.
By evaluating the imagery relative to physics-based flood maps, we find that our proposed framework outperforms baseline models in both physical-consistency and photorealism.
We publish a dataset of over 25k labelled image-pairs to study image-to-image translation in Earth observation.
arXiv Detail & Related papers (2021-04-10T15:00:15Z)
- Physics-informed GANs for Coastal Flood Visualization [65.54626149826066]
We create a deep learning pipeline that generates visual satellite images of current and future coastal flooding.
By evaluating the imagery relative to physics-based flood maps, we find that our proposed framework outperforms baseline models in both physical-consistency and photorealism.
While this work focused on the visualization of coastal floods, we envision the creation of a global visualization of how climate change will shape our earth.
arXiv Detail & Related papers (2020-10-16T02:15:34Z)
- Learning Depth With Very Sparse Supervision [57.911425589947314]
This paper explores the idea that perception gets coupled to 3D properties of the world via interaction with the environment.
We train a specialized global-local network architecture with what would be available to a robot interacting with the environment.
Experiments on several datasets show that, when ground truth is available even for just one of the image pixels, the proposed network can learn monocular dense depth estimation up to 22.5% more accurately than state-of-the-art approaches.
arXiv Detail & Related papers (2020-03-02T10:44:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.