Seafloor-Invariant Caustics Removal from Underwater Imagery
- URL: http://arxiv.org/abs/2212.10167v1
- Date: Tue, 20 Dec 2022 11:11:02 GMT
- Title: Seafloor-Invariant Caustics Removal from Underwater Imagery
- Authors: Panagiotis Agrafiotis, Konstantinos Karantzalos, Andreas Georgopoulos
- Abstract summary: Caustics are complex physical phenomena resulting from the projection of light rays refracted by the wavy water surface.
In this work, we propose a novel method for correcting the effects of caustics on shallow underwater imagery.
In particular, the developed method employs deep learning architectures to classify image pixels into "non-caustics" and "caustics".
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Mapping the seafloor with underwater imaging cameras is of significant
importance for various applications including marine engineering, geology,
geomorphology, archaeology and biology. For shallow waters, among the
underwater imaging challenges, caustics, i.e., the complex physical phenomena
resulting from the projection of light rays refracted by the wavy water
surface, are likely the most crucial one. Caustics are the main factor during
underwater imaging campaigns that massively degrades image quality and
severely affects any 2D mosaicking or 3D reconstruction of the seabed. In this work, we
propose a novel method for correcting the radiometric effects of caustics on
shallow underwater imagery. Contrary to the state of the art, the developed
method can handle seabeds and riverbeds of any relief, correcting the images
using real pixel information and thus improving image matching and 3D
reconstruction processes. In particular, the developed method employs deep
learning architectures to classify image pixels into "non-caustics" and
"caustics". It then exploits the 3D geometry of the scene to achieve a
pixel-wise correction, transferring appropriate color values between the overlapping
underwater images. Moreover, to fill the current gap, we have collected,
annotated and structured a real-world caustic dataset, namely R-CAUSTIC, which
is openly available. Overall, based on the experimental results and
validation, the developed methodology is quite promising in both detecting
caustics and reconstructing their intensity.
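The abstract describes a two-stage pipeline: a segmentation network first labels each pixel as "caustics" or "non-caustics", and the known 3D geometry of the scene is then used to replace caustics-affected pixels with colors observed caustics-free in overlapping frames. The sketch below is a minimal illustration of that idea under simplifying assumptions (pinhole cameras, per-view poses and depth maps already available from the SfM/MVS reconstruction); the helper names, the per-view camera dictionaries and all parameters are hypothetical and do not reproduce the authors' implementation.

```python
import numpy as np

def project_to_image(points_world, K, R, t):
    """Project Nx3 world points into a pinhole camera (hypothetical helper).
    K: 3x3 intrinsics; R, t: world-to-camera rotation and translation."""
    cam = R @ points_world.T + t.reshape(3, 1)     # world -> camera frame, shape (3, N)
    uv = K @ cam                                   # camera -> homogeneous pixel coords
    uv = uv[:2] / uv[2]                            # perspective divide
    return uv.T, cam[2]                            # (N, 2) pixel coords, (N,) depths

def transfer_colors(img_a, mask_a, depth_a, cam_a, img_b, mask_b, cam_b):
    """Replace caustics pixels of img_a with colors seen caustics-free in overlapping img_b.

    mask_*  : boolean maps, True where the segmentation network predicted "caustics"
    depth_a : per-pixel scene depth of view A (assumed available from the reconstruction)
    cam_*   : dicts with 'K', 'R', 't' for each view (assumed known camera poses)
    """
    out = img_a.copy()
    vs, us = np.nonzero(mask_a)                    # caustics pixels to correct in view A
    if len(us) == 0:
        return out

    # Back-project the caustics pixels of view A to 3D using its depth map.
    K_inv = np.linalg.inv(cam_a['K'])
    rays = K_inv @ np.stack([us, vs, np.ones_like(us)]).astype(float)
    pts_cam_a = rays * depth_a[vs, us]             # 3D points in camera A coordinates
    pts_world = cam_a['R'].T @ (pts_cam_a - cam_a['t'].reshape(3, 1))

    # Re-project those 3D points into the overlapping view B.
    uv_b, z_b = project_to_image(pts_world.T, cam_b['K'], cam_b['R'], cam_b['t'])
    ub = np.round(uv_b[:, 0]).astype(int)
    vb = np.round(uv_b[:, 1]).astype(int)

    h, w = mask_b.shape
    inside = (z_b > 0) & (ub >= 0) & (ub < w) & (vb >= 0) & (vb < h)
    # Only copy colors that fall inside view B and are not caustics there.
    usable = inside.copy()
    usable[inside] &= ~mask_b[vb[inside], ub[inside]]
    out[vs[usable], us[usable]] = img_b[vb[usable], ub[usable]]
    return out
```

In practice more than one overlapping image would typically be consulted per pixel, and occlusions would need to be checked against the reconstructed surface; both are omitted here for brevity.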
Related papers
- UW-SDF: Exploiting Hybrid Geometric Priors for Neural SDF Reconstruction from Underwater Multi-view Monocular Images [63.32490897641344]
We propose a framework for reconstructing target objects from multi-view underwater images based on neural SDF.
We introduce hybrid geometric priors to optimize the reconstruction process, markedly enhancing the quality and efficiency of neural SDF reconstruction.
arXiv Detail & Related papers (2024-10-10T16:33:56Z) - RecGS: Removing Water Caustic with Recurrent Gaussian Splatting [13.87415686123919]
Water caustics are commonly observed in seafloor imaging data from shallow-water areas.
Traditional methods that remove caustic patterns from images often rely on 2D filtering or pre-training on an annotated dataset.
We present a novel method Recurrent Gaussian Splatting (RecGS), which takes advantage of today's photorealistic 3D reconstruction technology.
arXiv Detail & Related papers (2024-07-14T20:24:44Z) - Physics-Inspired Synthesized Underwater Image Dataset [9.959844922120528]
PHISWID is a dataset tailored for enhancing underwater image processing through physics-inspired image synthesis.
Our results reveal that even a basic U-Net architecture, when trained with PHISWID, substantially outperforms existing methods in underwater image enhancement.
We intend to release PHISWID publicly, contributing a significant resource to the advancement of underwater imaging technology.
arXiv Detail & Related papers (2024-04-05T10:23:10Z) - Physics Informed and Data Driven Simulation of Underwater Images via
Residual Learning [5.095097384893417]
In general, underwater images suffer from color distortion and low contrast, because light is attenuated and backscattered as it propagates through water.
An existing simple degradation model (similar to atmospheric image "hazing" effects) is not sufficient to properly represent underwater image degradation; a minimal sketch of such a hazing-like model appears after this list.
We propose a deep learning-based architecture to automatically simulate the underwater effects.
arXiv Detail & Related papers (2024-02-07T21:53:28Z) - Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that the heavily degraded regions of detector-friendly underwater images (DFUI) and underwater images have evident feature distribution gaps.
With higher speed and fewer parameters, our method still performs better than transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z) - Unpaired Overwater Image Defogging Using Prior Map Guided CycleGAN [60.257791714663725]
We propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging images of overwater scenes.
The proposed method outperforms the state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
arXiv Detail & Related papers (2022-12-23T03:00:28Z) - WaterNeRF: Neural Radiance Fields for Underwater Scenes [6.161668246821327]
We advance the state of the art in neural radiance fields (NeRFs) to enable physics-informed dense depth estimation and color correction.
Our proposed method, WaterNeRF, estimates parameters of a physics-based model for underwater image formation.
We can produce novel views of degraded as well as corrected underwater images, along with dense depth of the scene.
arXiv Detail & Related papers (2022-09-27T00:53:26Z) - Underwater Image Restoration via Contrastive Learning and a Real-world
Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on an unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z) - Generating Physically-Consistent Satellite Imagery for Climate Visualizations [53.61991820941501]
We train a generative adversarial network to create synthetic satellite imagery of future flooding and reforestation events.
A pure deep learning-based model can generate flood visualizations but hallucinates floods at locations that were not susceptible to flooding.
We publish our code and dataset for segmentation guided image-to-image translation in Earth observation.
arXiv Detail & Related papers (2021-04-10T15:00:15Z) - Learning to Restore a Single Face Image Degraded by Atmospheric
Turbulence using CNNs [93.72048616001064]
Images captured under such conditions suffer from a combination of geometric deformation and space-varying blur.
We present a deep learning-based solution to the problem of restoring a turbulence-degraded face image.
arXiv Detail & Related papers (2020-07-16T15:25:08Z) - Deep Sea Robotic Imaging Simulator [6.2122699483618]
The largest portion of the ocean - the deep sea - remains mostly unexplored.
Deep-sea images are very different from images taken in shallow waters, and this area has not received much attention from the community.
This paper presents a physical model-based image simulation solution, which uses an in-air texture and depth information as inputs.
arXiv Detail & Related papers (2020-06-27T16:18:32Z)
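For reference, the "simple degradation model" dismissed as insufficient in the Physics Informed and Data Driven Simulation entry above is usually written like the atmospheric scattering model, applied per color channel c as I_c(x) = J_c(x) e^{-beta_c d(x)} + B_c (1 - e^{-beta_c d(x)}), where J is the clean scene radiance, d the distance through water, beta_c the attenuation coefficient and B_c the backscatter light. The snippet below is only a hedged illustration of that textbook formula; the coefficient values are made up for demonstration and it is not the model used in any of the listed papers.

```python
import numpy as np

def simple_underwater_degradation(J, depth,
                                  beta=(0.20, 0.08, 0.05),
                                  B=(0.05, 0.25, 0.35)):
    """Hazing-like underwater image formation model, per RGB channel:
        I_c(x) = J_c(x) * exp(-beta_c * d(x)) + B_c * (1 - exp(-beta_c * d(x)))
    J     : clean image, float array in [0, 1], shape (H, W, 3)
    depth : scene distance in meters, shape (H, W)
    beta  : per-channel attenuation coefficients (illustrative: red attenuates fastest)
    B     : per-channel backscatter (veiling) light (illustrative blue-green tint)
    """
    t = np.exp(-np.asarray(beta) * depth[..., None])   # per-channel transmission map
    return J * t + np.asarray(B) * (1.0 - t)
```

Such a model captures distance-dependent color cast and contrast loss but, as the entry notes, ignores effects like forward scattering, artificial lighting and surface caustics.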
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.