SUCRe: Leveraging Scene Structure for Underwater Color Restoration
- URL: http://arxiv.org/abs/2212.09129v3
- Date: Thu, 18 Jan 2024 10:52:34 GMT
- Authors: Clémentin Boittiaux, Ricard Marxer, Claire Dune, Aurélien
Arnaubec, Maxime Ferrera, Vincent Hugel
- Abstract summary: We introduce SUCRe, a novel method that exploits the scene's 3D structure for underwater color restoration.
We conduct extensive quantitative and qualitative analyses of our approach in a variety of scenarios ranging from natural light to deep-sea environments.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underwater images are altered by the physical characteristics of the medium
through which light rays pass before reaching the optical sensor. Scattering
and wavelength-dependent absorption significantly modify the captured colors
depending on the distance of observed elements to the image plane. In this
paper, we aim to recover an image of the scene as if the water had no effect on
light propagation. We introduce SUCRe, a novel method that exploits the scene's
3D structure for underwater color restoration. By following points in multiple
images and tracking their intensities at different distances to the sensor, we
constrain the optimization of the parameters in an underwater image formation
model and retrieve unattenuated pixel intensities. We conduct extensive
quantitative and qualitative analyses of our approach in a variety of scenarios
ranging from natural light to deep-sea environments using three underwater
datasets acquired from real-world scenarios and one synthetic dataset. We also
compare the performance of the proposed approach with that of a wide range of
existing state-of-the-art methods. The results demonstrate a consistent benefit
of exploiting multiple views across a spectrum of objective metrics. Our code
is publicly available at https://github.com/clementinboittiaux/sucre.
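The abstract describes fitting an underwater image formation model to the intensities of a scene point tracked at different distances, then reading off the unattenuated intensity. The following is a minimal sketch of that idea, not the authors' implementation: it assumes the standard single-channel formation equation I(z) = J·exp(-βz) + B∞·(1 - exp(-βz)), and all function names and parameter bounds are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

def formation_model(J, beta, B_inf, z):
    """Simplified underwater image formation model (per channel):
    I(z) = J * exp(-beta * z) + B_inf * (1 - exp(-beta * z)),
    where J is the unattenuated intensity, beta the attenuation
    coefficient, B_inf the backscatter at infinity, z the distance."""
    att = np.exp(-beta * z)
    return J * att + B_inf * (1.0 - att)

def restore_point(intensities, distances):
    """Fit (J, beta, B_inf) for one scene point observed at several
    distances, and return the restored (unattenuated) intensity J."""
    def residuals(params):
        J, beta, B_inf = params
        return formation_model(J, beta, B_inf, distances) - intensities

    # Initialize J from the closest observation (least attenuated).
    x0 = np.array([intensities[np.argmin(distances)], 0.1, intensities.mean()])
    sol = least_squares(residuals, x0, bounds=([0, 0, 0], [1.5, 5.0, 1.0]))
    return sol.x[0]

# Synthetic check: observations generated from known parameters.
z = np.array([1.0, 2.0, 4.0, 8.0])
obs = formation_model(0.8, 0.4, 0.2, z)
J_hat = restore_point(obs, z)  # should recover a value close to 0.8
```

In SUCRe the same principle is applied per pixel and per color channel, with correspondences and distances supplied by the scene's 3D structure; this toy version only shows why multiple observations at different distances constrain the model.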
Related papers
- Enhancing Underwater Imaging with 4-D Light Fields: Dataset and Method [77.80712860663886]
4-D light fields (LFs) can enhance underwater imaging, which is plagued by light absorption, scattering, and other degradations.
We propose a progressive framework for underwater 4-D LF image enhancement and depth estimation.
We construct the first 4-D LF-based underwater image dataset for quantitative evaluation and supervised training of learning-based methods.
arXiv Detail & Related papers (2024-08-30T15:06:45Z) - Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that heavily degraded regions of detector-friendly underwater images (DFUI) and raw underwater images show evident feature distribution gaps.
Our method, with higher speed and fewer parameters, still outperforms transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z) - Beyond NeRF Underwater: Learning Neural Reflectance Fields for True
Color Correction of Marine Imagery [16.16700041031569]
Underwater imagery often exhibits distorted coloration as a result of light-water interactions.
We propose an algorithm to restore the true color (albedo) in underwater imagery by jointly learning the effects of the medium and neural scene representations.
arXiv Detail & Related papers (2023-04-06T21:29:34Z) - Seafloor-Invariant Caustics Removal from Underwater Imagery [0.0]
Caustics are complex physical phenomena resulting from the projection of light rays refracted by the wavy water surface.
In this work, we propose a novel method for correcting the effects of caustics on shallow underwater imagery.
In particular, the developed method employs deep learning architectures to classify image pixels as "caustics" or "non-caustics".
arXiv Detail & Related papers (2022-12-20T11:11:02Z) - WaterNeRF: Neural Radiance Fields for Underwater Scenes [6.161668246821327]
We advance the state of the art in neural radiance fields (NeRFs) to enable physics-informed dense depth estimation and color correction.
Our proposed method, WaterNeRF, estimates parameters of a physics-based model for underwater image formation.
We can produce novel views of degraded as well as corrected underwater images, along with dense depth of the scene.
arXiv Detail & Related papers (2022-09-27T00:53:26Z) - Beyond Visual Field of View: Perceiving 3D Environment with Echoes and
Vision [51.385731364529306]
This paper focuses on perceiving and navigating 3D environments using echoes and RGB images.
In particular, we perform depth estimation by fusing RGB image with echoes, received from multiple orientations.
We show that echoes provide holistic and inexpensive information about 3D structures, complementing the RGB image.
arXiv Detail & Related papers (2022-07-03T22:31:47Z) - Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth
with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning-based end-to-end depth prediction network that takes noisy raw I-ToF signals and an RGB image as input.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
arXiv Detail & Related papers (2021-12-07T15:04:14Z) - Robustly Removing Deep Sea Lighting Effects for Visual Mapping of
Abyssal Plains [3.566117940176302]
The majority of Earth's surface lies deep in the oceans, where no surface light reaches.
Visual mapping, including image matching and surface albedo estimation, severely suffers from the effects produced by co-moving light sources.
We present a practical approach to estimating and compensating for these lighting effects on predominantly homogeneous, flat seafloor regions.
arXiv Detail & Related papers (2021-10-01T15:28:07Z) - Underwater Image Restoration via Contrastive Learning and a Real-world
Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on an unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z) - Wavelength-based Attributed Deep Neural Network for Underwater Image
Restoration [9.378355457555319]
This paper shows that attributing the right receptive field size (context) based on the traversing range of the color channel may lead to a substantial performance gain.
As a second novelty, we have incorporated an attentive skip mechanism to adaptively refine the learned multi-contextual features.
The proposed framework, called Deep WaveNet, is optimized using the traditional pixel-wise and feature-based cost functions.
arXiv Detail & Related papers (2021-06-15T06:47:51Z) - Underwater Image Enhancement via Medium Transmission-Guided Multi-Color
Space Embedding [88.46682991985907]
We present an underwater image enhancement network via medium transmission-guided multi-color space embedding, called Ucolor.
Our network can effectively improve the visual quality of underwater images by exploiting multiple color spaces embedding.
arXiv Detail & Related papers (2021-04-27T07:35:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.