DeepSeeColor: Realtime Adaptive Color Correction for Autonomous
Underwater Vehicles via Deep Learning Methods
- URL: http://arxiv.org/abs/2303.04025v1
- Date: Tue, 7 Mar 2023 16:38:50 GMT
- Title: DeepSeeColor: Realtime Adaptive Color Correction for Autonomous
Underwater Vehicles via Deep Learning Methods
- Authors: Stewart Jamieson (1 and 2), Jonathan P. How (2), Yogesh Girdhar (3)
((1) MIT-WHOI Joint Program, (2) Department of Aeronautics and Astronautics,
Massachusetts Institute of Technology, (3) Applied Ocean Physics and
Engineering Department, Woods Hole Oceanographic Institution)
- Abstract summary: DeepSeeColor is a novel algorithm that combines a state-of-the-art underwater image formation model with the efficiency of deep learning frameworks.
We show that DeepSeeColor offers comparable performance to the popular "Sea-Thru" algorithm while being able to rapidly process images at up to 60Hz.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Successful applications of complex vision-based behaviours underwater have
lagged behind progress in terrestrial and aerial domains. This is largely due
to the degraded image quality resulting from the physical phenomena involved in
underwater image formation. Spectrally-selective light attenuation drains some
colors from underwater images while backscattering adds others, making it
challenging to perform vision-based tasks underwater. State-of-the-art methods
for underwater color correction optimize the parameters of image formation
models to restore the full spectrum of color to underwater imagery. However,
these methods have high computational complexity that is unfavourable for
realtime use by autonomous underwater vehicles (AUVs), as a result of having
been primarily designed for offline color correction. Here, we present
DeepSeeColor, a novel algorithm that combines a state-of-the-art underwater
image formation model with the computational efficiency of deep learning
frameworks. In our experiments, we show that DeepSeeColor offers comparable
performance to the popular "Sea-Thru" algorithm (Akkaynak & Treibitz, 2019)
while being able to rapidly process images at up to 60Hz, thus making it
suitable for use onboard AUVs as a preprocessing step to enable more robust
vision-based behaviours.
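The paper itself does not include code. As a rough illustration of the kind of underwater image formation model being optimized (the revised model of Akkaynak & Treibitz: an attenuated direct signal plus range-dependent backscatter), here is a minimal NumPy sketch. The per-channel coefficients `beta_D`, `beta_B`, and `B_inf` are treated as known constants for simplicity; this is an assumption, since in DeepSeeColor-style methods they are instead estimated per image by gradient-based optimization, and the direct-signal attenuation additionally depends on range and scene.

```python
import numpy as np

def apply_formation_model(J, z, beta_D, beta_B, B_inf):
    """Forward model: attenuated direct signal plus range-dependent backscatter.

    J      : (H, W, 3) true scene radiance in [0, 1]
    z      : (H, W)    range (distance to scene) in meters
    beta_D : (3,)      per-channel direct-signal attenuation coefficients
    beta_B : (3,)      per-channel backscatter coefficients
    B_inf  : (3,)      per-channel veiling light (backscatter at infinite range)
    """
    zc = z[..., None]                                   # broadcast range over channels
    direct = J * np.exp(-beta_D * zc)                   # colors drained by attenuation
    backscatter = B_inf * (1.0 - np.exp(-beta_B * zc))  # colors added by scattering
    return direct + backscatter

def correct_colors(I, z, beta_D, beta_B, B_inf):
    """Color correction: subtract estimated backscatter, then undo attenuation."""
    zc = z[..., None]
    backscatter = B_inf * (1.0 - np.exp(-beta_B * zc))
    direct = np.clip(I - backscatter, 0.0, None)        # clamp sensor noise below 0
    return direct * np.exp(beta_D * zc)
```

With the coefficients known, `correct_colors` exactly inverts `apply_formation_model`; the hard part that deep learning frameworks accelerate is fitting those coefficients to each image in real time.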
Related papers
- Underwater Image Enhancement with Cascaded Contrastive Learning
Underwater image enhancement (UIE) is a highly challenging task due to the complexity of underwater environment and the diversity of underwater image degradation.
Most of the existing deep learning-based UIE methods follow a single-stage network which cannot effectively address the diverse degradations simultaneously.
We propose a two-stage deep learning framework that takes advantage of cascaded contrastive learning to guide the network training of each stage.
arXiv Detail & Related papers (2024-11-16T03:16:44Z)
- Underwater Image Enhancement via Dehazing and Color Restoration
Existing underwater image enhancement methods treat the haze and color cast as a unified degradation process.
We propose a Vision Transformer (ViT)-based network (referred to as WaterFormer) to improve the underwater image quality.
arXiv Detail & Related papers (2024-09-15T15:58:20Z)
- Enhancing Underwater Imaging with 4-D Light Fields: Dataset and Method
4-D light fields (LFs) enhance underwater imaging plagued by light absorption, scattering, and other challenges.
We propose a progressive framework for underwater 4-D LF image enhancement and depth estimation.
We construct the first 4-D LF-based underwater image dataset for quantitative evaluation and supervised training of learning-based methods.
arXiv Detail & Related papers (2024-08-30T15:06:45Z)
- Physics Informed and Data Driven Simulation of Underwater Images via Residual Learning
In general, underwater images suffer from color distortion and low contrast, because light is attenuated and backscattered as it propagates through water.
An existing simple degradation model (similar to atmospheric image "hazing" effects) is not sufficient to properly represent the underwater image degradation.
We propose a deep learning-based architecture to automatically simulate the underwater effects.
arXiv Detail & Related papers (2024-02-07T21:53:28Z)
- An Efficient Detection and Control System for Underwater Docking using Machine Learning and Realistic Simulation: A Comprehensive Approach
This work compares different deep-learning architectures to perform underwater docking detection and classification.
A Generative Adversarial Network (GAN) is used to do image-to-image translation, converting the Gazebo simulation image into an underwater-looking image.
Results show a 20% improvement in high-turbidity scenarios, regardless of underwater currents.
arXiv Detail & Related papers (2023-11-02T18:10:20Z)
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators
Obtaining clear and visually pleasing images underwater has become a widespread concern, motivating the task of underwater image enhancement (UIE).
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
- Unpaired Overwater Image Defogging Using Prior Map Guided CycleGAN
We propose a Prior map Guided CycleGAN (PG-CycleGAN) for defogging of images with overwater scenes.
The proposed method outperforms the state-of-the-art supervised, semi-supervised, and unsupervised defogging approaches.
arXiv Detail & Related papers (2022-12-23T03:00:28Z)
- Underwater enhancement based on a self-learning strategy and attention mechanism for high-intensity regions
Images acquired during underwater activities suffer from environmental properties of the water, such as turbidity and light attenuation.
Recent deep learning-based works on underwater image enhancement tackle the lack of paired datasets by generating synthetic ground truth.
We present a self-supervised learning methodology for underwater image enhancement based on deep learning that requires no paired datasets.
arXiv Detail & Related papers (2022-08-04T19:55:40Z)
- Underwater Image Restoration via Contrastive Learning and a Real-world Dataset
We present a novel method for underwater image restoration based on unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z)
- Underwater Image Enhancement via Medium Transmission-Guided Multi-Color Space Embedding
We present an underwater image enhancement network via medium transmission-guided multi-color space embedding, called Ucolor.
Our network can effectively improve the visual quality of underwater images by exploiting multiple color spaces embedding.
arXiv Detail & Related papers (2021-04-27T07:35:30Z)
- Domain Adaptive Adversarial Learning Based on Physics Model Feedback for Underwater Image Enhancement
We propose a new robust adversarial learning framework via physics model based feedback control and domain adaptation mechanism for enhancing underwater images.
A new method for simulating underwater-like training dataset from RGB-D data by underwater image formation model is proposed.
Final enhanced results on synthetic and real underwater images demonstrate the superiority of the proposed method.
arXiv Detail & Related papers (2020-02-20T07:50:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.