Simultaneous Enhancement and Super-Resolution of Underwater Imagery for
Improved Visual Perception
- URL: http://arxiv.org/abs/2002.01155v1
- Date: Tue, 4 Feb 2020 07:07:08 GMT
- Title: Simultaneous Enhancement and Super-Resolution of Underwater Imagery for
Improved Visual Perception
- Authors: Md Jahidul Islam, Peigen Luo and Junaed Sattar
- Abstract summary: We introduce and tackle the simultaneous enhancement and super-resolution (SESR) problem for underwater robot vision.
We present Deep SESR, a residual-in-residual network-based generative model that can learn to restore perceptual image qualities at 2x, 3x, or 4x higher spatial resolution.
- Score: 17.403133838762447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce and tackle the simultaneous enhancement and
super-resolution (SESR) problem for underwater robot vision and provide an
efficient solution for near real-time applications. We present Deep SESR, a
residual-in-residual network-based generative model that can learn to restore
perceptual image qualities at 2x, 3x, or 4x higher spatial resolution. We
supervise its training by formulating a multi-modal objective function that
addresses the chrominance-specific underwater color degradation, lack of image
sharpness, and loss in high-level feature representation. It is also supervised
to learn salient foreground regions in the image, which in turn guides the
network to learn global contrast enhancement. We design an end-to-end training
pipeline to jointly learn the saliency prediction and SESR on a shared
hierarchical feature space for fast inference. Moreover, we present UFO-120,
the first dataset to facilitate large-scale SESR learning; it contains over
1500 training samples and a benchmark test set of 120 samples. By thorough
experimental evaluation on the UFO-120 and other standard datasets, we
demonstrate that Deep SESR outperforms the existing solutions for underwater
image enhancement and super-resolution. We also validate its generalization
performance on several test cases that include underwater images with diverse
spectral and spatial degradation levels, and also terrestrial images with
unseen natural objects. Lastly, we analyze its computational feasibility for
single-board deployments and demonstrate its operational benefits for
visually-guided underwater robots. The model and dataset information will be
available at: https://github.com/xahidbuffon/Deep-SESR.
Related papers
- Advanced Underwater Image Quality Enhancement via Hybrid Super-Resolution Convolutional Neural Networks and Multi-Scale Retinex-Based Defogging Techniques [0.0]
The research conducts extensive experiments on real-world underwater datasets to further illustrate the efficacy of the suggested approach.
In real-time underwater applications like marine exploration, underwater robotics, and autonomous underwater vehicles, the combination of deep learning and conventional image processing techniques offers a computationally efficient framework with superior results.
arXiv Detail & Related papers (2024-10-18T08:40:26Z) - Rethinking Image Super-Resolution from Training Data Perspectives [54.28824316574355]
We investigate the understudied effect of the training data used for image super-resolution (SR)
With this, we propose an automated image evaluation pipeline.
We find that datasets with (i) low compression artifacts, (ii) high within-image diversity as judged by the number of different objects, and (iii) a large number of images from ImageNet or PASS all positively affect SR performance.
arXiv Detail & Related papers (2024-09-01T16:25:04Z) - UIE-UnFold: Deep Unfolding Network with Color Priors and Vision Transformer for Underwater Image Enhancement [27.535028176427623]
Underwater image enhancement (UIE) plays a crucial role in various marine applications.
Current learning-based approaches frequently lack explicit prior knowledge about the physical processes involved in underwater image formation.
This paper proposes a novel deep unfolding network (DUN) for UIE that integrates color priors and inter-stage feature incorporation.
arXiv Detail & Related papers (2024-08-20T08:48:33Z) - MuLA-GAN: Multi-Level Attention GAN for Enhanced Underwater Visibility [1.9272863690919875]
We introduce MuLA-GAN, a novel approach that leverages the synergy of Generative Adversarial Networks (GANs) and Multi-Level Attention mechanisms for comprehensive underwater image enhancement.
Our model excels in capturing and preserving intricate details in underwater imagery, essential for various applications.
This work not only addresses a significant research gap in underwater image enhancement but also underscores the pivotal role of Multi-Level Attention in enhancing GANs.
arXiv Detail & Related papers (2023-12-25T07:33:47Z) - DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image
Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z) - An Efficient Detection and Control System for Underwater Docking using
Machine Learning and Realistic Simulation: A Comprehensive Approach [5.039813366558306]
This work compares different deep-learning architectures to perform underwater docking detection and classification.
A Generative Adversarial Network (GAN) is used to do image-to-image translation, converting the Gazebo simulation image into an underwater-looking image.
Results show an improvement of 20% in the high turbidity scenarios regardless of the underwater currents.
arXiv Detail & Related papers (2023-11-02T18:10:20Z) - PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with
Dual-Discriminators [120.06891448820447]
Obtaining clear, visually pleasing images is a widespread concern, and the task of underwater image enhancement (UIE) has emerged to meet this need.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z) - Adaptive Uncertainty Distribution in Deep Learning for Unsupervised
Underwater Image Enhancement [1.9249287163937976]
One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data.
We propose a novel unsupervised underwater image enhancement framework that employs a conditional variational autoencoder (cVAE) to train a deep learning model.
We show that our proposed framework yields competitive performance compared to other state-of-the-art approaches in quantitative as well as qualitative metrics.
arXiv Detail & Related papers (2022-12-18T01:07:20Z) - Real-World Image Super-Resolution by Exclusionary Dual-Learning [98.36096041099906]
Real-world image super-resolution is a practical image restoration problem that aims to obtain high-quality images from in-the-wild input.
Deep learning-based methods have achieved promising restoration quality on real-world image super-resolution datasets.
We propose Real-World image Super-Resolution by Exclusionary Dual-Learning (RWSR-EDL) to address the feature diversity in perceptual- and L1-based cooperative learning.
arXiv Detail & Related papers (2022-06-06T13:28:15Z) - Underwater Image Restoration via Contrastive Learning and a Real-world
Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z) - Best-Buddy GANs for Highly Detailed Image Super-Resolution [71.13466303340192]
We consider the single image super-resolution (SISR) problem, where a high-resolution (HR) image is generated based on a low-resolution (LR) input.
Most methods along this line rely on a predefined single-LR-single-HR mapping, which is not flexible enough for the SISR task.
We propose best-buddy GANs (Beby-GAN) for rich-detail SISR. Relaxing the immutable one-to-one constraint, we allow the estimated patches to dynamically seek the best supervision.
arXiv Detail & Related papers (2021-03-29T02:58:27Z)
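The "relax the one-to-one constraint" idea in the Beby-GAN summary above can be sketched as follows. This is an illustrative Python sketch under simplifying assumptions (patches as flat float lists, plain L2 distance), not the paper's actual implementation.

```python
# Illustrative sketch of best-buddy supervision: instead of a fixed
# single-LR-single-HR pairing, each estimated patch is supervised by
# whichever candidate HR patch lies closest to it.

def l2(a, b):
    """Squared L2 distance between two equal-length flat patches."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_buddy_target(estimated_patch, candidate_patches):
    """Return the candidate patch with minimum distance to the estimate."""
    return min(candidate_patches, key=lambda c: l2(estimated_patch, c))

def best_buddy_loss(estimated_patches, candidate_patches):
    # Average distance of each estimate to its dynamically chosen target;
    # the chosen target can change from one training step to the next.
    total = 0.0
    for p in estimated_patches:
        total += l2(p, best_buddy_target(p, candidate_patches))
    return total / len(estimated_patches)
```

In a real training loop the candidate set would be drawn from the HR image (e.g. neighboring patches), so the supervision adapts as the estimates improve.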
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.