U-shape Transformer for Underwater Image Enhancement
- URL: http://arxiv.org/abs/2111.11843v2
- Date: Wed, 24 Nov 2021 04:49:34 GMT
- Title: U-shape Transformer for Underwater Image Enhancement
- Authors: Lintao Peng, Chunli Zhu, Liheng Bian
- Abstract summary: In this work, we constructed a large-scale underwater image dataset including 5004 image pairs.
We report a U-shape Transformer network, in which the transformer model is introduced to the UIE task for the first time.
To further improve contrast and saturation, a novel loss function combining the RGB, LAB and LCH color spaces is designed.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The light absorption and scattering of underwater impurities lead to poor underwater imaging quality. Existing data-driven underwater image enhancement (UIE) techniques suffer from the lack of a large-scale dataset containing various underwater scenes and high-fidelity reference images. Besides, the inconsistent attenuation across different color channels and spatial regions is not fully considered for boosted enhancement. In this work, we constructed a large-scale underwater image (LSUI) dataset including 5004 image pairs, and report a U-shape Transformer network in which the transformer model is introduced to the UIE task for the first time. The U-shape Transformer is integrated with a channel-wise multi-scale feature fusion transformer (CMSFFT) module and a spatial-wise global feature modeling transformer (SGFMT) module, which reinforce the network's attention to the color channels and spatial regions with more severe attenuation. Meanwhile, to further improve contrast and saturation, a novel loss function combining the RGB, LAB and LCH color spaces is designed following the human vision principle. Extensive experiments on available datasets validate the state-of-the-art performance of the reported technique, with an advantage of more than 2 dB.
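The multi-color-space loss mentioned in the abstract can be pictured with a short PyTorch sketch. This is a minimal illustration rather than the authors' implementation: it assumes kornia is available for the RGB-to-LAB conversion, derives LCH from LAB, and uses placeholder weights.

```python
# Minimal sketch of a combined RGB / LAB / LCH loss in the spirit of the paper.
# Not the authors' code: kornia handles RGB->LAB, LCH is derived from LAB,
# and the loss weights are placeholders.
import torch
import torch.nn.functional as F
import kornia.color as KC


def lab_to_lch(lab: torch.Tensor) -> torch.Tensor:
    """Convert a (B, 3, H, W) LAB tensor to LCH (lightness, chroma, hue)."""
    l, a, b = lab[:, 0:1], lab[:, 1:2], lab[:, 2:3]
    chroma = torch.sqrt(a ** 2 + b ** 2 + 1e-8)
    hue = torch.atan2(b, a)  # hue angle in radians
    return torch.cat([l, chroma, hue], dim=1)


def multi_color_space_loss(pred_rgb, ref_rgb, w_rgb=1.0, w_lab=0.5, w_lch=0.5):
    """Weighted L1 distance between prediction and reference in three color spaces."""
    loss_rgb = F.l1_loss(pred_rgb, ref_rgb)
    pred_lab, ref_lab = KC.rgb_to_lab(pred_rgb), KC.rgb_to_lab(ref_rgb)
    loss_lab = F.l1_loss(pred_lab, ref_lab)
    loss_lch = F.l1_loss(lab_to_lch(pred_lab), lab_to_lch(ref_lab))
    return w_rgb * loss_rgb + w_lab * loss_lab + w_lch * loss_lch
```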
Related papers
- A Hybrid Transformer-Mamba Network for Single Image Deraining [70.64069487982916]
Existing deraining Transformers employ self-attention mechanisms with fixed-range windows or along channel dimensions.
We introduce a novel dual-branch hybrid Transformer-Mamba network, denoted as TransMamba, aimed at effectively capturing long-range rain-related dependencies.
arXiv Detail & Related papers (2024-08-31T10:03:19Z)
- Image-Conditional Diffusion Transformer for Underwater Image Enhancement [4.555168682310286]
We propose a novel UIE method based on an image-conditional diffusion transformer (ICDT).
Our method takes the degraded underwater image as the conditional input and converts it into latent space where ICDT is applied.
Our largest model, ICDT-XL/2, outperforms all comparison methods, achieving state-of-the-art (SOTA) quality of image enhancement.
arXiv Detail & Related papers (2024-07-07T14:34:31Z)
- FDCE-Net: Underwater Image Enhancement with Embedding Frequency and Dual Color Encoder [49.79611204954311]
Underwater images often suffer from various issues such as low brightness, color shift, blurred details, and noise due to light absorption and scattering caused by water and suspended particles.
Previous underwater image enhancement (UIE) methods have primarily focused on spatial-domain enhancement, neglecting the frequency-domain information inherent in the images (see the sketch after this entry).
arXiv Detail & Related papers (2024-04-27T15:16:34Z)
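As a rough illustration of the frequency-domain idea in FDCE-Net's summary (not its actual architecture), an image can be split into amplitude and phase spectra that a frequency branch could then process:

```python
# Hypothetical helper: split an image into amplitude and phase spectra,
# the kind of representation a frequency-domain branch might operate on.
import torch


def to_amplitude_phase(img: torch.Tensor):
    """img: (B, C, H, W) tensor -> (amplitude, phase), both (B, C, H, W)."""
    spec = torch.fft.fft2(img, norm="ortho")
    return torch.abs(spec), torch.angle(spec)


def from_amplitude_phase(amp: torch.Tensor, phase: torch.Tensor) -> torch.Tensor:
    """Rebuild the spatial image from (possibly modified) amplitude and phase."""
    spec = torch.polar(amp, phase)  # amp * exp(i * phase)
    return torch.fft.ifft2(spec, norm="ortho").real
```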
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach uses predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space (an illustrative update rule is sketched after this entry).
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
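DGNet's summary mentions updating pseudo-labels from predicted images. One common way to realize such an update, offered here purely as an assumed illustration and not as DGNet's actual rule, is an exponential moving average between the running pseudo-label and the latest prediction:

```python
import torch


@torch.no_grad()
def update_pseudo_label(pseudo_label: torch.Tensor,
                        prediction: torch.Tensor,
                        momentum: float = 0.9) -> torch.Tensor:
    """Blend the running pseudo-label with the latest prediction (EMA-style).

    Generic, assumed update rule for illustration only; the momentum value
    is a placeholder.
    """
    return momentum * pseudo_label + (1.0 - momentum) * prediction
```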
- UWFormer: Underwater Image Enhancement via a Semi-Supervised Multi-Scale Transformer [26.15238399758745]
Underwater images often exhibit poor quality, distorted color balance and low contrast.
Current deep learning methods rely on convolutional neural networks (CNNs) that lack multi-scale enhancement.
We propose a Multi-scale Transformer-based Network for enhancing images at multiple frequencies via semi-supervised learning.
arXiv Detail & Related papers (2023-10-31T06:19:09Z)
- Transmission and Color-guided Network for Underwater Image Enhancement [8.894719412298397]
We propose an Adaptive Transmission and Dynamic Color guided network (named ATDCnet) for underwater image enhancement.
To exploit the knowledge of physics, we design an Adaptive Transmission-directed Module (ATM) to better guide the network.
To deal with the color deviation problem, we design a Dynamic Color-guided Module (DCM) to post-process the enhanced image color.
arXiv Detail & Related papers (2023-08-09T11:43:54Z)
- Dual Aggregation Transformer for Image Super-Resolution [92.41781921611646]
We propose a novel Transformer model, the Dual Aggregation Transformer (DAT), for image SR.
Our DAT aggregates features across spatial and channel dimensions, in an inter-block and intra-block dual manner (the two attention flavors are sketched after this entry).
Our experiments show that DAT surpasses current methods.
arXiv Detail & Related papers (2023-08-07T07:39:39Z)
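The spatial/channel duality above comes down to which axis the attention map is formed over: token positions in one case, channels in the other. The following compact sketch is an assumption for illustration, with the learned query/key/value projections omitted, and is not DAT's actual block design:

```python
import torch
import torch.nn.functional as F


def spatial_self_attention(x: torch.Tensor) -> torch.Tensor:
    """x: (B, N, C) tokens. Attention map over the N spatial positions."""
    attn = F.softmax(x @ x.transpose(1, 2) / x.shape[-1] ** 0.5, dim=-1)  # (B, N, N)
    return attn @ x


def channel_self_attention(x: torch.Tensor) -> torch.Tensor:
    """x: (B, N, C) tokens. Attention map over the C channels ('transposed' attention)."""
    xt = x.transpose(1, 2)                                                # (B, C, N)
    attn = F.softmax(xt @ x / x.shape[1] ** 0.5, dim=-1)                  # (B, C, C)
    return (attn @ xt).transpose(1, 2)
```

Loosely, a channel-wise map of this kind reflects the same intuition as the CMSFFT module in the main paper, which directs attention to the more heavily attenuated color channels.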
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing images has become a common concern, and the task of underwater image enhancement (UIE) has emerged accordingly.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
- A Wavelet-based Dual-stream Network for Underwater Image Enhancement [11.178274779143209]
We present a wavelet-based dual-stream network that addresses color cast and blurry details in underwater images.
We handle these artifacts separately by decomposing an input image into multiple frequency bands using the discrete wavelet transform (see the sketch after this entry).
We validate the proposed method on both real-world and synthetic underwater datasets and show the effectiveness of our model in color correction and blur removal with low computational complexity.
arXiv Detail & Related papers (2022-02-17T16:57:25Z)
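The decomposition step referenced above can be pictured with PyWavelets; the wavelet choice and the way the streams would use the bands are assumptions for illustration, not the paper's configuration:

```python
# Minimal example of splitting an image into wavelet sub-bands with PyWavelets.
# Only the decomposition step is shown; the dual-stream network is not.
import numpy as np
import pywt


def dwt_bands(gray_image: np.ndarray):
    """gray_image: 2-D array -> (low-frequency band, high-frequency detail bands)."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_image, "haar")
    return cA, (cH, cV, cD)  # approximation + horizontal/vertical/diagonal detail


# Usage: a low-frequency stream could correct color on cA, while a
# high-frequency stream sharpens details on (cH, cV, cD).
low, detail = dwt_bands(np.random.rand(256, 256).astype(np.float32))
```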
- Wavelength-based Attributed Deep Neural Network for Underwater Image Restoration [9.378355457555319]
This paper shows that assigning each color channel an appropriate receptive field size (context), based on how far that channel's wavelength travels underwater, may lead to a substantial performance gain (a toy version is sketched after this entry).
As a second novelty, we have incorporated an attentive skip mechanism to adaptively refine the learned multi-contextual features.
The proposed framework, called Deep WaveNet, is optimized using the traditional pixel-wise and feature-based cost functions.
arXiv Detail & Related papers (2021-06-15T06:47:51Z)
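The wavelength-dependent receptive-field idea can be illustrated by giving each color channel its own convolution kernel size. The kernel sizes and the channel-to-context assignment below are placeholders for illustration, not Deep WaveNet's actual values:

```python
import torch
import torch.nn as nn


class PerChannelContext(nn.Module):
    """Give each color channel its own receptive field via different kernel sizes.

    Kernel sizes and their assignment to channels are illustrative placeholders.
    """

    def __init__(self, feat: int = 16):
        super().__init__()
        self.red = nn.Conv2d(1, feat, kernel_size=7, padding=3)
        self.green = nn.Conv2d(1, feat, kernel_size=5, padding=2)
        self.blue = nn.Conv2d(1, feat, kernel_size=3, padding=1)

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:  # rgb: (B, 3, H, W)
        r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
        return torch.cat([self.red(r), self.green(g), self.blue(b)], dim=1)
```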
- Underwater Image Enhancement via Medium Transmission-Guided Multi-Color Space Embedding [88.46682991985907]
We present an underwater image enhancement network via medium transmission-guided multi-color space embedding, called Ucolor.
Our network can effectively improve the visual quality of underwater images by exploiting a multi-color-space embedding (a rough sketch of the embedding idea follows this entry).
arXiv Detail & Related papers (2021-04-27T07:35:30Z)
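Multi-color-space embedding can be pictured as encoding the same image in several color spaces and concatenating the resulting features. The sketch below is a generic illustration under that assumption (using kornia for the conversions), not Ucolor's network:

```python
import torch
import torch.nn as nn
import kornia.color as KC


class MultiColorSpaceEmbedding(nn.Module):
    """Encode RGB, HSV and LAB views of the input and concatenate the features."""

    def __init__(self, feat: int = 16):
        super().__init__()
        self.enc_rgb = nn.Conv2d(3, feat, 3, padding=1)
        self.enc_hsv = nn.Conv2d(3, feat, 3, padding=1)
        self.enc_lab = nn.Conv2d(3, feat, 3, padding=1)

    def forward(self, rgb: torch.Tensor) -> torch.Tensor:  # rgb in [0, 1], (B, 3, H, W)
        hsv = KC.rgb_to_hsv(rgb)
        lab = KC.rgb_to_lab(rgb)
        return torch.cat([self.enc_rgb(rgb), self.enc_hsv(hsv), self.enc_lab(lab)], dim=1)
```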
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.