UWFormer: Underwater Image Enhancement via a Semi-Supervised Multi-Scale Transformer
- URL: http://arxiv.org/abs/2310.20210v4
- Date: Wed, 24 Apr 2024 12:26:46 GMT
- Title: UWFormer: Underwater Image Enhancement via a Semi-Supervised Multi-Scale Transformer
- Authors: Weiwen Chen, Yingtie Lei, Shenghong Luo, Ziyang Zhou, Mingxian Li, Chi-Man Pun,
- Abstract summary: Underwater images often exhibit poor quality, distorted color balance and low contrast.
Current deep learning methods rely on Convolutional Neural Networks (CNNs) that lack multi-scale enhancement.
We propose a Multi-scale Transformer-based Network for enhancing images at multiple frequencies via semi-supervised learning.
- Score: 26.15238399758745
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underwater images often exhibit poor quality, distorted color balance and low contrast due to the complex interplay of light, water, and objects. Despite the significant contributions of previous underwater enhancement techniques, several problems demand further improvement: (i) Current deep learning methods rely on Convolutional Neural Networks (CNNs) that lack multi-scale enhancement, and their global receptive field is limited. (ii) The scarcity of paired real-world underwater datasets poses a significant challenge, and training on synthetic image pairs can lead to overfitting. To address these problems, this paper introduces a Multi-scale Transformer-based network called UWFormer for enhancing images at multiple frequencies via semi-supervised learning, in which we propose a Nonlinear Frequency-aware Attention mechanism and a Multi-Scale Fusion Feed-forward Network for low-frequency enhancement. In addition, we introduce an underwater-specific semi-supervised training strategy, where we propose a Subaqueous Perceptual Loss function to generate reliable pseudo labels. Experiments on full-reference and no-reference underwater benchmarks demonstrate that our method outperforms state-of-the-art methods in both quantitative metrics and visual quality.
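To make the semi-supervised strategy concrete, the sketch below shows a generic teacher-student pseudo-labeling loop of the kind the abstract describes. It is a minimal illustration under assumptions, not the paper's implementation: `student`, `teacher`, and `perceptual_loss` are placeholders, and the EMA momentum and loss weighting are illustrative.

```python
import torch
import torch.nn.functional as F

def ema_update(teacher, student, momentum=0.999):
    # Exponential moving average of student weights into the teacher
    # (a common, assumed way to keep pseudo labels stable).
    with torch.no_grad():
        for t, s in zip(teacher.parameters(), student.parameters()):
            t.mul_(momentum).add_(s, alpha=1.0 - momentum)

def semi_supervised_step(student, teacher, optimizer,
                         paired_raw, paired_ref, unpaired_raw,
                         perceptual_loss, lambda_unsup=0.5):
    # Supervised branch: paired (possibly synthetic) data.
    pred_sup = student(paired_raw)
    loss_sup = F.l1_loss(pred_sup, paired_ref)

    # Unsupervised branch: the teacher's output serves as a pseudo label,
    # scored by a perceptual criterion standing in for the Subaqueous
    # Perceptual Loss described in the abstract.
    with torch.no_grad():
        pseudo = teacher(unpaired_raw)
    pred_unsup = student(unpaired_raw)
    loss_unsup = F.l1_loss(pred_unsup, pseudo) + perceptual_loss(pred_unsup, pseudo)

    loss = loss_sup + lambda_unsup * loss_unsup
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()
```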
Related papers
- Advanced Underwater Image Quality Enhancement via Hybrid Super-Resolution Convolutional Neural Networks and Multi-Scale Retinex-Based Defogging Techniques [0.0]
The research conducts extensive experiments on real-world underwater datasets to further illustrate the efficacy of the suggested approach.
In real-time underwater applications like marine exploration, underwater robotics, and autonomous underwater vehicles, the combination of deep learning and conventional image processing techniques offers a computationally efficient framework with superior results.
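The Retinex-based defogging half of this hybrid pipeline follows the classic multi-scale Retinex formulation, sketched below; the Gaussian scales, weights, and normalization are illustrative choices, not the paper's settings.

```python
import cv2
import numpy as np

def multi_scale_retinex(img_bgr, sigmas=(15, 80, 250)):
    # Classic multi-scale Retinex: subtract log-domain Gaussian surrounds
    # at several scales and average the results per channel.
    img = img_bgr.astype(np.float64) + 1.0  # avoid log(0)
    msr = np.zeros_like(img)
    for sigma in sigmas:
        blur = cv2.GaussianBlur(img, (0, 0), sigma)
        msr += np.log(img) - np.log(blur)
    msr /= len(sigmas)
    # Stretch back to [0, 255] for display (illustrative normalization).
    msr = (msr - msr.min()) / (msr.max() - msr.min() + 1e-8)
    return (msr * 255).astype(np.uint8)
```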
arXiv Detail & Related papers (2024-10-18T08:40:26Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
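A minimal sketch of the dynamic pseudo-label idea, assuming a simple momentum-based blending rule (the paper's actual update is not reproduced here):

```python
import torch

def update_pseudo_labels(pseudo_labels, predictions, momentum=0.9):
    # Blend current predictions into the stored pseudo labels so the
    # training targets drift toward the network's improving outputs
    # instead of staying fixed. The momentum value is illustrative.
    with torch.no_grad():
        return momentum * pseudo_labels + (1.0 - momentum) * predictions
```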
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- RAUNE-Net: A Residual and Attention-Driven Underwater Image Enhancement Method [2.6645441842326756]
Underwater image enhancement (UIE) poses challenges due to distinctive properties of the underwater environment.
In this paper, we propose a more reliable and reasonable UIE network called RAUNE-Net.
Our method obtains promising objective performance and consistent visual results across various real-world underwater images.
arXiv Detail & Related papers (2023-11-01T03:00:07Z)
- Dual Adversarial Resilience for Collaborating Robust Underwater Image Enhancement and Perception [54.672052775549]
In this work, we introduce a collaborative adversarial resilience network, dubbed CARNet, for underwater image enhancement and subsequent detection tasks.
We propose a synchronized attack training strategy with both visual-driven and perception-driven attacks enabling the network to discern and remove various types of attacks.
Experiments demonstrate that the proposed method produces visually appealing enhanced images and achieves, on average, 6.71% higher detection mAP than state-of-the-art methods.
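As a rough illustration of synchronized attack training, the sketch below crafts a single FGSM-style perturbation from the sum of a visual-driven and a perception-driven loss; the enhancer/detector interfaces and the step size are assumptions, not CARNet's formulation.

```python
import torch
import torch.nn.functional as F

def synchronized_attack(enhancer, detector, raw, reference,
                        det_loss_fn, targets, epsilon=2.0 / 255):
    # One perturbation that degrades both the enhancement output
    # (visual-driven) and the downstream detector (perception-driven).
    # FGSM is used here purely for brevity.
    raw_adv = raw.clone().detach().requires_grad_(True)
    enhanced = enhancer(raw_adv)
    visual_loss = F.l1_loss(enhanced, reference)
    perception_loss = det_loss_fn(detector(enhanced), targets)
    (visual_loss + perception_loss).backward()
    with torch.no_grad():
        raw_adv = raw + epsilon * raw_adv.grad.sign()
    return raw_adv.clamp(0, 1).detach()  # adversarial input for training
```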
arXiv Detail & Related papers (2023-09-03T06:52:05Z)
- Feature Attention Network (FA-Net): A Deep-Learning Based Approach for Underwater Single Image Enhancement [0.8694819854201992]
We propose a deep learning and feature-attention-based end-to-end network (FA-Net) to solve this problem.
In particular, we propose a Residual Feature Attention Block (RFAB) containing the channel attention, pixel attention, and residual learning mechanism with long and short skip connections.
RFAB allows the network to focus on learning high-frequency information while skipping low-frequency information on multi-hop connections.
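A hedged re-creation of such a block is sketched below; the channel count, reduction ratio, and exact ordering of the attentions are assumptions rather than FA-Net's published configuration.

```python
import torch
import torch.nn as nn

class RFABSketch(nn.Module):
    # Rough sketch of a residual feature attention block: convolutional body,
    # channel attention, pixel attention, and a short skip connection.
    def __init__(self, channels=64, reduction=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1))
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1), nn.Sigmoid())
        self.pixel_att = nn.Sequential(
            nn.Conv2d(channels, 1, 1), nn.Sigmoid())

    def forward(self, x):
        feat = self.body(x)
        feat = feat * self.channel_att(feat)   # reweight channels
        feat = feat * self.pixel_att(feat)     # reweight spatial positions
        return x + feat                        # short skip connection
```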
arXiv Detail & Related papers (2023-08-30T08:56:36Z)
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing images has become a common concern, and the task of underwater image enhancement (UIE) has emerged to meet this need.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
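The dual-discriminator generator objective can be sketched roughly as follows; the division of roles between the two critics and the loss weights are assumptions for illustration, not PUGAN's exact design.

```python
import torch
import torch.nn.functional as F

def dual_discriminator_g_loss(enhanced, reference, d_style, d_content,
                              lambda_rec=10.0):
    # Generator loss with two critics: one judging overall realism ("style")
    # and one judging structure/content, plus an L1 reconstruction term.
    logits_style = d_style(enhanced)
    logits_content = d_content(enhanced)
    adv_style = F.binary_cross_entropy_with_logits(
        logits_style, torch.ones_like(logits_style))
    adv_content = F.binary_cross_entropy_with_logits(
        logits_content, torch.ones_like(logits_content))
    rec = F.l1_loss(enhanced, reference)
    return adv_style + adv_content + lambda_rec * rec
```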
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
- Semantic-aware Texture-Structure Feature Collaboration for Underwater Image Enhancement [58.075720488942125]
Underwater image enhancement has become an attractive topic as a significant technology in marine engineering and aquatic robotics.
We develop an efficient and compact enhancement network in collaboration with a high-level semantic-aware pretrained model.
We also apply the proposed algorithm to the underwater salient object detection task to reveal the favorable semantic-aware ability for high-level vision tasks.
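A minimal sketch of feature collaboration with a high-level pretrained model is given below; `semantic_model` stands in for whatever frozen backbone is used, and the L1 feature comparison is an assumption rather than the paper's loss.

```python
import torch
import torch.nn.functional as F

def semantic_collaboration_loss(enhanced, reference, semantic_model):
    # Compare features from a frozen high-level pretrained model so that
    # enhancement is steered toward semantically meaningful content.
    with torch.no_grad():
        feat_ref = semantic_model(reference)
    feat_enh = semantic_model(enhanced)  # gradients flow to the enhancer only
    return F.l1_loss(feat_enh, feat_ref)
```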
arXiv Detail & Related papers (2022-11-19T07:50:34Z)
- Wavelength-based Attributed Deep Neural Network for Underwater Image Restoration [9.378355457555319]
This paper shows that attributing the right receptive field size (context) to each color channel based on its traversing range in water may lead to a substantial performance gain.
As a second novelty, we have incorporated an attentive skip mechanism to adaptively refine the learned multi-contextual features.
The proposed framework, called Deep WaveNet, is optimized using the traditional pixel-wise and feature-based cost functions.
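The wavelength-attributed idea can be sketched as per-channel branches with different kernel sizes; which channel receives which kernel size below is an illustrative assumption, not necessarily Deep WaveNet's assignment.

```python
import torch
import torch.nn as nn

class WavelengthBranches(nn.Module):
    # Each color channel gets its own convolutional branch with a different
    # receptive field, reflecting how differently R, G, and B travel in water.
    def __init__(self, features=32):
        super().__init__()
        self.red = nn.Conv2d(1, features, kernel_size=7, padding=3)
        self.green = nn.Conv2d(1, features, kernel_size=5, padding=2)
        self.blue = nn.Conv2d(1, features, kernel_size=3, padding=1)

    def forward(self, rgb):
        r, g, b = rgb[:, 0:1], rgb[:, 1:2], rgb[:, 2:3]
        return torch.cat([self.red(r), self.green(g), self.blue(b)], dim=1)
```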
arXiv Detail & Related papers (2021-06-15T06:47:51Z)
- Underwater Image Enhancement via Medium Transmission-Guided Multi-Color Space Embedding [88.46682991985907]
We present an underwater image enhancement network via medium transmission-guided multi-color space embedding, called Ucolor.
Our network can effectively improve the visual quality of underwater images by exploiting multi-color space embedding.
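A minimal sketch of a multi-color-space input of the kind such a network consumes; the per-channel normalization is illustrative, and the network that embeds the stacked representation is omitted.

```python
import cv2
import numpy as np

def multi_color_space_stack(img_bgr):
    # Stack RGB, HSV, and Lab representations along the channel axis
    # to form a multi-color-space input (H x W x 9).
    rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv /= np.array([179.0, 255.0, 255.0], dtype=np.float32)  # OpenCV HSV ranges
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB).astype(np.float32) / 255.0
    return np.concatenate([rgb, hsv, lab], axis=2)
```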
arXiv Detail & Related papers (2021-04-27T07:35:30Z)
- Underwater Image Enhancement via Learning Water Type Desensitized Representations [29.05252230912826]
We present a novel underwater image enhancement (UIE) framework termed SCNet to address the variability introduced by diverse water types.
SCNet is based on normalization schemes across both spatial and channel dimensions with the key idea of learning water type desensitized features.
Experimental results on two real-world UIE datasets show that the proposed approach can successfully enhance images with diverse water types.
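A rough sketch of normalization across both spatial and channel dimensions is given below; how the two normalized branches are combined here is an assumption, not SCNet's published design.

```python
import torch
import torch.nn as nn

class SpatialChannelNorm(nn.Module):
    # Normalize features across spatial positions (instance-norm style) and
    # across channels at each position, to suppress water-type-specific
    # statistics. The 50/50 combination of the two branches is illustrative.
    def __init__(self, channels, eps=1e-5):
        super().__init__()
        self.spatial_norm = nn.InstanceNorm2d(channels, affine=True)
        self.eps = eps

    def forward(self, x):
        spatial = self.spatial_norm(x)
        mean = x.mean(dim=1, keepdim=True)              # per-pixel channel mean
        var = x.var(dim=1, keepdim=True, unbiased=False)
        channel = (x - mean) / torch.sqrt(var + self.eps)
        return 0.5 * (spatial + channel)
```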
arXiv Detail & Related papers (2021-02-01T07:34:54Z)
- Progressive Training of Multi-level Wavelet Residual Networks for Image Denoising [80.10533234415237]
This paper presents a multi-level wavelet residual network (MWRN) architecture as well as a progressive training scheme to improve image denoising performance.
Experiments on both synthetic and real-world noisy images show that our PT-MWRN performs favorably against the state-of-the-art denoising methods.
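As background for the multi-level wavelet decomposition MWRN operates on, the toy sketch below decomposes an image with `pywt`, soft-thresholds the detail sub-bands, and reconstructs; MWRN instead learns residuals on these sub-bands, and the threshold here is arbitrary.

```python
import numpy as np
import pywt

def wavelet_shrink_denoise(img_gray, wavelet="haar", levels=3, thresh=0.04):
    # Multi-level 2D wavelet decomposition, soft-thresholding of every
    # detail sub-band, then reconstruction. Assumes a float image in [0, 1].
    coeffs = pywt.wavedec2(img_gray, wavelet, level=levels)
    denoised = [coeffs[0]]  # keep the coarse approximation unchanged
    for details in coeffs[1:]:
        denoised.append(tuple(pywt.threshold(d, thresh, mode="soft")
                              for d in details))
    return pywt.waverec2(denoised, wavelet)
```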
arXiv Detail & Related papers (2020-10-23T14:14:00Z)