MLFcGAN: Multi-level Feature Fusion based Conditional GAN for Underwater
Image Color Correction
- URL: http://arxiv.org/abs/2002.05333v1
- Date: Thu, 13 Feb 2020 04:15:10 GMT
- Title: MLFcGAN: Multi-level Feature Fusion based Conditional GAN for Underwater
Image Color Correction
- Authors: Xiaodong Liu, Zhi Gao, and Ben M. Chen
- Abstract summary: We propose a deep multi-scale feature fusion net based on the conditional generative adversarial network (GAN) for underwater image color correction.
In our network, multi-scale features are extracted first, followed by augmenting local features on each scale with global features.
This design was verified to facilitate more effective and faster network learning, resulting in better performance in both color correction and detail preservation.
- Score: 35.16835830904171
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Color correction for underwater images has received increasing
interest, due to its critical role in enabling mature vision algorithms to work in
underwater scenarios. Inspired by the stunning success of deep convolutional
underwater scenarios. Inspired by the stunning success of deep convolutional
neural networks (DCNNs) techniques in many vision tasks, especially the
strength in extracting features in multiple scales, we propose a deep
multi-scale feature fusion net based on the conditional generative adversarial
network (GAN) for underwater image color correction. In our network,
multi-scale features are extracted first, followed by augmenting local features
on each scale with global features. This design was verified to facilitate more
effective and faster network learning, resulting in better performance in both
color correction and detail preservation. We conducted extensive experiments
and compared our method with state-of-the-art approaches quantitatively and
qualitatively, showing that it achieves significant improvements.
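
To make the fusion idea concrete, below is a minimal PyTorch-style sketch of how local features at each encoder scale could be augmented with a globally pooled descriptor before decoding. All module names, channel widths, and the concatenation-based fusion operator are illustrative assumptions, not the authors' released implementation; the sketch only mirrors the multi-level global-local fusion described in the abstract.

```python
# Minimal sketch (not the authors' code) of multi-level global-local feature
# fusion for a cGAN generator: local features at every encoder scale are
# augmented with a globally pooled descriptor taken from the deepest scale.
import torch
import torch.nn as nn


class GlobalLocalFusion(nn.Module):
    """Concatenate a broadcast global descriptor with a local feature map."""

    def __init__(self, local_ch: int, global_ch: int):
        super().__init__()
        # 1x1 conv merges the broadcast global vector back to local_ch channels.
        self.merge = nn.Conv2d(local_ch + global_ch, local_ch, kernel_size=1)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, local_feat, global_feat):
        b, _, h, w = local_feat.shape
        # Broadcast the global descriptor over this scale's spatial grid.
        g = global_feat.view(b, -1, 1, 1).expand(-1, -1, h, w)
        return self.act(self.merge(torch.cat([local_feat, g], dim=1)))


class TinyFusionEncoder(nn.Module):
    """Three-scale encoder; the deepest scale provides the global context."""

    def __init__(self, in_ch: int = 3, widths=(32, 64, 128)):
        super().__init__()
        chs = [in_ch, *widths]
        self.stages = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(chs[i], chs[i + 1], 3, stride=2, padding=1),
                nn.LeakyReLU(0.2, inplace=True),
            )
            for i in range(len(widths))
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # global descriptor of deepest scale
        self.fusers = nn.ModuleList(GlobalLocalFusion(w, widths[-1]) for w in widths)

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        global_feat = self.pool(feats[-1])  # shape (B, widths[-1], 1, 1)
        # The same global descriptor is fused into every scale's local features.
        return [fuse(f, global_feat) for fuse, f in zip(self.fusers, feats)]


if __name__ == "__main__":
    fused = TinyFusionEncoder()(torch.randn(1, 3, 256, 256))
    print([f.shape for f in fused])  # one fused map per scale, channels preserved
```

In the conditional GAN setting described above, the fused multi-scale features would feed a decoder that produces the color-corrected image, while a discriminator conditioned on the raw underwater input distinguishes corrected outputs from reference images.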
Related papers
- UIE-UnFold: Deep Unfolding Network with Color Priors and Vision Transformer for Underwater Image Enhancement [27.535028176427623]
Underwater image enhancement (UIE) plays a crucial role in various marine applications.
Current learning-based approaches frequently lack explicit prior knowledge about the physical processes involved in underwater image formation (a standard formation model is sketched after this list).
This paper proposes a novel deep unfolding network (DUN) for UIE that integrates color priors and inter-stage feature incorporation.
arXiv Detail & Related papers (2024-08-20T08:48:33Z) - UWFormer: Underwater Image Enhancement via a Semi-Supervised Multi-Scale Transformer [26.15238399758745]
Underwater images often exhibit poor quality, distorted color balance and low contrast.
Current deep learning methods rely on convolutional neural networks (CNNs) that lack multi-scale enhancement.
We propose a Multi-scale Transformer-based Network for enhancing images at multiple frequencies via semi-supervised learning.
arXiv Detail & Related papers (2023-10-31T06:19:09Z) - PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with
Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing images has become a common concern, and the task of underwater image enhancement (UIE) has emerged in response.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z) - Semantic-aware Texture-Structure Feature Collaboration for Underwater
Image Enhancement [58.075720488942125]
Underwater image enhancement has become an attractive topic as a significant technology in marine engineering and aquatic robotics.
We develop an efficient and compact enhancement network in collaboration with a high-level semantic-aware pretrained model.
We also apply the proposed algorithm to the underwater salient object detection task to reveal the favorable semantic-aware ability for high-level vision tasks.
arXiv Detail & Related papers (2022-11-19T07:50:34Z) - Wavelength-based Attributed Deep Neural Network for Underwater Image
Restoration [9.378355457555319]
This paper shows that attributing the right receptive field size (context) based on the traversing range of the color channel may lead to a substantial performance gain.
As a second novelty, we have incorporated an attentive skip mechanism to adaptively refine the learned multi-contextual features.
The proposed framework, called Deep WaveNet, is optimized using the traditional pixel-wise and feature-based cost functions.
arXiv Detail & Related papers (2021-06-15T06:47:51Z) - Underwater Image Enhancement via Medium Transmission-Guided Multi-Color
Space Embedding [88.46682991985907]
We present an underwater image enhancement network via medium transmission-guided multi-color space embedding, called Ucolor.
Our network can effectively improve the visual quality of underwater images by exploiting multiple color spaces embedding.
arXiv Detail & Related papers (2021-04-27T07:35:30Z) - Interpretable Detail-Fidelity Attention Network for Single Image
Super-Resolution [89.1947690981471]
We propose a purposeful and interpretable detail-fidelity attention network to progressively process smooth regions and details in a divide-and-conquer manner.
In particular, we propose Hessian filtering for an interpretable feature representation that is well suited to detail inference.
Experiments demonstrate that the proposed methods achieve superior performances over the state-of-the-art methods.
arXiv Detail & Related papers (2020-09-28T08:31:23Z) - Single Image Deraining via Scale-space Invariant Attention Neural
Network [58.5284246878277]
We tackle the notion of scale, which deals with visual changes in the appearance of rain streaks with respect to the camera.
We propose to represent the multi-scale correlation in convolutional feature domain, which is more compact and robust than that in pixel domain.
In this way, we summarize the most activated presence of feature maps as the salient features.
arXiv Detail & Related papers (2020-06-09T04:59:26Z) - Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for the image restoration task.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.