LAFFNet: A Lightweight Adaptive Feature Fusion Network for Underwater
Image Enhancement
- URL: http://arxiv.org/abs/2105.01299v2
- Date: Wed, 5 May 2021 02:16:23 GMT
- Title: LAFFNet: A Lightweight Adaptive Feature Fusion Network for Underwater
Image Enhancement
- Authors: Hao-Hsiang Yang and Kuan-Chih Huang and Wei-Ting Chen
- Abstract summary: We propose a lightweight adaptive feature fusion network (LAFFNet) for underwater image enhancement.
Our method reduces the number of parameters from 2.5M to 0.15M yet outperforms state-of-the-art algorithms in extensive experiments.
- Score: 6.338178373376447
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Underwater image enhancement is an important low-level computer vision task
for autonomous underwater vehicles and remotely operated vehicles to explore
and understand underwater environments. Recently, deep convolutional neural
networks (CNNs) have been applied successfully to many computer vision
problems, including underwater image enhancement. There are many deep-learning-based
methods with impressive performance for underwater image enhancement, but their
memory and model parameter costs are hindrances in practical application. To
address this issue, we propose a lightweight adaptive feature fusion network
(LAFFNet). The model is an encoder-decoder network with multiple adaptive
feature fusion (AAF) modules. Each AAF module subsumes multiple branches with
different kernel sizes to generate multi-scale feature maps, and channel
attention is used to merge these feature maps adaptively. Our method reduces
the number of parameters from 2.5M to 0.15M (around 94% reduction) yet
outperforms state-of-the-art algorithms in extensive experiments. Furthermore,
we demonstrate that LAFFNet effectively improves high-level vision tasks like
salient object detection and single-image depth estimation.
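The AAF idea described above, multi-scale branch outputs merged by adaptive attention weights, can be sketched numerically. This is a minimal, framework-free illustration, not the authors' implementation: each "branch" stands in for one kernel-size path, and the attention score per branch is assumed to be its mean activation (LAFFNet learns these weights with a channel-attention sub-network).

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def adaptive_fusion(branches):
    """Blend multi-scale branch features with attention weights.

    `branches` is a list of equal-length feature vectors, one per
    kernel-size branch. Scoring a branch by its mean activation is a
    simplifying assumption made for this sketch.
    """
    scores = [sum(b) / len(b) for b in branches]
    weights = softmax(scores)
    fused = [sum(w * b[i] for w, b in zip(weights, branches))
             for i in range(len(branches[0]))]
    return weights, fused

# Three toy branches standing in for e.g. 3x3 / 5x5 / 7x7 kernel outputs.
branches = [[1.0, 2.0], [2.0, 4.0], [0.5, 1.0]]
weights, fused = adaptive_fusion(branches)
print(weights)  # attention weights, summing to 1
print(fused)    # attention-weighted blend of the branches

# Sanity check on the abstract's figure: 1 - 0.15M / 2.5M = 0.94, i.e. ~94%.
assert round(1 - 0.15 / 2.5, 2) == 0.94
```

In the full model this fusion runs per channel on feature maps rather than on flat vectors, but the weighting logic is the same.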
Related papers
- LU2Net: A Lightweight Network for Real-time Underwater Image Enhancement [4.353142366661057]
Lightweight Underwater Unet (LU2Net) is a novel U-shape network designed specifically for real-time enhancement of underwater images.
LU2Net is capable of providing well-enhanced underwater images at a speed 8 times faster than the current state-of-the-art underwater image enhancement method.
arXiv Detail & Related papers (2024-06-21T08:33:13Z)
- Multi-scale Unified Network for Image Classification [33.560003528712414]
CNNs face notable challenges in performance and computational efficiency when dealing with real-world, multi-scale image inputs.
We propose Multi-scale Unified Network (MUSN) consisting of multi-scales, a unified network, and scale-invariant constraint.
MUSN yields an accuracy increase up to 44.53% and diminishes FLOPs by 7.01-16.13% in multi-scale scenarios.
arXiv Detail & Related papers (2024-03-27T06:40:26Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- UWFormer: Underwater Image Enhancement via a Semi-Supervised Multi-Scale Transformer [26.15238399758745]
Underwater images often exhibit poor quality, distorted color balance and low contrast.
Current deep learning methods rely on convolutional neural networks (CNNs) that lack multi-scale enhancement.
We propose a Multi-scale Transformer-based Network for enhancing images at multiple frequencies via semi-supervised learning.
arXiv Detail & Related papers (2023-10-31T06:19:09Z)
- HAT: Hybrid Attention Transformer for Image Restoration [61.74223315807691]
Transformer-based methods have shown impressive performance in image restoration tasks, such as image super-resolution and denoising.
We propose a new Hybrid Attention Transformer (HAT) to activate more input pixels for better restoration.
Our HAT achieves state-of-the-art performance both quantitatively and qualitatively.
arXiv Detail & Related papers (2023-09-11T05:17:55Z)
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing images has become a common concern, and the task of underwater image enhancement (UIE) has emerged accordingly.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
- Semantic-aware Texture-Structure Feature Collaboration for Underwater Image Enhancement [58.075720488942125]
Underwater image enhancement has become an attractive topic as a significant technology in marine engineering and aquatic robotics.
We develop an efficient and compact enhancement network in collaboration with a high-level semantic-aware pretrained model.
We also apply the proposed algorithm to the underwater salient object detection task to reveal the favorable semantic-aware ability for high-level vision tasks.
arXiv Detail & Related papers (2022-11-19T07:50:34Z)
- EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for Mobile Vision Applications [68.35683849098105]
We introduce split depth-wise transpose attention (SDTA) encoder that splits input tensors into multiple channel groups.
Our EdgeNeXt model with 1.3M parameters achieves 71.2% top-1 accuracy on ImageNet-1K.
Our EdgeNeXt model with 5.6M parameters achieves 79.4% top-1 accuracy on ImageNet-1K.
arXiv Detail & Related papers (2022-06-21T17:59:56Z)
- Wavelength-based Attributed Deep Neural Network for Underwater Image Restoration [9.378355457555319]
This paper shows that attributing the right receptive field size (context) based on the traversing range of the color channel may lead to a substantial performance gain.
As a second novelty, we have incorporated an attentive skip mechanism to adaptively refine the learned multi-contextual features.
The proposed framework, called Deep WaveNet, is optimized using the traditional pixel-wise and feature-based cost functions.
arXiv Detail & Related papers (2021-06-15T06:47:51Z)
- Lightweight Single-Image Super-Resolution Network with Attentive Auxiliary Feature Learning [73.75457731689858]
We develop a computation-efficient yet accurate network based on the proposed attentive auxiliary features (A$^2$F) for SISR.
Experimental results on large-scale datasets demonstrate the effectiveness of the proposed model against state-of-the-art (SOTA) SR methods.
arXiv Detail & Related papers (2020-11-13T06:01:46Z)
- MLFcGAN: Multi-level Feature Fusion based Conditional GAN for Underwater Image Color Correction [35.16835830904171]
We propose a deep multi-scale feature fusion net based on the conditional generative adversarial network (GAN) for underwater image color correction.
In our network, multi-scale features are extracted first, followed by augmenting local features on each scale with global features.
This design was verified to facilitate more effective and faster network learning, resulting in better performance in both color correction and detail preservation.
arXiv Detail & Related papers (2020-02-13T04:15:10Z)
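The MLFcGAN entry's fusion step, augmenting local features on each scale with a global feature, can be sketched in a few lines. This is a hypothetical illustration under a simplifying assumption: the global descriptor is taken as the mean over all scales' activations, whereas MLFcGAN learns its global features inside the GAN generator.

```python
def augment_with_global(scales):
    """Append a globally pooled statistic to each scale's local features.

    `scales` maps a scale name to its local feature vector. The global
    descriptor used here (mean of all activations across scales) is an
    assumption made for this sketch, not the paper's learned features.
    """
    all_vals = [v for feat in scales.values() for v in feat]
    global_feat = sum(all_vals) / len(all_vals)
    # Concatenate the shared global descriptor onto every scale.
    return {name: feat + [global_feat] for name, feat in scales.items()}

# Two toy scales standing in for coarse and fine feature maps.
scales = {"s1": [0.2, 0.4], "s2": [0.6, 0.8]}
augmented = augment_with_global(scales)
print(augmented)  # each scale gains the same global descriptor (~0.5)
```

Concatenating a shared global context onto each scale is what lets per-scale processing see image-wide statistics such as the overall color cast, which is why this design helps color correction.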
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.