Synergistic Multiscale Detail Refinement via Intrinsic Supervision for
Underwater Image Enhancement
- URL: http://arxiv.org/abs/2308.11932v4
- Date: Thu, 18 Jan 2024 13:57:05 GMT
- Title: Synergistic Multiscale Detail Refinement via Intrinsic Supervision for
Underwater Image Enhancement
- Authors: Dehuan Zhang, Jingchun Zhou, ChunLe Guo, Weishi Zhang, Chongyi Li
- Abstract summary: We present synergistic multi-scale detail refinement via intrinsic supervision (SMDR-IS), a multi-stage framework for enhancing underwater scene details.
The ASISF module can precisely control and guide feature transmission across multi-degradation stages.
The Bifocal Intrinsic-Context Attention Module (BICA) efficiently exploits multi-scale scene information in images.
- Score: 39.208417033777415
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visually restoring underwater scenes primarily involves mitigating
interference from underwater media. Existing methods ignore the inherent
scale-related characteristics in underwater scenes. Therefore, we present
synergistic multi-scale detail refinement via intrinsic supervision (SMDR-IS),
a multi-stage framework for enhancing underwater scene details. The
low-degradation stage from the original images furnishes the original stage
with multi-scale details, achieved through feature propagation using the
Adaptive Selective Intrinsic Supervised Feature (ASISF) module. By using
intrinsic supervision, the ASISF module can precisely control and guide feature
transmission across multi-degradation stages, enhancing multi-scale detail
refinement and minimizing the interference from irrelevant information in the
low-degradation stage. In the multi-degradation encoder-decoder framework of
SMDR-IS, we introduce the Bifocal Intrinsic-Context Attention Module (BICA).
Based on the intrinsic supervision principles, BICA efficiently exploits
multi-scale scene information in images. BICA directs higher-resolution spaces
by tapping into the insights of lower-resolution ones, underscoring the pivotal
role of spatial contextual relationships in underwater image restoration.
Throughout training, the inclusion of a multi-degradation loss function can
enhance the network, allowing it to adeptly extract information across diverse
scales. When benchmarked against state-of-the-art methods, SMDR-IS consistently
showcases superior performance. The code is publicly available at:
https://github.com/zhoujingchun03/SMDR-IS.
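The abstract is prose-only, but its two core ideas, lower-resolution context guiding higher-resolution features (as in BICA) and supervision applied across several degradation scales, can be illustrated with a short PyTorch sketch. This is a hedged illustration under assumed shapes and names (CrossResolutionAttention and multi_scale_l1_loss are hypothetical), not the released SMDR-IS implementation; see the GitHub repository linked above for the authors' code.

```python
# Illustrative sketch only: cross-resolution attention in the spirit of BICA,
# plus a multi-scale ("multi-degradation") L1 loss. Names and shapes are
# assumptions, not the SMDR-IS release.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossResolutionAttention(nn.Module):
    """High-res features attend to low-res context: queries come from the
    higher-resolution stage, keys/values from an upsampled lower-resolution one."""
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.kv = nn.Conv2d(channels, channels * 2, 1)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.proj = nn.Conv2d(channels, channels, 1)

    def forward(self, hi: torch.Tensor, lo: torch.Tensor) -> torch.Tensor:
        b, c, h, w = hi.shape
        lo_up = F.interpolate(lo, size=(h, w), mode="bilinear", align_corners=False)
        q = self.q(hi).flatten(2).transpose(1, 2)            # (B, HW, C)
        k, v = self.kv(lo_up).chunk(2, dim=1)
        k = k.flatten(2).transpose(1, 2)
        v = v.flatten(2).transpose(1, 2)
        ctx, _ = self.attn(q, k, v)                           # low-res context guides high-res
        ctx = ctx.transpose(1, 2).reshape(b, c, h, w)
        return hi + self.proj(ctx)                            # residual refinement

def multi_scale_l1_loss(pred, target, scales=(1, 2, 4, 8)):
    """Compare prediction and target at several resolutions so the network is
    pushed to recover detail at every scale."""
    loss = 0.0
    for s in scales:
        p = F.avg_pool2d(pred, s) if s > 1 else pred
        t = F.avg_pool2d(target, s) if s > 1 else target
        loss = loss + F.l1_loss(p, t)
    return loss / len(scales)
```

In such a design, a CrossResolutionAttention block would sit at each decoder level, taking that level's features and the next coarser stage's features, while multi_scale_l1_loss supervises the final output against the reference image at several downsampled resolutions.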
Related papers
- UniUIR: Considering Underwater Image Restoration as An All-in-One Learner [49.35128836844725]
We propose a Universal Underwater Image Restoration method, termed as UniUIR.
To decouple degradation-specific issues and explore the inter-correlations among various degradations in UIR task, we designed the Mamba Mixture-of-Experts module.
This module extracts degradation prior information in both spatial and frequency domains, and adaptively selects the most appropriate task-specific prompts.
arXiv Detail & Related papers (2025-01-22T16:10:42Z) - HUPE: Heuristic Underwater Perceptual Enhancement with Semantic Collaborative Learning [62.264673293638175]
Existing underwater image enhancement methods primarily focus on improving visual quality while overlooking practical implications.
We propose an invertible network for underwater perception enhancement, dubbed HUPE, which enhances visual quality and demonstrates flexibility in handling other downstream tasks.
arXiv Detail & Related papers (2024-11-27T12:37:03Z) - Boosting Visual Recognition in Real-world Degradations via Unsupervised Feature Enhancement Module with Deep Channel Prior [22.323789227447755]
Fog, low-light, and motion blur degrade image quality and pose threats to the safety of autonomous driving.
This work proposes a novel Deep Channel Prior (DCP) for degraded visual recognition.
Based on this, a novel plug-and-play Unsupervised Feature Enhancement Module (UFEM) is proposed to achieve unsupervised feature correction.
arXiv Detail & Related papers (2024-04-02T07:16:56Z) - UWFormer: Underwater Image Enhancement via a Semi-Supervised Multi-Scale Transformer [26.15238399758745]
Underwater images often exhibit poor quality, distorted color balance and low contrast.
Current deep learning methods rely on Convolutional Neural Networks (CNNs) that lack multi-scale enhancement.
We propose a Multi-scale Transformer-based Network for enhancing images at multiple frequencies via semi-supervised learning.
arXiv Detail & Related papers (2023-10-31T06:19:09Z) - Semantic-aware Texture-Structure Feature Collaboration for Underwater
Image Enhancement [58.075720488942125]
Underwater image enhancement has become an attractive topic as a significant technology in marine engineering and aquatic robotics.
We develop an efficient and compact enhancement network in collaboration with a high-level semantic-aware pretrained model.
We also apply the proposed algorithm to the underwater salient object detection task to reveal the favorable semantic-aware ability for high-level vision tasks.
arXiv Detail & Related papers (2022-11-19T07:50:34Z) - SGUIE-Net: Semantic Attention Guided Underwater Image Enhancement with
Multi-Scale Perception [18.87163028415309]
We propose a novel underwater image enhancement network, called SGUIE-Net.
We introduce semantic information as high-level guidance across different images that share common semantic regions.
This strategy helps to achieve robust and visually pleasant enhancements to different semantic objects.
arXiv Detail & Related papers (2022-01-08T14:03:24Z) - High-resolution Depth Maps Imaging via Attention-based Hierarchical
Multi-modal Fusion [84.24973877109181]
We propose a novel attention-based hierarchical multi-modal fusion network for guided DSR.
We show that our approach outperforms state-of-the-art methods in terms of reconstruction accuracy, running speed and memory efficiency.
arXiv Detail & Related papers (2021-04-04T03:28:33Z) - ADRN: Attention-based Deep Residual Network for Hyperspectral Image
Denoising [52.01041506447195]
We propose an attention-based deep residual network to learn a mapping from noisy HSI to the clean one.
Experimental results demonstrate that our proposed ADRN scheme outperforms the state-of-the-art methods both in quantitative and visual evaluations.
arXiv Detail & Related papers (2020-03-04T08:36:27Z)