Underwater Image Quality Assessment: A Perceptual Framework Guided by Physical Imaging
- URL: http://arxiv.org/abs/2412.15527v1
- Date: Fri, 20 Dec 2024 03:31:45 GMT
- Title: Underwater Image Quality Assessment: A Perceptual Framework Guided by Physical Imaging
- Authors: Weizhi Xian, Mingliang Zhou, Leong Hou U, Lang Shujun, Bin Fang, Tao Xiang, Zhaowei Shang
- Abstract summary: We propose a physically imaging-guided framework for underwater image quality assessment (UIQA) called PIGUIQA.
We incorporate advanced physics-based underwater imaging estimation into our method and define distortion metrics that measure the impact of direct transmission attenuation and backwards scattering on image quality.
PIGUIQA achieves state-of-the-art performance in underwater image quality prediction and exhibits strong generalizability.
- Score: 52.860312888450096
- Abstract: In this paper, we propose a physically imaging-guided framework for underwater image quality assessment (UIQA), called PIGUIQA. First, we formulate UIQA as a comprehensive problem that considers the combined effects of direct transmission attenuation and backwards scattering on image perception. On this basis, we incorporate advanced physics-based underwater imaging estimation into our method and define distortion metrics that measure the impact of direct transmission attenuation and backwards scattering on image quality. Second, acknowledging the significant content differences across various regions of an image and the varying perceptual sensitivity to distortions in these regions, we design a local perceptual module on the basis of the neighborhood attention mechanism. This module effectively captures subtle features in images, thereby enhancing the adaptive perception of distortions on the basis of local information. Finally, by employing a global perceptual module to further integrate the original image content with underwater image distortion information, the proposed model can accurately predict the image quality score. Comprehensive experiments demonstrate that PIGUIQA achieves state-of-the-art performance in underwater image quality prediction and exhibits strong generalizability. The code for PIGUIQA is available on https://anonymous.4open.science/r/PIGUIQA-A465/
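The distortion metrics described above build on the widely used revised underwater image formation model, in which the observed signal is a depth-dependent mix of direct transmission and backscatter. A minimal sketch of that model follows (an illustrative assumption, not PIGUIQA's actual implementation; the `degrade` function, the attenuation coefficients, and the veiling-light values are all hypothetical):

```python
import numpy as np

# Simplified revised underwater image formation model:
#   I_c(x) = J_c(x) * t_c(x) + B_c * (1 - t_c(x)),  c in {R, G, B}
# where J is the clear scene, t_c = exp(-beta_c * d) is the per-channel
# direct transmission, and B_c is the veiling (backscatter) light.

def degrade(J, depth, beta, B):
    """Apply attenuation and backscatter to a clean image J.

    J:     (H, W, 3) float array in [0, 1], clean scene radiance
    depth: (H, W) scene distance in metres
    beta:  (3,) per-channel attenuation coefficients (red decays fastest)
    B:     (3,) background (veiling) light colour
    """
    t = np.exp(-depth[..., None] * beta)   # direct transmission t_c(x)
    return J * t + B * (1.0 - t)           # attenuated signal + backscatter

# Toy example: a mid-grey scene observed through 5 m of water.
J = np.full((4, 4, 3), 0.5)
depth = np.full((4, 4), 5.0)
beta = np.array([0.40, 0.10, 0.06])        # red attenuates most in water
B = np.array([0.05, 0.35, 0.45])           # bluish-green veiling light
I = degrade(J, depth, beta, B)
```

With these coefficients the red channel is suppressed far more than green and blue, reproducing the characteristic bluish-green cast that the paper's attenuation and backscatter distortion metrics are designed to quantify.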
Related papers
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Learning Heavily-Degraded Prior for Underwater Object Detection [59.5084433933765]
This paper seeks transferable prior knowledge from detector-friendly images.
It is based on the statistical observation that heavily degraded regions of detector-friendly underwater images (DFUI) and raw underwater images exhibit evident feature distribution gaps.
Our method, despite higher speed and fewer parameters, still outperforms transformer-based detectors.
arXiv Detail & Related papers (2023-08-24T12:32:46Z)
- WaterFlow: Heuristic Normalizing Flow for Underwater Image Enhancement and Beyond [52.27796682972484]
Existing underwater image enhancement methods focus mainly on image quality improvement, ignoring their effect on downstream tasks in practice.
We propose a normalizing flow for detection-driven underwater image enhancement, dubbed WaterFlow.
For differentiability and interpretability, we incorporate the prior into the data-driven mapping procedure.
arXiv Detail & Related papers (2023-08-02T04:17:35Z)
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing images has become a common concern, and the task of underwater image enhancement (UIE) has emerged to meet this need.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
- DeepWSD: Projecting Degradations in Perceptual Space to Wasserstein Distance in Deep Feature Space [67.07476542850566]
We propose to model the quality degradation in perceptual space from a statistical distribution perspective.
The quality is measured based upon the Wasserstein distance in the deep feature domain.
The deep Wasserstein distance (DeepWSD) performed on features from neural networks enjoys better interpretability of the quality contamination.
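The core idea can be illustrated with a toy sketch (an illustrative assumption, not the authors' code): treat each channel's activations from a shared feature layer as an empirical distribution and compare reference and distorted images via the 1-D Wasserstein-1 distance, here computed directly from sorted order statistics of equal-size samples:

```python
import numpy as np

def w1(u, v):
    # 1-D Wasserstein-1 distance between equal-size empirical samples:
    # sort both samples and average the absolute differences of their
    # order statistics (valid because both carry uniform weights).
    return np.abs(np.sort(u) - np.sort(v)).mean()

def feature_wsd(ref_feats, dist_feats):
    """Average per-channel Wasserstein distance between (C, H, W) feature maps."""
    scores = [w1(r.ravel(), d.ravel()) for r, d in zip(ref_feats, dist_feats)]
    return float(np.mean(scores))

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, size=(8, 16, 16))       # stand-in for deep features
dist = ref + rng.normal(0.0, 0.5, size=ref.shape)  # mildly degraded copy
score_same = feature_wsd(ref, ref)                 # identical inputs
score_degraded = feature_wsd(ref, dist)            # distorted input
```

In DeepWSD the distributions come from pretrained-CNN feature maps rather than the random arrays standing in for them here; the distance is zero for identical inputs and grows with the degradation.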
arXiv Detail & Related papers (2022-08-05T02:46:12Z)
- UIF: An Objective Quality Assessment for Underwater Image Enhancement [17.145844358253164]
We propose an Underwater Image Fidelity (UIF) metric for objective evaluation of enhanced underwater images.
By exploiting the statistical features of these images, we extract naturalness-related, sharpness-related, and structure-related features.
Experimental results confirm that the proposed UIF outperforms a variety of underwater and general-purpose image quality metrics.
arXiv Detail & Related papers (2022-05-19T08:43:47Z)
- Domain Adaptive Adversarial Learning Based on Physics Model Feedback for Underwater Image Enhancement [10.143025577499039]
We propose a robust adversarial learning framework with physics-model-based feedback control and a domain adaptation mechanism for enhancing underwater images.
A new method is proposed for simulating an underwater-like training dataset from RGB-D data via an underwater image formation model.
Final enhanced results on synthetic and real underwater images demonstrate the superiority of the proposed method.
arXiv Detail & Related papers (2020-02-20T07:50:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.