Enhancing Underwater Images Using Deep Learning with Subjective Image Quality Integration
- URL: http://arxiv.org/abs/2507.05393v1
- Date: Mon, 07 Jul 2025 18:25:13 GMT
- Title: Enhancing Underwater Images Using Deep Learning with Subjective Image Quality Integration
- Authors: Jose M. Montero, Jose-Luis Lisani
- Abstract summary: This paper presents a deep learning-based approach to improving underwater image quality. We use publicly available datasets containing underwater images labeled by experts as either high or low quality. Results demonstrate that the proposed model achieves substantial improvements in both perceived and measured image quality.
- Score: 0.8287206589886879
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent advances in deep learning, particularly neural networks, have significantly impacted a wide range of fields, including the automatic enhancement of underwater images. This paper presents a deep learning-based approach to improving underwater image quality by integrating human subjective assessments into the training process. To this end, we utilize publicly available datasets containing underwater images labeled by experts as either high or low quality. Our method involves first training a classifier network to distinguish between high- and low-quality images. Subsequently, generative adversarial networks (GANs) are trained using various enhancement criteria to refine the low-quality images. The performance of the GAN models is evaluated using quantitative metrics such as PSNR, SSIM, and UIQM, as well as through qualitative analysis. Results demonstrate that the proposed model -- particularly when incorporating criteria such as color fidelity and image sharpness -- achieves substantial improvements in both perceived and measured image quality.
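As a small illustration of the quantitative evaluation mentioned in the abstract, the sketch below computes the full-reference metrics PSNR and SSIM with scikit-image; the file names are placeholders, and UIQM (a reference-free underwater metric) is not available in scikit-image and would need a separate implementation. This is a minimal sketch, not the authors' evaluation code.

```python
# Minimal sketch (not the authors' code): computing the full-reference
# metrics PSNR and SSIM for an enhanced underwater image with scikit-image.
# "reference.png" and "enhanced.png" are placeholder file names.
from skimage.io import imread
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

reference = imread("reference.png")  # high-quality / ground-truth image
enhanced = imread("enhanced.png")    # output of the enhancement network

psnr = peak_signal_noise_ratio(reference, enhanced, data_range=255)
ssim = structural_similarity(reference, enhanced, channel_axis=-1, data_range=255)
print(f"PSNR = {psnr:.2f} dB, SSIM = {ssim:.4f}")

# UIQM is reference-free (it combines colorfulness, sharpness and contrast
# terms) and is not provided by scikit-image.
```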
Related papers
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [73.6767681305851]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty of collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method, diffusion priors-based IQA.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
- Diffusion Model Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment [78.21609845377644]
We propose a novel class of state-of-the-art (SOTA) generative models capable of modeling intricate relationships.
We devise a new diffusion restoration network that leverages the produced enhanced image and noise-containing images.
Two visual evaluation branches are designed to comprehensively analyze the obtained high-level feature information.
arXiv Detail & Related papers (2024-02-22T09:39:46Z)
- PUGAN: Physical Model-Guided Underwater Image Enhancement Using GAN with Dual-Discriminators [120.06891448820447]
Obtaining clear and visually pleasing images has become a widespread concern.
The task of underwater image enhancement (UIE) has emerged in response to this need.
In this paper, we propose a physical model-guided GAN model for UIE, referred to as PUGAN.
Our PUGAN outperforms state-of-the-art methods in both qualitative and quantitative metrics.
arXiv Detail & Related papers (2023-06-15T07:41:12Z)
- Re-IQA: Unsupervised Learning for Image Quality Assessment in the Wild [38.197794061203055]
We propose a Mixture of Experts approach to train two separate encoders to learn high-level content and low-level image quality features in an unsupervised setting.
We deploy the complementary low and high-level image representations obtained from the Re-IQA framework to train a linear regression model.
Our method achieves state-of-the-art performance on multiple large-scale image quality assessment databases.
arXiv Detail & Related papers (2023-04-02T05:06:51Z)
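A rough sketch of the Re-IQA recipe summarized above: features from two frozen encoders (content and quality) are concatenated and mapped to subjective scores by a linear regressor. The feature arrays and mean opinion scores below are placeholders, not the authors' data or implementation.

```python
# Minimal sketch (not the authors' code): map concatenated content and
# quality features to subjective scores with a linear regressor.
# content_feats, quality_feats and mos stand in for features produced by
# two frozen encoders and for mean opinion scores.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
content_feats = rng.normal(size=(500, 128))   # placeholder content features
quality_feats = rng.normal(size=(500, 128))   # placeholder quality features
mos = rng.uniform(1.0, 5.0, size=500)         # placeholder mean opinion scores

X = np.concatenate([content_feats, quality_feats], axis=1)
X_train, X_test, y_train, y_test = train_test_split(X, mos, random_state=0)

regressor = LinearRegression().fit(X_train, y_train)
print("R^2 on held-out images:", regressor.score(X_test, y_test))
```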
- Adaptive deep learning framework for robust unsupervised underwater image enhancement [3.0516727053033392]
One of the main challenges in deep learning-based underwater image enhancement is the limited availability of high-quality training data.
We propose a novel unsupervised underwater image enhancement framework that employs a conditional variational autoencoder (cVAE) to train a deep learning model.
We show that our proposed framework yields competitive performance compared to other state-of-the-art approaches in both quantitative and qualitative metrics.
arXiv Detail & Related papers (2022-12-18T01:07:20Z)
- Semantic-aware Texture-Structure Feature Collaboration for Underwater Image Enhancement [58.075720488942125]
Underwater image enhancement has become an attractive topic as a significant technology in marine engineering and aquatic robotics.
We develop an efficient and compact enhancement network in collaboration with a high-level semantic-aware pretrained model.
We also apply the proposed algorithm to the underwater salient object detection task to reveal the favorable semantic-aware ability for high-level vision tasks.
arXiv Detail & Related papers (2022-11-19T07:50:34Z)
- UIF: An Objective Quality Assessment for Underwater Image Enhancement [17.145844358253164]
We propose an Underwater Image Fidelity (UIF) metric for objective evaluation of enhanced underwater images.
By exploiting the statistical properties of these images, we propose to extract naturalness-related, sharpness-related, and structure-related features.
Experimental results confirm that the proposed UIF outperforms a variety of underwater and general-purpose image quality metrics.
arXiv Detail & Related papers (2022-05-19T08:43:47Z)
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
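The distillation idea in the entry above can be sketched generically: a full-reference teacher, which sees the distorted image together with its reference, provides pseudo-labels that a blind student, which sees only the distorted image, learns to regress. The tiny networks and loss below are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (illustrative, not the paper's architecture): distilling a
# full-reference IQA teacher into a blind student on unlabeled image pairs.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):
    """Toy convolutional backbone mapping an image to a quality score."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
        )
    def forward(self, x):
        return self.net(x).squeeze(-1)

teacher = TinyEncoder(in_channels=6)  # full-reference: distorted + reference, concatenated
student = TinyEncoder(in_channels=3)  # blind: sees only the distorted image
teacher.eval()

optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)
distorted = torch.rand(8, 3, 64, 64)  # placeholder unlabeled batch
reference = torch.rand(8, 3, 64, 64)

with torch.no_grad():  # teacher provides pseudo-labels
    pseudo_quality = teacher(torch.cat([distorted, reference], dim=1))

optimizer.zero_grad()
loss = nn.functional.mse_loss(student(distorted), pseudo_quality)
loss.backward()
optimizer.step()
print("distillation loss:", loss.item())
```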
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve the auxiliary problem.
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Deep Multi-Scale Features Learning for Distorted Image Quality Assessment [20.7146855562825]
Existing deep neural networks (DNNs) have shown significant effectiveness for tackling the IQA problem.
We propose to use pyramid feature learning to build a DNN with hierarchical multi-scale features for distorted image quality prediction.
Our proposed network is optimized end-to-end in a deeply supervised manner.
arXiv Detail & Related papers (2020-12-01T23:39:01Z)
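A generic sketch of the hierarchical multi-scale idea in the last entry: feature maps pooled at several depths of a backbone are fused to predict a quality score. The layer sizes below are illustrative assumptions rather than the paper's design.

```python
# Minimal sketch (illustrative): predicting a quality score from
# hierarchical multi-scale features pooled at several network depths.
import torch
import torch.nn as nn

class MultiScaleIQA(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head = nn.Linear(16 + 32 + 64, 1)  # fuse features from all scales

    def forward(self, x):
        f1 = self.stage1(x)   # fine-scale features
        f2 = self.stage2(f1)  # mid-scale features
        f3 = self.stage3(f2)  # coarse-scale features
        pooled = [self.pool(f).flatten(1) for f in (f1, f2, f3)]
        return self.head(torch.cat(pooled, dim=1)).squeeze(-1)

model = MultiScaleIQA()
scores = model(torch.rand(4, 3, 128, 128))  # placeholder distorted images
print(scores.shape)                         # torch.Size([4])
```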
This list is automatically generated from the titles and abstracts of the papers on this site.