Image Quality Assessment: Enhancing Perceptual Exploration and Interpretation with Collaborative Feature Refinement and Hausdorff distance
- URL: http://arxiv.org/abs/2412.15847v1
- Date: Fri, 20 Dec 2024 12:39:49 GMT
- Title: Image Quality Assessment: Enhancing Perceptual Exploration and Interpretation with Collaborative Feature Refinement and Hausdorff distance
- Authors: Xuekai Wei, Junyu Zhang, Qinlin Hu, Mingliang Zhou, Yong Feng, Weizhi Xian, Huayan Pu, Sam Kwong
- Abstract summary: Current full-reference image quality assessment (FR-IQA) methods often fuse features from reference and distorted images. This work introduces a pioneering training-free FR-IQA method that accurately predicts image quality in alignment with the human visual system.
- Score: 47.01352278293561
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current full-reference image quality assessment (FR-IQA) methods often fuse features from reference and distorted images, overlooking that color and luminance distortions occur mainly at low frequencies, whereas edge and texture distortions occur at high frequencies. This work introduces a pioneering training-free FR-IQA method that accurately predicts image quality in alignment with the human visual system (HVS) by leveraging a novel perceptual degradation modelling approach to address this limitation. First, a collaborative feature refinement module employs a carefully designed wavelet transform to extract perceptually relevant features, capturing multiscale perceptual information and mimicking how the HVS analyses visual information at various scales and orientations in the spatial and frequency domains. Second, a Hausdorff distance-based distribution similarity measurement module robustly assesses the discrepancy between the feature distributions of the reference and distorted images, effectively handling outliers and variations while mimicking the ability of the HVS to perceive and tolerate certain levels of distortion. The proposed method accurately captures perceptual quality differences without requiring training data or subjective quality scores. Extensive experiments on multiple benchmark datasets demonstrate superior performance compared with existing state-of-the-art approaches, highlighting its ability to correlate strongly with the HVS. The code is available at https://anonymous.4open.science/r/CVPR2025-F339.
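The two ideas in the abstract can be illustrated with a minimal, hypothetical sketch (not the authors' implementation): a one-level Haar transform separates low-frequency content (where colour/luminance distortions concentrate) from high-frequency content (where edge/texture distortions concentrate), and a symmetric Hausdorff distance then compares the per-band coefficient sets of a reference and a distorted signal. The 1-D signals and the single-level split are illustrative simplifications of the paper's multiscale wavelet pipeline.

```python
def haar_split(x):
    """One-level Haar transform: (low-pass averages, high-pass details)."""
    lo = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    hi = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return lo, hi

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two 1-D point sets."""
    d_ab = max(min(abs(p - q) for q in b) for p in a)  # farthest a-point from b
    d_ba = max(min(abs(p - q) for q in a) for p in b)  # farthest b-point from a
    return max(d_ab, d_ba)

# Toy reference and distorted signals (even length for the Haar pairing).
ref = [0.1, 0.4, 0.35, 0.8, 0.6, 0.2, 0.05, 0.9]
dst = [0.1, 0.5, 0.35, 0.7, 0.65, 0.2, 0.0, 0.95]

ref_lo, ref_hi = haar_split(ref)
dst_lo, dst_hi = haar_split(dst)

# Low band ~ colour/luminance discrepancy, high band ~ edge/texture discrepancy.
print(hausdorff(ref_lo, dst_lo))
print(hausdorff(ref_hi, dst_hi))
```

Because the Hausdorff distance depends on the worst-matched point rather than a pointwise sum, moderate coefficient shifts are tolerated, loosely mirroring the HVS-tolerance property the abstract describes.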
Related papers
- SING: Semantic Image Communications using Null-Space and INN-Guided Diffusion Models [52.40011613324083]
Deep joint source-channel coding (DeepJSCC) systems have recently demonstrated remarkable performance in wireless image transmission.
Existing methods focus on minimizing distortion between the transmitted image and the reconstructed version at the receiver, often overlooking perceptual quality.
We propose SING, a novel framework that formulates the recovery of high-quality images from corrupted reconstructions as an inverse problem.
arXiv Detail & Related papers (2025-03-16T12:32:11Z)
- From Images to Point Clouds: An Efficient Solution for Cross-media Blind Quality Assessment without Annotated Training [35.45364402708792]
We present a novel quality assessment method which can predict the perceptual quality of point clouds from new scenes without available annotations.
Recognizing the human visual system (HVS) as the decision-maker in quality assessment regardless of media types, we can emulate the evaluation criteria for human perception via neural networks.
We propose the distortion-guided biased feature alignment which integrates existing/estimated distortion distribution into the adversarial DA framework.
arXiv Detail & Related papers (2025-01-23T05:15:10Z)
- PIGUIQA: A Physical Imaging Guided Perceptual Framework for Underwater Image Quality Assessment [59.9103803198087]
We propose a Physical Imaging Guided perceptual framework for Underwater Image Quality Assessment (UIQA).
By leveraging underwater radiative transfer theory, we integrate physics-based imaging estimations to establish quantitative metrics for these distortions.
The proposed model accurately predicts image quality scores and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-12-20T03:31:45Z)
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- Perception-Guided Quality Metric of 3D Point Clouds Using Hybrid Strategy [38.942691194229724]
Full-reference point cloud quality assessment (FR-PCQA) aims to infer the quality of distorted point clouds with available references.
Most of the existing FR-PCQA metrics ignore the fact that the human visual system (HVS) dynamically tackles visual information according to different distortion levels.
We propose a perception-guided hybrid metric (PHM) that adaptively leverages two visual strategies with respect to distortion degree to predict point cloud quality.
arXiv Detail & Related papers (2024-07-04T12:23:39Z)
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method, diffusion priors-based IQA.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- FS-BAND: A Frequency-Sensitive Banding Detector [55.59101150019851]
Banding artifacts, also known as staircase-like contours, are a common quality annoyance arising in compression, transmission, and other processing.
We propose a no-reference banding detection model, called the Frequency-Sensitive BANding Detector (FS-BAND), to capture and evaluate banding artifacts.
Experimental results show that the proposed FS-BAND method outperforms state-of-the-art image quality assessment (IQA) approaches with higher accuracy in the banding classification task.
arXiv Detail & Related papers (2023-11-30T03:20:42Z) - ARNIQA: Learning Distortion Manifold for Image Quality Assessment [28.773037051085318]
No-Reference Image Quality Assessment (NR-IQA) aims to develop methods to measure image quality in alignment with human perception without the need for a high-quality reference image.
We propose a self-supervised approach named ARNIQA for modeling the image distortion manifold to obtain quality representations in an intrinsic manner.
arXiv Detail & Related papers (2023-10-20T17:22:25Z) - You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment [45.62136459502005]
We propose a single network to perform both full-reference (FR) and no-reference (NR) IQA.
We first employ an encoder to extract multi-level features from input images.
A Hierarchical Attention (HA) module is proposed as a universal adapter for both FR and NR inputs.
A Semantic Distortion Aware (SDA) module is proposed to examine feature correlations between shallow and deep layers of the encoder.
arXiv Detail & Related papers (2023-10-14T11:03:04Z) - Gap-closing Matters: Perceptual Quality Evaluation and Optimization of Low-Light Image Enhancement [55.8106019031768]
There is a growing consensus in the research community that the optimization of low-light image enhancement approaches should be guided by the visual quality perceived by end users.
We propose a gap-closing framework for assessing subjective and objective quality systematically.
We validate the effectiveness of our proposed framework through both the accuracy of quality prediction and the perceptual quality of image enhancement.
arXiv Detail & Related papers (2023-02-22T15:57:03Z) - DeepDC: Deep Distance Correlation as a Perceptual Image Quality
Evaluator [53.57431705309919]
ImageNet pre-trained deep neural networks (DNNs) show notable transferability for building effective image quality assessment (IQA) models.
We develop a novel full-reference IQA (FR-IQA) model based exclusively on pre-trained DNN features.
We conduct comprehensive experiments to demonstrate the superiority of the proposed quality model on five standard IQA datasets.
arXiv Detail & Related papers (2022-11-09T14:57:27Z) - High Dynamic Range Image Quality Assessment Based on Frequency Disparity [78.36555631446448]
An image quality assessment (IQA) algorithm based on frequency disparity for high dynamic range (HDR) images is proposed.
The proposed LGFM can provide a higher consistency with the subjective perception compared with the state-of-the-art HDR IQA methods.
arXiv Detail & Related papers (2022-09-06T08:22:13Z) - Quality Map Fusion for Adversarial Learning [23.465747123791772]
We improve image quality adversarially by introducing a novel quality map fusion technique.
We extend the widely adopted l2 Wasserstein distance metric to other preferable quality norms.
We also show that incorporating a perceptual attention mechanism (PAM) that extracts global feature embeddings from the network bottleneck translate to a better image quality.
arXiv Detail & Related papers (2021-10-24T03:01:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.