Image Quality Assessment: Enhancing Perceptual Exploration and Interpretation with Collaborative Feature Refinement and Hausdorff distance
- URL: http://arxiv.org/abs/2412.15847v1
- Date: Fri, 20 Dec 2024 12:39:49 GMT
- Title: Image Quality Assessment: Enhancing Perceptual Exploration and Interpretation with Collaborative Feature Refinement and Hausdorff distance
- Authors: Xuekai Wei, Junyu Zhang, Qinlin Hu, Mingliang Zhou, Yong Feng, Weizhi Xian, Huayan Pu, Sam Kwong
- Abstract summary: Current full-reference image quality assessment (FR-IQA) methods often fuse features from reference and distorted images.
This work introduces a pioneering training-free FR-IQA method that accurately predicts image quality in alignment with the human visual system.
- Score: 47.01352278293561
- License:
- Abstract: Current full-reference image quality assessment (FR-IQA) methods often fuse features from reference and distorted images, overlooking that color and luminance distortions occur mainly at low frequencies, whereas edge and texture distortions occur at high frequencies. This work introduces a pioneering training-free FR-IQA method that accurately predicts image quality in alignment with the human visual system (HVS) by leveraging a novel perceptual degradation modelling approach to address this limitation. First, a collaborative feature refinement module employs a carefully designed wavelet transform to extract perceptually relevant features, capturing multiscale perceptual information and mimicking how the HVS analyses visual information at various scales and orientations in the spatial and frequency domains. Second, a Hausdorff distance-based distribution similarity measurement module robustly assesses the discrepancy between the feature distributions of the reference and distorted images, effectively handling outliers and variations while mimicking the ability of the HVS to perceive and tolerate certain levels of distortion. The proposed method accurately captures perceptual quality differences without requiring training data or subjective quality scores. Extensive experiments on multiple benchmark datasets demonstrate superior performance compared with existing state-of-the-art approaches, highlighting its ability to correlate strongly with the HVS. (The code is available at https://anonymous.4open.science/r/CVPR2025-F339.)
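The two-stage pipeline the abstract describes (wavelet-domain feature extraction, then a Hausdorff-distance comparison of reference and distorted feature distributions) can be sketched in a minimal form. Everything below is a simplifying assumption for illustration, not the paper's actual design: a one-level Haar transform stands in for the "carefully designed" wavelet, coefficients are flattened into 1-D point sets, and subband distances are summed without weighting.

```python
import numpy as np

def haar_decompose(img):
    """One-level 2D Haar wavelet transform: returns (LL, LH, HL, HH) subbands."""
    img = img[: img.shape[0] // 2 * 2, : img.shape[1] // 2 * 2].astype(float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low frequency: colour/luminance content
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail: edges/texture
    return ll, lh, hl, hh

def hausdorff(p, q):
    """Symmetric Hausdorff distance between point sets of shape (n, d) and (m, d)."""
    dists = np.linalg.norm(p[:, None, :] - q[None, :, :], axis=-1)
    return max(dists.min(axis=1).max(), dists.min(axis=0).max())

def fr_iqa_score(ref, dist, levels=2):
    """Toy training-free FR-IQA score: sum of Hausdorff distances between
    per-subband coefficient sets across scales; lower means more similar."""
    score = 0.0
    r, d = ref, dist
    for _ in range(levels):
        r_bands = haar_decompose(r)
        d_bands = haar_decompose(d)
        # compare the three high-frequency detail subbands at this scale
        for rb, db in zip(r_bands[1:], d_bands[1:]):
            score += hausdorff(rb.reshape(-1, 1), db.reshape(-1, 1))
        # recurse into the low-frequency approximation
        r, d = r_bands[0], d_bands[0]
    # finally compare the coarsest low-frequency bands (colour/luminance)
    score += hausdorff(r.reshape(-1, 1), d.reshape(-1, 1))
    return score
```

Note how the structure mirrors the abstract's motivation: a uniform luminance shift leaves every detail subband unchanged and registers only in the final low-frequency comparison, whereas edge damage shows up in the high-frequency subbands at each scale.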
Related papers
- From Images to Point Clouds: An Efficient Solution for Cross-media Blind Quality Assessment without Annotated Training [35.45364402708792]
We present a novel quality assessment method which can predict the perceptual quality of point clouds from new scenes without available annotations.
Recognizing the human visual system (HVS) as the decision-maker in quality assessment regardless of media types, we can emulate the evaluation criteria for human perception via neural networks.
We propose the distortion-guided biased feature alignment which integrates existing/estimated distortion distribution into the adversarial DA framework.
arXiv Detail & Related papers (2025-01-23T05:15:10Z)
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, enabled by generative models, pose serious risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z)
- Perception-Guided Quality Metric of 3D Point Clouds Using Hybrid Strategy [38.942691194229724]
Full-reference point cloud quality assessment (FR-PCQA) aims to infer the quality of distorted point clouds with available references.
Most of the existing FR-PCQA metrics ignore the fact that the human visual system (HVS) dynamically tackles visual information according to different distortion levels.
We propose a perception-guided hybrid metric (PHM) that adaptively leverages two visual strategies with respect to distortion degree to predict point cloud quality.
arXiv Detail & Related papers (2024-07-04T12:23:39Z)
- Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
arXiv Detail & Related papers (2024-04-23T11:45:32Z)
- Adaptive Feature Selection for No-Reference Image Quality Assessment by Mitigating Semantic Noise Sensitivity [55.399230250413986]
We propose a Quality-Aware Feature Matching IQA Metric (QFM-IQM) to remove harmful semantic noise features from the upstream task.
Our approach achieves superior performance to the state-of-the-art NR-IQA methods on eight standard IQA datasets.
arXiv Detail & Related papers (2023-12-11T06:50:27Z)
- FS-BAND: A Frequency-Sensitive Banding Detector [55.59101150019851]
Banding artifact, also known as staircase-like contour, is a common quality annoyance that occurs in compression, transmission, etc.
We propose a no-reference banding detection model to capture and evaluate banding artifacts, called the Frequency-Sensitive BANding Detector (FS-BAND).
Experimental results show that the proposed FS-BAND method outperforms state-of-the-art image quality assessment (IQA) approaches with higher accuracy in the banding classification task.
arXiv Detail & Related papers (2023-11-30T03:20:42Z)
- ARNIQA: Learning Distortion Manifold for Image Quality Assessment [28.773037051085318]
No-Reference Image Quality Assessment (NR-IQA) aims to develop methods to measure image quality in alignment with human perception without the need for a high-quality reference image.
We propose a self-supervised approach named ARNIQA for modeling the image distortion manifold to obtain quality representations in an intrinsic manner.
arXiv Detail & Related papers (2023-10-20T17:22:25Z)
- High Dynamic Range Image Quality Assessment Based on Frequency Disparity [78.36555631446448]
An image quality assessment (IQA) algorithm based on frequency disparity for high dynamic range (HDR) images is proposed.
The proposed LGFM can provide a higher consistency with the subjective perception compared with the state-of-the-art HDR IQA methods.
arXiv Detail & Related papers (2022-09-06T08:22:13Z)
- Quality Map Fusion for Adversarial Learning [23.465747123791772]
We improve image quality adversarially by introducing a novel quality map fusion technique.
We extend the widely adopted l2 Wasserstein distance metric to other preferable quality norms.
We also show that incorporating a perceptual attention mechanism (PAM) that extracts global feature embeddings from the network bottleneck translates to better image quality.
arXiv Detail & Related papers (2021-10-24T03:01:46Z)
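The last entry above extends the l2 Wasserstein distance to other quality norms. As a point of reference only (this is standard textbook material, not that paper's fusion technique), the 1-D Wasserstein-1 distance between two equal-size samples reduces to the mean absolute difference of their sorted values:

```python
import numpy as np

def wasserstein_1d(u, v):
    """W1 (earth mover's) distance between two equal-size 1-D samples:
    the mean absolute difference of the sorted values."""
    u = np.sort(np.asarray(u, dtype=float))
    v = np.sort(np.asarray(v, dtype=float))
    return float(np.mean(np.abs(u - v)))

# Each of the three unit masses must move one unit to the right:
print(wasserstein_1d([0, 1, 2], [1, 2, 3]))  # → 1.0
```

The sort-and-subtract form works because in one dimension the optimal transport plan never crosses: the i-th smallest point of one sample is matched to the i-th smallest of the other.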
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.