Evaluating the Stability of Deep Image Quality Assessment With Respect to Image Scaling
- URL: http://arxiv.org/abs/2207.09856v1
- Date: Wed, 20 Jul 2022 12:44:13 GMT
- Title: Evaluating the Stability of Deep Image Quality Assessment With Respect to Image Scaling
- Authors: Koki Tsubota, Hiroaki Akutsu and Kiyoharu Aizawa
- Abstract summary: Image quality assessment (IQA) is a fundamental metric for image processing tasks.
In this paper, we show that the image scale is an influential factor that affects deep IQA performance.
- Score: 43.291753358414255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image quality assessment (IQA) is a fundamental metric for image processing
tasks (e.g., compression). Among full-reference IQAs, traditional metrics such as
PSNR and SSIM have been used. Recently, IQAs based on deep neural networks
(deep IQAs), such as LPIPS and DISTS, have also been used. It is known that
image scaling is inconsistent among deep IQAs, as some perform down-scaling as
pre-processing, whereas others instead use the original image size. In this
paper, we show that the image scale is an influential factor that affects deep
IQA performance. We comprehensively evaluate four deep IQAs on the same five
datasets, and the experimental results show that image scale significantly
influences IQA performance. We found that the most appropriate image scale is
often neither the default nor the original size, and the choice differs
depending on the methods and datasets used. We visualized the stability and
found that PieAPP is the most stable among the four deep IQAs.
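To make the scale dependence concrete, below is a minimal sketch (not the authors' evaluation code) of how a deep IQA metric can be probed at several image scales. It assumes the open-source `lpips` package and torchvision; the file names and the set of scales are illustrative only.

```python
# Minimal sketch: probe how a deep IQA score such as LPIPS changes with image scale.
# Assumes the open-source `lpips` package and torchvision; file names and scales are illustrative.
import torch
import torch.nn.functional as F
import lpips
from torchvision.io import read_image

loss_fn = lpips.LPIPS(net='alex')  # LPIPS expects inputs in [-1, 1]

def load_as_lpips_input(path):
    img = read_image(path).float() / 255.0   # (C, H, W) in [0, 1]
    return img.unsqueeze(0) * 2.0 - 1.0      # (1, C, H, W) in [-1, 1]

ref = load_as_lpips_input('reference.png')
dist = load_as_lpips_input('distorted.png')

for scale in (0.25, 0.5, 1.0, 2.0):
    ref_s = F.interpolate(ref, scale_factor=scale, mode='bilinear', align_corners=False)
    dist_s = F.interpolate(dist, scale_factor=scale, mode='bilinear', align_corners=False)
    with torch.no_grad():
        score = loss_fn(ref_s, dist_s).item()
    print(f'scale {scale}: LPIPS = {score:.4f}')
```

Running such a sweep on a distorted/reference pair makes the paper's point visible: the score (and hence any ranking built on it) can shift noticeably with the resolution at which the metric is applied.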
Related papers
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method, diffusion-prior-based IQA (DP-IQA).
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- Descriptive Image Quality Assessment in the Wild [25.503311093471076]
VLM-based Image Quality Assessment (IQA) seeks to describe image quality linguistically to align with human expression.
We introduce Depicted image Quality Assessment in the Wild (DepictQA-Wild).
Our method includes a multi-functional IQA task paradigm that encompasses both assessment and comparison tasks, brief and detailed responses, and full-reference and non-reference scenarios.
arXiv Detail & Related papers (2024-05-29T07:49:15Z)
- Dual-Branch Network for Portrait Image Quality Assessment [76.27716058987251]
We introduce a dual-branch network for portrait image quality assessment (PIQA).
We utilize two backbone networks (i.e., Swin Transformer-B) to extract the quality-aware features from the entire portrait image and the facial image cropped from it.
We leverage LIQE, an image scene classification and quality assessment model, to capture the quality-aware and scene-specific features as the auxiliary features.
arXiv Detail & Related papers (2024-05-14T12:43:43Z)
- Reference-Free Image Quality Metric for Degradation and Reconstruction Artifacts [2.5282283486446753]
We develop a reference-free quality evaluation network, dubbed the "Quality Factor (QF) Predictor".
Our QF Predictor is a lightweight, fully convolutional network comprising seven layers.
It receives a JPEG-compressed image patch with a random QF as input and is trained to accurately predict the corresponding QF.
arXiv Detail & Related papers (2024-05-01T22:28:18Z)
- Generalized Portrait Quality Assessment [26.8378202089832]
This paper presents FHIQA, a learning-based approach to portrait quality assessment (PQA).
The proposed approach is validated by extensive experiments on the PIQ23 benchmark.
The source code of FHIQA will be made publicly available on the PIQ23 GitHub repository.
arXiv Detail & Related papers (2024-02-14T13:47:18Z)
- Can No-reference features help in Full-reference image quality estimation? [20.491565297561912]
We study the utilization of no-reference features in the full-reference IQA task.
Our model achieves higher SRCC and KRCC scores than a number of state-of-the-art algorithms.
arXiv Detail & Related papers (2022-03-02T03:39:28Z)
- Image Quality Assessment using Contrastive Learning [50.265638572116984]
We train a deep Convolutional Neural Network (CNN) using a contrastive pairwise objective to solve an auxiliary problem (a generic sketch of such a contrastive objective appears after this list).
We show through extensive experiments that CONTRIQUE achieves competitive performance when compared to state-of-the-art NR image quality models.
Our results suggest that powerful quality representations with perceptual relevance can be obtained without requiring large labeled subjective image quality datasets.
arXiv Detail & Related papers (2021-10-25T21:01:00Z)
- Learning Conditional Knowledge Distillation for Degraded-Reference Image Quality Assessment [157.1292674649519]
We propose a practical solution named degraded-reference IQA (DR-IQA).
DR-IQA exploits the inputs of image restoration (IR) models, i.e., degraded images, as references.
Our results can even be close to the performance of full-reference settings.
arXiv Detail & Related papers (2021-08-18T02:35:08Z)
- MUSIQ: Multi-scale Image Quality Transformer [22.908901641767688]
Current state-of-the-art IQA methods are based on convolutional neural networks (CNNs).
We design a multi-scale image quality Transformer (MUSIQ) to process native resolution images with varying sizes and aspect ratios.
With a multi-scale image representation, our proposed method can capture image quality at different granularities.
arXiv Detail & Related papers (2021-08-12T23:36:22Z)
- MetaIQA: Deep Meta-learning for No-Reference Image Quality Assessment [73.55944459902041]
This paper presents a no-reference IQA metric based on deep meta-learning.
We first collect a number of NR-IQA tasks for different distortions.
Then meta-learning is adopted to learn the prior knowledge shared by diversified distortions.
Extensive experiments demonstrate that the proposed metric outperforms the state of the art by a large margin.
arXiv Detail & Related papers (2020-04-11T23:36:36Z)
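As referenced in the Image Quality Assessment using Contrastive Learning entry above, a contrastive pairwise objective of the kind used to learn quality-aware representations without subjective labels can be sketched with a generic InfoNCE-style loss. This is a simplified illustration under assumed embedding shapes and hyperparameters, not that paper's actual training code.

```python
# Simplified sketch of a pairwise contrastive (InfoNCE-style) objective for
# representation learning. Encoder outputs, batch size, and temperature are
# illustrative assumptions, not values from the cited paper.
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """z1, z2: (N, D) embeddings of two views of the same N images."""
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # (N, N) similarity matrix
    targets = torch.arange(z1.size(0))   # positives lie on the diagonal
    # symmetric cross-entropy: each view predicts its paired view
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage: random embeddings stand in for encoder outputs of two augmented views.
z_a = torch.randn(8, 128)
z_b = z_a + 0.05 * torch.randn(8, 128)
print(info_nce_loss(z_a, z_b).item())
```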
This list is automatically generated from the titles and abstracts of the papers on this site.