Norm-in-Norm Loss with Faster Convergence and Better Performance for
Image Quality Assessment
- URL: http://arxiv.org/abs/2008.03889v1
- Date: Mon, 10 Aug 2020 04:01:21 GMT
- Title: Norm-in-Norm Loss with Faster Convergence and Better Performance for
Image Quality Assessment
- Authors: Dingquan Li, Tingting Jiang and Ming Jiang
- Abstract summary: We explore normalization in the design of loss functions for image quality assessment (IQA) models.
The resulting "Norm-in-Norm" loss encourages the IQA model to make linear predictions with respect to subjective quality scores.
Experiments on two relevant datasets show that, compared to MAE or MSE loss, the new loss enables the IQA model to converge about 10 times faster.
- Score: 20.288424566444224
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Currently, most image quality assessment (IQA) models are supervised by the
MAE or MSE loss with empirically slow convergence. It is well-known that
normalization can facilitate fast convergence. Therefore, we explore
normalization in the design of loss functions for IQA. Specifically, we first
normalize the predicted quality scores and the corresponding subjective quality
scores. Then, the loss is defined based on the norm of the differences between
these normalized values. The resulting "Norm-in-Norm" loss encourages the IQA
model to make linear predictions with respect to subjective quality scores.
After training, least-squares regression is applied to determine the linear
mapping from the predicted quality to the subjective quality. It is shown that
the new loss is closely connected with two common IQA performance criteria
(PLCC and RMSE). Through theoretical analysis, it is proved that the embedded
normalization makes the gradients of the loss function more stable and more
predictable, which is conducive to the faster convergence of the IQA model.
Furthermore, to experimentally verify the effectiveness of the proposed loss,
it is applied to solve a challenging problem: quality assessment of in-the-wild
images. Experiments on two relevant datasets (KonIQ-10k and CLIVE) show that,
compared to MAE or MSE loss, the new loss enables the IQA model to converge
about 10 times faster and the final model achieves better performance. The
proposed model also achieves state-of-the-art prediction performance on this
challenging problem. For reproducible scientific research, our code is publicly
available at https://github.com/lidq92/LinearityIQA.
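For intuition, a minimal sketch of the idea (assuming p = q = 2; this is not the authors' exact implementation, which is available in the linked repository, and the function names and the eps stabilizer here are illustrative assumptions):

    import torch
    import numpy as np

    def norm_in_norm_loss(pred, mos, p=2, q=2, eps=1e-8):
        # "Norm in": center each score vector and scale it by its p-norm,
        # for both predicted scores and subjective scores (MOS).
        pred = pred - pred.mean()
        pred = pred / (pred.norm(p) + eps)
        mos = mos - mos.mean()
        mos = mos / (mos.norm(p) + eps)
        # Loss: the q-norm of the differences between the normalized
        # values, raised to the q-th power.
        return torch.norm(pred - mos, p=q) ** q

    def fit_linear_mapping(pred, mos):
        # Post-training step: least-squares fit of the linear mapping
        # from predicted to subjective quality (pred, mos: numpy arrays).
        a, b = np.polyfit(pred, mos, deg=1)
        return a, b  # mapped score = a * pred + b

With p = q = 2, both normalized vectors are zero-mean with unit norm, so the squared distance between them equals 2 * (1 - PLCC); minimizing this loss therefore directly optimizes a PLCC-like criterion, which is the stated connection to the PLCC performance measure.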
Related papers
- Boosting CLIP Adaptation for Image Quality Assessment via Meta-Prompt Learning and Gradient Regularization [55.09893295671917]
This paper introduces a novel Gradient-Regulated Meta-Prompt IQA Framework (GRMP-IQA).
GRMP-IQA comprises two key modules: a Meta-Prompt Pre-training Module and a Quality-Aware Gradient Regularization module.
Experiments on five standard BIQA datasets demonstrate superior performance over state-of-the-art BIQA methods under the limited-data setting.
arXiv Detail & Related papers (2024-09-09T07:26:21Z)
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose DP-IQA, a novel IQA method based on diffusion priors.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- DifFIQA: Face Image Quality Assessment Using Denoising Diffusion Probabilistic Models [1.217503190366097]
Face image quality assessment (FIQA) techniques aim to mitigate the performance degradations that low-quality face images cause in recognition systems.
We present a powerful new FIQA approach, named DifFIQA, which relies on denoising diffusion probabilistic models (DDPMs).
Because the diffusion-based perturbations are computationally expensive, we also distill the knowledge encoded in DifFIQA into a regression-based quality predictor, called DifFIQA(R).
arXiv Detail & Related papers (2023-05-09T21:03:13Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes a CNN backbone and a transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- Degraded Reference Image Quality Assessment [23.871178105179883]
We make one of the first attempts to establish a new paradigm named degraded-reference IQA (DR IQA).
Specifically, we lay out the architectures of DR IQA and introduce a 6-bit code to denote the choices of configurations.
We construct the first large-scale databases dedicated to DR IQA and will make them publicly available.
arXiv Detail & Related papers (2021-10-28T05:50:59Z)
- Task-Specific Normalization for Continual Learning of Blind Image Quality Models [105.03239956378465]
We present a simple yet effective continual learning method for blind image quality assessment (BIQA).
The key step in our approach is to freeze all convolution filters of a pre-trained deep neural network (DNN) for an explicit promise of stability.
We assign each new IQA dataset (i.e., task) a prediction head, and load the corresponding normalization parameters to produce a quality score.
The final quality estimate is computed by a weighted summation of predictions from all heads with a lightweight $K$-means gating mechanism.
arXiv Detail & Related papers (2021-07-28T15:21:01Z)
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
- MetaIQA: Deep Meta-learning for No-Reference Image Quality Assessment [73.55944459902041]
This paper presents a no-reference IQA metric based on deep meta-learning.
We first collect a number of NR-IQA tasks for different distortions.
Then meta-learning is adopted to learn the prior knowledge shared by diversified distortions.
Extensive experiments demonstrate that the proposed metric outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-04-11T23:36:36Z)