A Lightweight Ensemble-Based Face Image Quality Assessment Method with Correlation-Aware Loss
- URL: http://arxiv.org/abs/2509.10114v1
- Date: Fri, 12 Sep 2025 10:13:38 GMT
- Authors: MohammadAli Hamidi, Hadi Amirpour, Luigi Atzori, Christian Timmerer
- Abstract summary: Face image quality assessment (FIQA) plays a critical role in face recognition and verification systems. We propose a lightweight and efficient method for FIQA, designed for the perceptual evaluation of face images in the wild.
- Score: 14.915614314380578
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face image quality assessment (FIQA) plays a critical role in face recognition and verification systems, especially in uncontrolled, real-world environments. Although several methods have been proposed, general-purpose no-reference image quality assessment techniques often fail to capture face-specific degradations. Meanwhile, state-of-the-art FIQA models tend to be computationally intensive, limiting their practical applicability. We propose a lightweight and efficient method for FIQA, designed for the perceptual evaluation of face images in the wild. Our approach integrates an ensemble of two compact convolutional neural networks, MobileNetV3-Small and ShuffleNetV2, with prediction-level fusion via simple averaging. To enhance alignment with human perceptual judgments, we employ a correlation-aware loss (MSECorrLoss), combining mean squared error (MSE) with a Pearson correlation regularizer. Our method achieves a strong balance between accuracy and computational cost, making it suitable for real-world deployment. Experiments on the VQualA FIQA benchmark demonstrate that our model achieves a Spearman rank correlation coefficient (SRCC) of 0.9829 and a Pearson linear correlation coefficient (PLCC) of 0.9894, remaining within competition efficiency constraints.
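The abstract describes two concrete mechanisms: a correlation-aware loss (MSECorrLoss) that combines MSE with a Pearson correlation regularizer, and prediction-level fusion of the two backbones via simple averaging. A minimal numpy sketch of both follows; the regularizer weight `lam` and the exact way the Pearson term enters the loss are assumptions, since the abstract does not specify them:

```python
import numpy as np

def mse_corr_loss(pred, target, lam=1.0):
    """Sketch of a correlation-aware loss in the spirit of MSECorrLoss.

    Combines MSE with a Pearson-correlation regularizer. The weight `lam`
    and the (1 - r) penalty form are assumptions, not the paper's exact loss.
    """
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    mse = np.mean((pred - target) ** 2)
    # Pearson correlation between predicted and ground-truth quality scores
    p = pred - pred.mean()
    t = target - target.mean()
    r = (p * t).sum() / (np.sqrt((p ** 2).sum() * (t ** 2).sum()) + 1e-8)
    # Penalize low correlation: a perfectly correlated batch adds ~nothing
    return mse + lam * (1.0 - r)

def ensemble_score(score_a, score_b):
    """Prediction-level fusion by simple averaging, as stated in the abstract."""
    return 0.5 * (np.asarray(score_a, dtype=float) + np.asarray(score_b, dtype=float))
```

Under this sketch, a batch whose predictions match the targets exactly incurs (near-)zero loss, while anti-correlated predictions are penalized beyond their MSE alone.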
Related papers
- Continual Action Quality Assessment via Adaptive Manifold-Aligned Graph Regularization [53.82400605816587]
Action Quality Assessment (AQA) quantifies human actions in videos, supporting applications in sports scoring, rehabilitation, and skill evaluation. A major challenge lies in the non-stationary nature of quality distributions in real-world scenarios. We introduce Continual AQA (CAQA), which equips AQA models with continual learning capabilities to handle evolving distributions.
arXiv Detail & Related papers (2025-10-08T10:09:47Z) - No-Reference Image Contrast Assessment with Customized EfficientNet-B0 [3.4527546378946]
No-reference image quality assessment (NR-IQA) models struggle to accurately evaluate contrast distortions under diverse real-world conditions. In this study, we propose a deep-learning-based framework for blind contrast quality assessment. Models are modified with a contrast-aware regression head and trained end to end using targeted data augmentations.
arXiv Detail & Related papers (2025-09-26T06:54:37Z) - Hybrid Image Resolution Quality Metric (HIRQM):A Comprehensive Perceptual Image Quality Assessment Framework [0.0]
We propose the Hybrid Image Resolution Quality Metric (HIRQM) to integrate statistical, multi-scale, and deep learning methods for a comprehensive quality evaluation. A dynamic weighting mechanism adapts component contributions based on image characteristics like brightness and variance, enhancing flexibility across distortion types. Evaluated on the TID2013 and LIVE datasets, HIRQM achieves Pearson and Spearman correlations of 0.92 and 0.90, outperforming traditional metrics.
arXiv Detail & Related papers (2025-05-04T06:14:10Z) - Augmenting Perceptual Super-Resolution via Image Quality Predictors [10.586351810396645]
Super-resolution (SR) is inherently ill-posed, inducing a distribution of plausible solutions for every input. In this work, we explore an alternative: utilizing powerful no-reference image quality assessment (NR-IQA) models in the SR context. Our results demonstrate a more human-centric perception-distortion tradeoff, focusing less on non-perceptual pixel-wise distortion and instead improving the balance between perceptual fidelity and human-tuned NR-IQA measures.
arXiv Detail & Related papers (2025-04-25T17:47:38Z) - Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
arXiv Detail & Related papers (2024-04-23T11:45:32Z) - Beyond Score Changes: Adversarial Attack on No-Reference Image Quality Assessment from Two Perspectives [15.575900555433863]
We introduce a new framework of correlation-error-based attacks that perturb both the correlation within an image set and score changes on individual images.
Our research focuses on ranking-related correlation metrics like Spearman's Rank-Order Correlation Coefficient (SROCC) and prediction-error-related metrics like Mean Squared Error (MSE).
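The two metric families named here recur throughout this list (SROCC for rank agreement, MSE for pointwise error). A minimal numpy sketch of both, for reference; this simple version does not handle tied ranks, which the full Spearman definition averages:

```python
import numpy as np

def srocc(x, y):
    """Spearman rank-order correlation: Pearson correlation of the ranks.

    Simplified sketch without tie handling (ties would need averaged ranks).
    """
    def ranks(v):
        order = np.argsort(v)
        r = np.empty(len(v), dtype=float)
        r[order] = np.arange(len(v), dtype=float)
        return r
    rx, ry = ranks(np.asarray(x)), ranks(np.asarray(y))
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx * ry).sum() / np.sqrt((rx ** 2).sum() * (ry ** 2).sum()))

def mse(x, y):
    """Mean squared error between predicted and ground-truth scores."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    return float(np.mean((x - y) ** 2))
```

Note that SROCC depends only on the ordering of the scores, which is why an attack can degrade rank correlation across an image set without large per-image score changes.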
arXiv Detail & Related papers (2024-04-20T05:24:06Z) - Contrastive Pre-Training with Multi-View Fusion for No-Reference Point Cloud Quality Assessment [49.36799270585947]
No-reference point cloud quality assessment (NR-PCQA) aims to automatically evaluate the perceptual quality of distorted point clouds without available reference.
We propose a novel contrastive pre-training framework tailored for PCQA (CoPA).
Our method outperforms the state-of-the-art PCQA methods on popular benchmarks.
arXiv Detail & Related papers (2024-03-15T07:16:07Z) - When No-Reference Image Quality Models Meet MAP Estimation in Diffusion Latents [92.45867913876691]
No-reference image quality assessment (NR-IQA) models can effectively quantify perceived image quality. We show that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement.
arXiv Detail & Related papers (2024-03-11T03:35:41Z) - GMC-IQA: Exploiting Global-correlation and Mean-opinion Consistency for No-reference Image Quality Assessment [40.33163764161929]
We construct a novel loss function and network to exploit Global-correlation and Mean-opinion Consistency.
We propose a novel GCC loss by defining a pairwise preference-based rank estimation to solve the non-differentiable problem of SROCC.
We also propose a mean-opinion network, which integrates diverse opinion features to alleviate the randomness of weight learning.
arXiv Detail & Related papers (2024-01-19T06:03:01Z) - Adaptive Feature Selection for No-Reference Image Quality Assessment by Mitigating Semantic Noise Sensitivity [55.399230250413986]
We propose a Quality-Aware Feature Matching IQA Metric (QFM-IQM) to remove harmful semantic noise features from the upstream task. Our approach achieves superior performance to the state-of-the-art NR-IQA methods on eight standard IQA datasets.
arXiv Detail & Related papers (2023-12-11T06:50:27Z) - You Only Train Once: A Unified Framework for Both Full-Reference and No-Reference Image Quality Assessment [45.62136459502005]
We propose a network to perform both full-reference (FR) and no-reference (NR) IQA.
We first employ an encoder to extract multi-level features from input images.
A Hierarchical Attention (HA) module is proposed as a universal adapter for both FR and NR inputs.
A Semantic Distortion Aware (SDA) module is proposed to examine feature correlations between shallow and deep layers of the encoder.
arXiv Detail & Related papers (2023-10-14T11:03:04Z) - Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes CNN backbone and transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z) - Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z)