Test Time Adaptation for Blind Image Quality Assessment
- URL: http://arxiv.org/abs/2307.14735v3
- Date: Tue, 26 Sep 2023 12:37:06 GMT
- Title: Test Time Adaptation for Blind Image Quality Assessment
- Authors: Subhadeep Roy, Shankhanil Mitra, Soma Biswas and Rajiv Soundararajan
- Abstract summary: We introduce two novel quality-relevant auxiliary tasks at the batch and sample levels to enable TTA for blind IQA.
Our experiments reveal that even using a small batch of images from the test distribution helps achieve significant improvement in performance.
- Score: 20.50795362928567
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While the design of blind image quality assessment (IQA) algorithms has
improved significantly, the distribution shift between the training and testing
scenarios often leads to poor performance of these methods at inference time.
This motivates the study of test time adaptation (TTA) techniques to improve
their performance at inference time. Existing auxiliary tasks and loss
functions used for TTA may not be relevant for quality-aware adaptation of the
pre-trained model. In this work, we introduce two novel quality-relevant
auxiliary tasks at the batch and sample levels to enable TTA for blind IQA. In
particular, we introduce a group contrastive loss at the batch level and a
relative rank loss at the sample level to make the model quality aware and
adapt to the target data. Our experiments reveal that even using a small batch
of images from the test distribution helps achieve significant improvement in
performance by updating the batch normalization statistics of the source model.
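The two ingredients named in the abstract, refreshing batch normalization statistics on a small test batch and a sample-level ranking objective, can be sketched as follows. The function names, the margin formulation, and the use of a degraded copy of each image as a ranking pseudo-label are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

def adapt_bn_statistics(model: nn.Module, test_batch: torch.Tensor) -> None:
    """Refresh BatchNorm running statistics from a small test batch.

    Switching BN layers to train mode and running one forward pass makes
    them re-estimate running mean/variance on the test distribution,
    while all learned weights stay frozen.
    """
    model.eval()
    for m in model.modules():
        if isinstance(m, nn.modules.batchnorm._BatchNorm):
            m.reset_running_stats()
            m.train()                      # use and update batch statistics
    with torch.no_grad():
        model(test_batch)                  # one pass refreshes the statistics

def relative_rank_loss(score_clean: torch.Tensor,
                       score_degraded: torch.Tensor,
                       margin: float = 0.1) -> torch.Tensor:
    """Sample-level margin ranking loss (an assumed form): a less-distorted
    version of an image should receive a higher predicted quality score."""
    return torch.relu(margin - (score_clean - score_degraded)).mean()
```

The rank loss is zero whenever the cleaner version already outscores its degraded counterpart by at least the margin, so adaptation only nudges the model on pairs it currently misorders.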
Related papers
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose DP-IQA, a novel diffusion-prior-based IQA method.
arXiv Detail & Related papers (2024-05-30T12:32:35Z) - Multi-Modal Prompt Learning on Blind Image Quality Assessment [65.0676908930946]
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
Traditional methods, hindered by a lack of sufficiently annotated data, have employed the CLIP image-text pretraining model as their backbone to gain semantic awareness.
Recent approaches have attempted to address this mismatch using prompt technology, but these solutions have shortcomings.
This paper introduces an innovative multi-modal prompt-based methodology for IQA.
arXiv Detail & Related papers (2024-04-23T11:45:32Z) - GraFIQs: Face Image Quality Assessment Using Gradient Magnitudes [9.170455788675836]
Face Image Quality Assessment (FIQA) estimates the utility of face images for automated face recognition (FR) systems.
We propose in this work a novel approach to assess the quality of face images based on inspecting the required changes in the pre-trained FR model weights.
arXiv Detail & Related papers (2024-04-18T14:07:08Z) - IG-FIQA: Improving Face Image Quality Assessment through Intra-class Variance Guidance robust to Inaccurate Pseudo-Labels [13.567049202308981]
We present IG-FIQA, a novel approach to guide FIQA training, introducing a weight parameter to alleviate the adverse impact of classes with inaccurate pseudo-labels.
On various benchmark datasets, our proposed method, IG-FIQA, achieves new state-of-the-art (SOTA) performance.
arXiv Detail & Related papers (2024-03-13T05:15:43Z) - A Lightweight Parallel Framework for Blind Image Quality Assessment [7.9562077122537875]
We propose a lightweight parallel framework (LPF) for blind image quality assessment (BIQA).
First, we extract visual features using a pre-trained feature extraction network. Then, we construct a simple yet effective feature embedding network (FEN) to transform the visual features.
We present two novel self-supervised subtasks, including a sample-level category prediction task and a batch-level quality comparison task.
arXiv Detail & Related papers (2024-02-19T10:56:58Z) - A Quality Aware Sample-to-Sample Comparison for Face Recognition [13.96448286983864]
This work integrates a quality-aware learning process at the sample level into the classification training paradigm (QAFace).
Our method adaptively finds and assigns more attention to the recognizable low-quality samples in the training datasets.
arXiv Detail & Related papers (2023-06-06T20:28:04Z) - Quality-aware Pre-trained Models for Blind Image Quality Assessment [15.566552014530938]
Blind image quality assessment (BIQA) aims to automatically evaluate the perceived quality of a single image.
In this paper, we propose to solve the problem by a pretext task customized for BIQA in a self-supervised learning manner.
arXiv Detail & Related papers (2023-03-01T13:52:40Z) - DELTA: degradation-free fully test-time adaptation [59.74287982885375]
We find that two unfavorable defects are concealed in the prevalent adaptation methodologies like test-time batch normalization (BN) and self-learning.
First, we reveal that the normalization statistics in test-time BN are completely affected by the currently received test samples, resulting in inaccurate estimates.
Second, we show that during test-time adaptation, the parameter update is biased towards some dominant classes.
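One way to read the first defect is that test-time BN statistics should not come solely from the current batch. A minimal sketch of that idea (not DELTA's actual method; the class name and `blend` parameter are assumptions) blends frozen source statistics with batch statistics:

```python
import torch
import torch.nn as nn

class BlendedBN2d(nn.Module):
    """Wraps a pre-trained BatchNorm2d and normalizes with a convex
    combination of the frozen source running statistics and the current
    test-batch statistics, so a small or skewed test batch cannot fully
    override the source estimates."""

    def __init__(self, bn: nn.BatchNorm2d, blend: float = 0.9):
        super().__init__()
        self.bn, self.blend = bn, blend

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        mean = x.mean(dim=(0, 2, 3))                    # current batch stats
        var = x.var(dim=(0, 2, 3), unbiased=False)
        mean = self.blend * self.bn.running_mean + (1 - self.blend) * mean
        var = self.blend * self.bn.running_var + (1 - self.blend) * var
        x_hat = (x - mean.view(1, -1, 1, 1)) / torch.sqrt(
            var.view(1, -1, 1, 1) + self.bn.eps)
        return x_hat * self.bn.weight.view(1, -1, 1, 1) + self.bn.bias.view(1, -1, 1, 1)
```

With `blend=1.0` this reduces to the source model's eval-mode BN; with `blend=0.0` it reduces to plain test-time BN, the failure case described above.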
arXiv Detail & Related papers (2023-01-30T15:54:00Z) - Efficient Test-Time Model Adaptation without Forgetting [60.36499845014649]
Test-time adaptation seeks to tackle potential distribution shifts between training and testing data.
We propose an active sample selection criterion to identify reliable and non-redundant samples.
We also introduce a Fisher regularizer to constrain important model parameters from drastic changes.
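The Fisher regularizer described here is in the spirit of elastic weight consolidation; a hedged sketch follows, where the `fisher`/`anchor` dictionaries and the function signature are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

def fisher_penalty(model: nn.Module,
                   fisher: dict,
                   anchor: dict,
                   lam: float = 1.0) -> torch.Tensor:
    """Penalize movement of parameters away from the source model,
    weighted by a (diagonal) Fisher importance estimate, so adaptation
    does not drastically change parameters the source task relies on."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        penalty = penalty + (fisher[name] * (p - anchor[name]) ** 2).sum()
    return lam * penalty
```

Adding this term to the adaptation loss keeps important weights near their pre-trained values, which is what prevents the forgetting named in the title.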
arXiv Detail & Related papers (2022-04-06T06:39:40Z) - Task-Specific Normalization for Continual Learning of Blind Image Quality Models [105.03239956378465]
We present a simple yet effective continual learning method for blind image quality assessment (BIQA).
The key step in our approach is to freeze all convolution filters of a pre-trained deep neural network (DNN) for an explicit promise of stability.
We assign each new IQA dataset (i.e., task) a prediction head, and load the corresponding normalization parameters to produce a quality score.
The final quality estimate is computed by a weighted summation of predictions from all heads with a lightweight $K$-means gating mechanism.
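The gating step can be sketched as distance-based soft routing over task centroids; the tensor shapes, softmax temperature, and function name below are illustrative assumptions, not the paper's exact mechanism.

```python
import torch

def gated_quality_score(feat: torch.Tensor,
                        head_scores: torch.Tensor,
                        centroids: torch.Tensor,
                        tau: float = 1.0) -> torch.Tensor:
    """Weighted summation of per-task head predictions, gated by the
    distance of a feature vector to each task's K-means centroid.

    feat:        (D,) feature of the test image
    head_scores: (T,) quality prediction from each task-specific head
    centroids:   (T, D) one centroid per task
    """
    dist = torch.cdist(feat.unsqueeze(0), centroids).squeeze(0)  # (T,)
    weights = torch.softmax(-dist / tau, dim=-1)  # closer task -> larger weight
    return (weights * head_scores).sum()
```

A feature that sits near one task's centroid draws its quality estimate almost entirely from that task's head, while ambiguous features blend several heads.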
arXiv Detail & Related papers (2021-07-28T15:21:01Z) - Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
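The fidelity loss over an image pair can be sketched as follows; this is a known learning-to-rank formulation, but the signature and the `eps` smoothing term are assumptions.

```python
import torch

def fidelity_loss(pred_prob: torch.Tensor,
                  gt_prob: torch.Tensor,
                  eps: float = 1e-8) -> torch.Tensor:
    """Fidelity loss between the predicted and ground-truth probability
    that image A is of higher quality than image B in a pair; it is
    (approximately) zero when the two probabilities agree and grows as
    they diverge."""
    return (1.0
            - torch.sqrt(pred_prob * gt_prob + eps)
            - torch.sqrt((1.0 - pred_prob) * (1.0 - gt_prob) + eps))
```

Unlike a hard 0/1 ranking target, this loss accepts soft ground-truth probabilities, which is what makes it suitable for noisy pairwise quality annotations.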
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.