Image Quality Assessment: Integrating Model-Centric and Data-Centric Approaches
- URL: http://arxiv.org/abs/2207.14769v2
- Date: Fri, 8 Dec 2023 10:36:21 GMT
- Title: Image Quality Assessment: Integrating Model-Centric and Data-Centric Approaches
- Authors: Peibei Cao, Dingquan Li, and Kede Ma
- Abstract summary: Learning-based image quality assessment (IQA) has made remarkable progress in the past decade.
Nearly all methods consider the two key components -- model and data -- in isolation.
- Score: 20.931709027443706
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning-based image quality assessment (IQA) has made remarkable progress in
the past decade, but nearly all consider the two key components -- model and
data -- in isolation. Specifically, model-centric IQA focuses on developing
"better" objective quality methods on fixed and extensively reused datasets,
with a great danger of overfitting. Data-centric IQA involves conducting
psychophysical experiments to construct "better" human-annotated datasets,
which unfortunately ignores current IQA models during dataset creation. In this
paper, we first design a series of experiments to probe computationally that
such isolation of model and data impedes further progress of IQA. We then
describe a computational framework that integrates model-centric and
data-centric IQA. As a specific example, we design computational modules to
quantify the sampling-worthiness of candidate images. Experimental results show
that the proposed sampling-worthiness module successfully spots diverse
failures of the examined blind IQA models, which are indeed worthy samples to
be included in next-generation datasets.
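One simple, hypothetical proxy for the sampling-worthiness of a candidate image, in the spirit of the abstract, is disagreement among a pool of blind IQA models: images on which the models disagree most are likely failure cases worth annotating next. A minimal numpy sketch (the model predictions below are made up, and the paper's actual module is more elaborate):

```python
import numpy as np

def sampling_worthiness(scores: np.ndarray) -> np.ndarray:
    """Rank candidate images by disagreement among BIQA models.

    scores: (n_models, n_images) array of predicted quality scores.
    Scores are first rank-normalized per model so that models with
    different output ranges become comparable; worthiness is then the
    standard deviation of the normalized ranks across models.
    """
    n_models, n_images = scores.shape
    # Rank-normalize each model's scores to [0, 1].
    ranks = scores.argsort(axis=1).argsort(axis=1) / (n_images - 1)
    return ranks.std(axis=0)

# Hypothetical predictions from three blind IQA models on four images.
preds = np.array([
    [0.9, 0.2, 0.5, 0.7],
    [0.8, 0.3, 0.6, 0.1],   # disagrees with the others on the last image
    [0.7, 0.1, 0.4, 0.2],
])
worthiness = sampling_worthiness(preds)
best = int(worthiness.argmax())   # the image the models disagree on most
```

Maximizing such a disagreement score over a large unlabeled pool would select candidates for the next round of psychophysical testing.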
Related papers
- Boosting CLIP Adaptation for Image Quality Assessment via Meta-Prompt Learning and Gradient Regularization [55.09893295671917]
This paper introduces a novel Gradient-Regulated Meta-Prompt IQA Framework (GRMP-IQA)
The GRMP-IQA comprises two key modules: Meta-Prompt Pre-training Module and Quality-Aware Gradient Regularization.
Experiments on five standard BIQA datasets demonstrate superior performance over state-of-the-art BIQA methods under limited-data settings.
arXiv Detail & Related papers (2024-09-09T07:26:21Z)
- Sliced Maximal Information Coefficient: A Training-Free Approach for Image Quality Assessment Enhancement [12.628718661568048]
We aim to explore a generalized human visual attention estimation strategy to mimic the process of human quality rating.
In particular, we model human attention generation by measuring the statistical dependency between the degraded image and the reference image.
Experimental results verify that the performance of existing IQA models can be consistently improved when our attention module is incorporated.
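The paper's attention maps come from a sliced maximal information coefficient; as a much simpler stand-in, a per-patch Pearson correlation between the reference and degraded images can illustrate the idea of dependency-driven attention (the function name and patch size here are invented for this sketch):

```python
import numpy as np

def dependency_attention(ref: np.ndarray, deg: np.ndarray, patch: int = 8) -> np.ndarray:
    """Crude attention map from local statistical dependency.

    Per-patch Pearson correlation stands in for the paper's sliced
    maximal information coefficient. Low dependency (heavy local
    distortion) receives high attention.
    """
    h, w = ref.shape
    att = np.zeros((h // patch, w // patch))
    for i in range(h // patch):
        for j in range(w // patch):
            r = ref[i*patch:(i+1)*patch, j*patch:(j+1)*patch].ravel()
            d = deg[i*patch:(i+1)*patch, j*patch:(j+1)*patch].ravel()
            if r.std() < 1e-8 or d.std() < 1e-8:
                corr = 1.0   # flat patches: treat as fully dependent
            else:
                corr = float(np.corrcoef(r, d)[0, 1])
            att[i, j] = 1.0 - abs(corr)
    return att / max(att.sum(), 1e-12)   # normalize to a distribution

rng = np.random.default_rng(0)
ref = rng.standard_normal((16, 16))
deg = ref.copy()
deg[8:, 8:] = rng.standard_normal((8, 8))   # distort one quadrant
att = dependency_attention(ref, deg)        # attention piles onto the distorted quadrant
```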
arXiv Detail & Related papers (2024-08-19T11:55:32Z)
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
- Analysis of Video Quality Datasets via Design of Minimalistic Video Quality Models [71.06007696593704]
Blind video quality assessment (BVQA) plays an indispensable role in monitoring and improving the end-users' viewing experience in real-world video-enabled media applications.
As an experimental field, the improvements of BVQA models have been measured primarily on a few human-rated VQA datasets.
We conduct a first-of-its-kind computational analysis of VQA datasets via minimalistic BVQA models.
arXiv Detail & Related papers (2023-07-26T06:38:33Z)
- Learning from Mixed Datasets: A Monotonic Image Quality Assessment Model [17.19991754976893]
We propose a monotonic neural network for IQA model learning with different datasets combined.
In particular, our model consists of a dataset-shared quality regressor and several dataset-specific quality transformers.
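The dataset-specific transformers must be monotonic so that the shared regressor's quality rankings survive each dataset's rescaling. One standard way to build such a map (a sketch under assumed parameterization, not the paper's exact design) is a sum of sigmoid ramps whose heights and slopes are forced positive via softplus:

```python
import numpy as np

def softplus(x):
    return np.log1p(np.exp(x))

def monotonic_transform(q, params):
    """Map a shared latent quality q to one dataset's score scale.

    Monotonicity holds by construction: the map is a sum of sigmoid
    ramps whose heights and slopes, softplus(params[:, 0]) and
    softplus(params[:, 1]), are strictly positive, so rankings from
    the shared regressor are preserved while each dataset gets its
    own (hypothetical) scale and nonlinearity.
    """
    heights = softplus(params[:, 0])   # > 0 -> increasing ramps
    slopes = softplus(params[:, 1])    # > 0
    shifts = params[:, 2]
    q = np.asarray(q)[..., None]
    return (heights / (1.0 + np.exp(-slopes * (q - shifts)))).sum(-1)

rng = np.random.default_rng(1)
params = rng.standard_normal((5, 3))   # 5 ramps, untrained
q = np.linspace(-3, 3, 100)            # shared latent qualities
s = monotonic_transform(q, params)
assert np.all(np.diff(s) > 0)          # strictly increasing for any params
```

Because the mapping is increasing for every parameter setting, gradient training on mixed datasets can never trade away rank consistency.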
arXiv Detail & Related papers (2022-09-21T15:53:59Z)
- Learning brain MRI quality control: a multi-factorial generalization problem [0.0]
This work aimed at evaluating the performance of the MRIQC pipeline on various large-scale datasets.
We focused our analysis on the MRIQC preprocessing steps and tested the pipeline with and without them.
We concluded that a model trained with data from a heterogeneous population, such as the CATI dataset, provides the best scores on unseen data.
arXiv Detail & Related papers (2022-05-31T15:46:44Z)
- Comparing Test Sets with Item Response Theory [53.755064720563]
We evaluate 29 datasets using predictions from 18 pretrained Transformer models on individual test examples.
We find that Quoref, HellaSwag, and MC-TACO are best suited for distinguishing among state-of-the-art models.
We also observe that the span-selection task format, used for QA datasets like QAMR and SQuAD 2.0, is effective in differentiating between strong and weak models.
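A common choice for such analyses is the two-parameter logistic (2PL) IRT model, in which an item's discrimination controls how sharply it separates strong from weak models; the parameter values below are illustrative, not fitted to any dataset:

```python
import math

def p_correct(ability: float, discrimination: float, difficulty: float) -> float:
    """2PL item response model: P(model answers item correctly)."""
    return 1.0 / (1.0 + math.exp(-discrimination * (ability - difficulty)))

# High-discrimination items separate strong from weak models sharply.
strong, weak = 1.5, -1.5                               # hypothetical abilities
sharp = p_correct(strong, 3.0, 0.0) - p_correct(weak, 3.0, 0.0)
flat = p_correct(strong, 0.3, 0.0) - p_correct(weak, 0.3, 0.0)
assert sharp > flat   # the sharper item differentiates the models better
```

Ranking test items by fitted discrimination is one way to identify which datasets best distinguish state-of-the-art models.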
arXiv Detail & Related papers (2021-06-01T22:33:53Z)
- Continual Learning for Blind Image Quality Assessment [80.55119990128419]
Blind image quality assessment (BIQA) models fail to continually adapt to subpopulation shift.
Recent work suggests training BIQA methods on the combination of all available human-rated IQA datasets.
We formulate continual learning for BIQA, where a model learns continually from a stream of IQA datasets.
arXiv Detail & Related papers (2021-02-19T03:07:01Z)
- DQI: Measuring Data Quality in NLP [22.54066527822898]
We introduce a generic formula for Data Quality Index (DQI) to help dataset creators create datasets free of unwanted biases.
We show that models trained on the renovated SNLI dataset generalize better to out-of-distribution tasks.
arXiv Detail & Related papers (2020-05-02T12:34:17Z)
- Template-Based Question Generation from Retrieved Sentences for Improved Unsupervised Question Answering [98.48363619128108]
We propose an unsupervised approach to training QA models with generated pseudo-training data.
We show that generating questions for QA training by applying a simple template on a related, retrieved sentence rather than the original context sentence improves downstream QA performance.
arXiv Detail & Related papers (2020-04-24T17:57:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.