Active Fine-Tuning from gMAD Examples Improves Blind Image Quality Assessment
- URL: http://arxiv.org/abs/2003.03849v2
- Date: Thu, 8 Apr 2021 10:45:16 GMT
- Title: Active Fine-Tuning from gMAD Examples Improves Blind Image Quality Assessment
- Authors: Zhihua Wang and Kede Ma
- Abstract summary: We show that gMAD examples can be used to improve blind IQA (BIQA) methods.
Specifically, we first pre-train a DNN-based BIQA model using multiple noisy annotators.
We then seek pairs of images by comparing the baseline model with a set of full-reference IQA methods in gMAD.
- Score: 29.196117743753813
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The research in image quality assessment (IQA) has a long history, and
significant progress has been made by leveraging recent advances in deep neural
networks (DNNs). Despite achieving high correlations with human ratings on
existing IQA datasets, DNN-based models may be easily falsified in the group
maximum differentiation (gMAD) competition, where strong counterexamples can be
identified. Here we show
that gMAD examples can be used to improve blind IQA (BIQA) methods.
Specifically, we first pre-train a DNN-based BIQA model using multiple noisy
annotators, and fine-tune it on multiple subject-rated databases of
synthetically distorted images, resulting in a top-performing baseline model.
We then seek pairs of images by comparing the baseline model with a set of
full-reference IQA methods in gMAD. The resulting gMAD examples are most likely
to reveal the relative weaknesses of the baseline, and suggest potential ways
for refinement. We query ground truth quality annotations for the selected
images in a well-controlled laboratory environment, and further fine-tune the
baseline on the combination of human-rated images from gMAD and existing
databases. This process may be iterated, enabling active and progressive
fine-tuning from gMAD examples for BIQA. We demonstrate the feasibility of our
active learning scheme on a large-scale unlabeled image set, and show that the
fine-tuned method achieves improved generalizability in gMAD, without
degrading performance on previously seen databases.
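The procedure above amounts to a simple active-learning loop. The sketch below illustrates one round of it, assuming hypothetical stand-ins for every component (score-predicting callables for the baseline and the full-reference models, `collect_mos` for laboratory annotation, and `fine_tune` for model updating); it mimics the selection logic described in the abstract rather than reproducing the authors' code.

```python
# A minimal sketch of one round of active fine-tuning from gMAD examples.
# All names here are hypothetical stand-ins, not the authors' released code.
import numpy as np

def gmad_pair(attacker, defender, pool, num_levels=5):
    """Return the image pair on which the attacker disagrees most while
    the defender rates both images at (roughly) the same quality level."""
    d = np.array([defender(x) for x in pool])
    a = np.array([attacker(x) for x in pool])
    # Bin the unlabeled pool into quality levels using the defender's scores.
    edges = np.quantile(d, np.linspace(0.0, 1.0, num_levels + 1))
    best = None
    for lo, hi in zip(edges[:-1], edges[1:]):
        idx = np.flatnonzero((d >= lo) & (d <= hi))
        if idx.size < 2:
            continue
        i_max = idx[np.argmax(a[idx])]  # attacker: best image in the level
        i_min = idx[np.argmin(a[idx])]  # attacker: worst image in the level
        gap = a[i_max] - a[i_min]
        if best is None or gap > best[0]:
            best = (gap, i_max, i_min)
    return best  # (disagreement, index of "best", index of "worst")

def active_round(baseline, fr_models, pool, labeled, collect_mos, fine_tune):
    """One iteration of the scheme: mine gMAD pairs against each
    full-reference model (in both attack directions), query human
    ratings for the selected images, and fine-tune on old + new labels."""
    selected = set()
    for fr in fr_models:
        for attacker, defender in ((baseline, fr), (fr, baseline)):
            found = gmad_pair(attacker, defender, pool)
            if found is not None:
                selected.update(found[1:])
    newly_rated = [(pool[i], collect_mos(pool[i])) for i in selected]
    return fine_tune(baseline, labeled + newly_rated)
```

In this sketch, gMAD's equal-quality constraint is approximated by binning the unlabeled pool into quality levels under the defender's scores; within each level, the pair on which the attacker disagrees most is the counterexample most likely to expose a weakness of the baseline.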
Related papers
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
- Comparison of No-Reference Image Quality Models via MAP Estimation in Diffusion Latents [99.19391983670569]
We show that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement.
Different NR-IQA models are likely to induce different enhanced images, which are ultimately subject to psychophysical testing.
This leads to a new computational method for comparing NR-IQA models within the analysis-by-synthesis framework.
arXiv Detail & Related papers (2024-03-11T03:35:41Z)
- Depicting Beyond Scores: Advancing Image Quality Assessment through Multi-modal Language Models [28.194638379354252]
We introduce a Depicted image Quality Assessment method (DepictQA), overcoming the constraints of traditional score-based methods.
DepictQA allows for detailed, language-based, human-like evaluation of image quality by leveraging Multi-modal Large Language Models.
These results showcase the research potential of multi-modal IQA methods.
arXiv Detail & Related papers (2023-12-14T14:10:02Z)
- Deep Neural Networks for Blind Image Quality Assessment: Addressing the Data Challenge [20.97986692607478]
It is difficult to create human-rated IQA datasets composed of millions of images due to constraints of subjective testing.
We construct a DNN-based BIQA model called EONSS, train it on Waterloo Exploration-II, and test it on nine subject-rated IQA datasets.
arXiv Detail & Related papers (2021-09-24T19:48:52Z)
- Troubleshooting Blind Image Quality Models in the Wild [99.96661607178677]
Group maximum differentiation competition (gMAD) has been used to improve blind image quality assessment (BIQA) models.
We construct a set of "self-competitors," random ensembles of pruned versions of the target model to be improved.
Diverse failures can then be efficiently identified via self-gMAD competition.
arXiv Detail & Related papers (2021-05-14T10:10:48Z)
- Continual Learning for Blind Image Quality Assessment [80.55119990128419]
Blind image quality assessment (BIQA) models fail to continually adapt to subpopulation shift.
Recent work suggests training BIQA methods on the combination of all available human-rated IQA datasets.
We formulate continual learning for BIQA, where a model learns continually from a stream of IQA datasets.
arXiv Detail & Related papers (2021-02-19T03:07:01Z)
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs; a sketch of this loss appears after this list.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
- MetaIQA: Deep Meta-learning for No-Reference Image Quality Assessment [73.55944459902041]
This paper presents a no-reference IQA metric based on deep meta-learning.
We first collect a number of NR-IQA tasks for different distortions.
Then meta-learning is adopted to learn the prior knowledge shared by diversified distortions.
Extensive experiments demonstrate that the proposed metric outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-04-11T23:36:36Z)
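For context on the fidelity loss mentioned in the Uncertainty-Aware entry above, a common form from the learning-to-rank literature is sketched here; pairing it with a Thurstone-style preference probability is an assumption about that paper's setup, not a quotation from it. For an image pair (x, y) with ground-truth preference p and model prediction \hat{p}:

```latex
% Fidelity loss for a single image pair (a sketch; notation assumed).
% p: probability that x is of higher perceptual quality than y (ground truth).
% \hat{p}: predicted preference, here from a Thurstone-style comparison of
% quality means f(.) and variances \sigma^2(.), with \Phi the normal CDF.
\ell(p, \hat{p}) = 1 - \sqrt{p\,\hat{p}} - \sqrt{(1 - p)(1 - \hat{p})},
\qquad
\hat{p} = \Phi\!\left(\frac{f(x) - f(y)}{\sqrt{\sigma^{2}(x) + \sigma^{2}(y)}}\right).
```

The loss is zero only when \hat{p} = p and is bounded, which makes it well suited to learning from noisy pairwise comparisons.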
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of the information it provides and is not responsible for any consequences of its use.