JNDMix: JND-Based Data Augmentation for No-reference Image Quality Assessment
- URL: http://arxiv.org/abs/2302.09838v1
- Date: Mon, 20 Feb 2023 08:55:00 GMT
- Title: JNDMix: JND-Based Data Augmentation for No-reference Image Quality Assessment
- Authors: Jiamu Sheng, Jiayuan Fan, Peng Ye, Jianjian Cao
- Abstract summary: We propose an effective and general data augmentation method based on just noticeable difference (JND) noise mixing for the NR-IQA task.
In detail, we randomly inject the JND noise, imperceptible to the human visual system (HVS), into the training image without any adjustment to its label.
Extensive experiments demonstrate that JNDMix significantly improves the performance and data efficiency of various state-of-the-art NR-IQA models.
- Score: 5.0789200970424035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite substantial progress in no-reference image quality assessment
(NR-IQA), previously trained models often suffer from over-fitting due to the
limited scale of the datasets used, resulting in model performance bottlenecks.
To tackle this challenge, we explore the potential of leveraging data
augmentation to improve data efficiency and enhance model robustness. However,
most existing data augmentation methods incur a serious issue: they alter the
image quality, so that training images no longer match their original labels.
Additionally, although a few data augmentation methods are available for the
NR-IQA task, their ability to enrich dataset diversity is still insufficient.
To address these issues, we propose an effective and general data augmentation
method based on just noticeable difference (JND) noise mixing for the NR-IQA
task, named JNDMix. In detail, we randomly inject JND noise, imperceptible to
the human visual system (HVS), into the training image without any adjustment
to its label. Extensive experiments demonstrate that JNDMix significantly
improves the performance and data efficiency of various state-of-the-art
NR-IQA models and commonly used baseline models, as well as their
generalization ability. More importantly, JNDMix enables MANIQA to achieve
state-of-the-art performance on LIVEC and KonIQ-10k.
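The augmentation itself is easy to picture. Below is a minimal PyTorch sketch of the JNDMix idea: noise bounded by a per-pixel JND map is mixed into the training image while the quality label is left untouched. The simple luminance-adaptation JND estimate and its constants are stand-ins, not the paper's actual JND model.

```python
import torch

def luminance_jnd(img: torch.Tensor) -> torch.Tensor:
    """Crude luminance-adaptation JND estimate, a stand-in for the learned
    JND model the paper would use.

    img: (C, H, W) tensor in [0, 1]. Returns a (1, H, W) per-pixel
    visibility threshold; real JND models also include texture masking.
    """
    lum = img.mean(dim=0, keepdim=True)          # grayscale proxy
    # Thresholds rise in very dark and very bright regions.
    return 0.02 + 0.05 * (2.0 * lum - 1.0).abs()

def jndmix(img: torch.Tensor, strength: float = 1.0) -> torch.Tensor:
    """Mix random noise bounded by the JND map into the image; the label
    stays unchanged because the perturbation is (approximately) invisible
    to the HVS."""
    jnd = luminance_jnd(img)
    alpha = (2 * torch.rand(()) - 1) * strength  # random mixing coefficient
    noise = torch.rand_like(img) * 2 - 1         # uniform in [-1, 1]
    return (img + alpha * jnd * noise).clamp(0, 1)

# Usage in a training loop: augment the image, keep the MOS label as-is.
x = torch.rand(3, 224, 224)
x_aug = jndmix(x)   # label(x_aug) == label(x)
```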
Related papers
- Rejection Sampling IMLE: Designing Priors for Better Few-Shot Image
Synthesis [7.234618871984921]
An emerging area of research aims to learn deep generative models with limited training data.
We propose RS-IMLE, a novel approach that changes the prior distribution used for training.
This leads to substantially higher quality image generation compared to existing GAN and IMLE-based methods.
arXiv Detail & Related papers (2024-09-26T00:19:42Z)
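A hedged sketch of the rejection-sampling idea above, under one plausible reading of the summary: prior samples whose generated outputs already fall close to the training point are discarded before the usual IMLE nearest-sample matching. The toy generator, data shapes, and threshold are hypothetical.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2
g = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, data_dim))

def rs_imle_loss(x: torch.Tensor, n_samples: int = 64,
                 eps: float = 0.5) -> torch.Tensor:
    """IMLE loss for one data point x, with a rejection step on the prior:
    latents whose outputs already fall within eps of x are discarded, so
    training signal comes from regions the generator does not yet cover."""
    z = torch.randn(n_samples, latent_dim)
    with torch.no_grad():
        dist = (g(z) - x).norm(dim=1)
    z = z[dist > eps]                    # rejection step on the prior
    if len(z) == 0:                      # every sample already covers x
        return torch.zeros((), requires_grad=True)
    dist = (g(z) - x).norm(dim=1)        # recompute with gradients
    return dist.min()                    # pull the nearest survivor to x

loss = rs_imle_loss(torch.tensor([1.0, -0.5]))
loss.backward()
```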
- MSLIQA: Enhancing Learning Representations for Image Quality Assessment through Multi-Scale Learning [6.074775040047959]
We improve the performance of a generic lightweight NR-IQA model by introducing a novel augmentation strategy.
This augmentation strategy enables the network to better discriminate between different distortions in various parts of the image by zooming in and out.
The inclusion of test-time augmentation further enhances performance, making our lightweight network's results comparable to the current state-of-the-art models.
arXiv Detail & Related papers (2024-08-29T20:05:02Z)
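A minimal sketch of the zoom-based augmentation and test-time augmentation described in the entry above; the scale sets and the crop-back-to-size behavior are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn as nn
import torchvision.transforms.functional as TF

def random_zoom(img: torch.Tensor,
                scales=(0.5, 0.75, 1.0, 1.25, 1.5)) -> torch.Tensor:
    """Zoom in or out by resizing to a random scale, then crop (zoom-in)
    or zero-pad (zoom-out) back to the original size, exposing the model
    to distortions at multiple scales."""
    _, h, w = img.shape
    s = float(scales[int(torch.randint(len(scales), ()))])
    img = TF.resize(img, [max(1, int(h * s)), max(1, int(w * s))],
                    antialias=True)
    return TF.center_crop(img, [h, w])   # center_crop pads when smaller

@torch.no_grad()
def predict_tta(model: nn.Module, img: torch.Tensor,
                scales=(0.75, 1.0, 1.25)) -> torch.Tensor:
    """Test-time augmentation: average quality predictions over zoom levels."""
    _, h, w = img.shape
    preds = [model(TF.center_crop(
        TF.resize(img, [int(h * s), int(w * s)], antialias=True),
        [h, w]).unsqueeze(0)) for s in scales]
    return torch.stack(preds).mean(dim=0)
```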
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose DP-IQA, a novel IQA method based on diffusion priors.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
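One way to tap a pre-trained T2I diffusion model as a frozen feature extractor, sketched with the diffusers library; the choice of blocks, timestep, and the empty text condition are assumptions, and DP-IQA's actual feature taps and readout head may differ.

```python
import torch
from diffusers import UNet2DConditionModel

# Frozen Stable Diffusion UNet used purely as a feature extractor.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet").eval()

feats = []
def grab(_module, _inputs, output):
    # Down-blocks return (hidden_states, residuals); keep hidden_states.
    feats.append(output[0] if isinstance(output, tuple) else output)

for block in unet.down_blocks:
    block.register_forward_hook(grab)

@torch.no_grad()
def diffusion_features(latents: torch.Tensor, t: int = 50):
    """One noisy forward pass through the UNet; returns multi-scale features."""
    feats.clear()
    cond = torch.zeros(latents.shape[0], 77, 768)   # empty text condition
    unet(latents, torch.tensor(t), encoder_hidden_states=cond)
    return list(feats)

# In a DP-IQA-like pipeline, `latents` would come from the SD VAE encoder
# applied to the input image; random latents are used here just for shape.
fs = diffusion_features(torch.randn(1, 4, 64, 64))
```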
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA).
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
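The feature-statistics idea above can be sketched NIQE-style: fit a multivariate Gaussian to pristine-image deep features, then score a test image by its distance to that model, with no human opinion scores involved. The Mahalanobis-style distance below is one possible instantiation, not necessarily the paper's.

```python
import torch

def fit_gaussian(feats: torch.Tensor):
    """Fit a multivariate Gaussian to pristine-image deep features.
    feats: (N, D) pooled features from a pre-trained backbone."""
    return feats.mean(dim=0), torch.cov(feats.T)

def quality_distance(f: torch.Tensor, mu: torch.Tensor,
                     cov: torch.Tensor) -> torch.Tensor:
    """Distance of a test feature to the pristine-feature model; a larger
    distance means worse predicted quality, with no human labels needed."""
    d = f - mu
    return torch.sqrt(d @ torch.linalg.pinv(cov) @ d)

pristine = torch.randn(500, 64)      # stand-in for real backbone features
mu, cov = fit_gaussian(pristine)
score = quality_distance(torch.randn(64), mu, cov)
```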
- DetDiffusion: Synergizing Generative and Perceptive Models for Enhanced Data Generation and Perception [78.26734070960886]
Current perceptive models heavily depend on resource-intensive datasets.
We introduce perception-aware loss (P.A. loss) through segmentation, improving both quality and controllability.
Our method customizes data augmentation by extracting and utilizing perception-aware attribute (P.A. Attr) during generation.
arXiv Detail & Related papers (2024-03-20T04:58:03Z)
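A hypothetical composition of the perception-aware loss named above, assuming it augments the standard denoising objective with a segmentation term from a perceptive model; the weighting and shapes are illustrative only.

```python
import torch
import torch.nn.functional as F

def perception_aware_loss(noise_pred: torch.Tensor, noise_target: torch.Tensor,
                          seg_logits: torch.Tensor, seg_labels: torch.Tensor,
                          lam: float = 0.1) -> torch.Tensor:
    """Standard diffusion denoising objective plus a segmentation term that
    pushes generated content to stay recognizable to a perceptive model."""
    l_diff = F.mse_loss(noise_pred, noise_target)
    l_seg = F.cross_entropy(seg_logits, seg_labels)
    return l_diff + lam * l_seg

loss = perception_aware_loss(
    torch.randn(2, 4, 64, 64), torch.randn(2, 4, 64, 64),           # noise pair
    torch.randn(2, 21, 64, 64), torch.randint(0, 21, (2, 64, 64)))  # seg pair
```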
- Diffusion Model Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment [82.13830107682232]
We build on diffusion models, a class of state-of-the-art (SOTA) generative models capable of modeling intricate relationships.
We devise a new diffusion restoration network that leverages the produced enhanced image and noise-containing images.
Two visual evaluation branches are designed to comprehensively analyze the obtained high-level feature information.
arXiv Detail & Related papers (2024-02-22T09:39:46Z)
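A loose sketch of the two-branch pattern the summary describes: one branch encodes the distorted image, another encodes its difference to a diffusion-restored version, and the features are fused into a score. The tiny encoder and fusion head are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TwoBranchIQA(nn.Module):
    """Visual-compensation branch reads the distorted image; the
    visual-difference branch reads (restored - distorted)."""
    def __init__(self, dim: int = 32):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, dim, 3, 2, 1), nn.ReLU(),
                                 nn.Conv2d(dim, dim, 3, 2, 1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(2 * dim, 1)

    def forward(self, distorted, restored):
        f_img = self.enc(distorted)               # compensation branch
        f_diff = self.enc(restored - distorted)   # difference branch
        return self.head(torch.cat([f_img, f_diff], dim=1))

model = TwoBranchIQA()
score = model(torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224))
```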
- MSTRIQ: No Reference Image Quality Assessment Based on Swin Transformer with Multi-Stage Fusion [8.338999282303755]
We propose a novel algorithm based on the Swin Transformer.
It aggregates information from both local and global features to better predict the quality.
It ranks 2nd in the no-reference track of NTIRE 2022 Perceptual Image Quality Assessment Challenge.
arXiv Detail & Related papers (2022-05-20T11:34:35Z)
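A sketch of multi-stage fusion on a Swin backbone (torchvision's swin_t here): pooled features from every stage are concatenated and regressed to a quality score. The fusion scheme and head are simplified assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn
from torchvision.models import swin_t

class MultiStageSwinIQA(nn.Module):
    """Pool features from every Swin stage (local-to-global) and regress
    a quality score from their concatenation."""
    def __init__(self):
        super().__init__()
        self.backbone = swin_t(weights=None)
        self.stages = {}
        # torchvision's Swin keeps tensors as (N, H, W, C) inside
        # `features`; stage outputs sit at indices 1, 3, 5, 7.
        for i in (1, 3, 5, 7):
            self.backbone.features[i].register_forward_hook(
                lambda m, inp, out, i=i: self.stages.__setitem__(i, out))
        self.head = nn.Linear(96 + 192 + 384 + 768, 1)

    def forward(self, x):
        self.stages.clear()
        self.backbone(x)   # classifier output is ignored; hooks fill stages
        pooled = [self.stages[i].mean(dim=(1, 2)) for i in (1, 3, 5, 7)]
        return self.head(torch.cat(pooled, dim=1))

model = MultiStageSwinIQA()
print(model(torch.rand(1, 3, 224, 224)).shape)   # (1, 1)
```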
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes a CNN backbone and a transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
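The CNN-backbone-plus-transformer-encoder pattern might look like the following; only the NR path is shown, and in FR mode reference-image tokens could be added to the sequence. Dimensions and depth are placeholders.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class CNNTransformerIQA(nn.Module):
    """CNN feature maps become a token sequence for a transformer encoder;
    a quality score is regressed from the pooled tokens."""
    def __init__(self, dim: int = 256):
        super().__init__()
        cnn = resnet50(weights=None)
        self.backbone = nn.Sequential(*list(cnn.children())[:-2])  # to C5
        self.proj = nn.Conv2d(2048, dim, 1)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(dim, 1)

    def forward(self, x):
        f = self.proj(self.backbone(x))          # (B, dim, h, w)
        tokens = f.flatten(2).transpose(1, 2)    # (B, h*w, dim)
        return self.head(self.encoder(tokens).mean(dim=1))

model = CNNTransformerIQA()
print(model(torch.rand(1, 3, 224, 224)).shape)   # (1, 1)
```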
- Continual Learning for Blind Image Quality Assessment [80.55119990128419]
Blind image quality assessment (BIQA) models fail to continually adapt to subpopulation shift.
Recent work suggests training BIQA methods on the combination of all available human-rated IQA datasets.
We formulate continual learning for BIQA, where a model learns continually from a stream of IQA datasets.
arXiv Detail & Related papers (2021-02-19T03:07:01Z)
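As a naive baseline for the continual setting formulated above, a replay buffer can mix past (image, MOS) pairs into training on each new dataset; the paper's actual method is more sophisticated than plain replay, so this is only a reference point.

```python
import random
import torch
import torch.nn as nn

def train_continual(model: nn.Module, dataset_stream, opt, loss_fn,
                    replay_size: int = 512) -> None:
    """Train on a stream of IQA datasets, mixing replayed past samples
    into every step to reduce forgetting of earlier subpopulations.
    Each dataset yields (img: (3, H, W) tensor, mos: scalar tensor)."""
    buffer = []                            # stores past (img, mos) pairs
    for dataset in dataset_stream:         # e.g. LIVE, CSIQ, KonIQ-10k, ...
        for img, mos in dataset:
            batch = [(img, mos)] + random.sample(buffer, min(8, len(buffer)))
            x = torch.stack([b[0] for b in batch])
            y = torch.stack([b[1] for b in batch])
            opt.zero_grad()
            loss_fn(model(x).squeeze(-1), y).backward()
            opt.step()
            if len(buffer) < replay_size:  # crude reservoir-style update
                buffer.append((img, mos))
            else:
                buffer[random.randrange(replay_size)] = (img, mos)
```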
- No-Reference Image Quality Assessment via Feature Fusion and Multi-Task Learning [29.19484863898778]
Blind or no-reference image quality assessment (NR-IQA) is a fundamental, unsolved, and yet challenging problem.
We propose a simple and yet effective general-purpose no-reference (NR) image quality assessment framework based on multi-task learning.
Our model employs distortion types as well as subjective human scores to predict image quality.
arXiv Detail & Related papers (2020-06-06T05:04:10Z)
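The multi-task setup described in the last entry can be sketched as a shared encoder with a distortion-type classification head and a quality regression head trained jointly; all sizes here are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskIQA(nn.Module):
    """Shared encoder; one head classifies the distortion type, the other
    regresses the subjective quality score."""
    def __init__(self, n_distortions: int = 25, dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, dim, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.type_head = nn.Linear(dim, n_distortions)
        self.score_head = nn.Linear(dim, 1)

    def forward(self, x):
        f = self.encoder(x)
        return self.type_head(f), self.score_head(f).squeeze(-1)

model = MultiTaskIQA()
logits, score = model(torch.rand(4, 3, 128, 128))
labels, mos = torch.randint(0, 25, (4,)), torch.rand(4)
loss = F.cross_entropy(logits, labels) + F.mse_loss(score, mos)  # joint loss
```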