JNDMix: JND-Based Data Augmentation for No-reference Image Quality
Assessment
- URL: http://arxiv.org/abs/2302.09838v1
- Date: Mon, 20 Feb 2023 08:55:00 GMT
- Title: JNDMix: JND-Based Data Augmentation for No-reference Image Quality
Assessment
- Authors: Jiamu Sheng, Jiayuan Fan, Peng Ye, Jianjian Cao
- Abstract summary: We propose an effective and general data augmentation method based on just noticeable difference (JND) noise mixing for the NR-IQA task.
In detail, we randomly inject JND noise, imperceptible to the human visual system (HVS), into the training image without any adjustment to its label.
Extensive experiments demonstrate that JNDMix significantly improves the performance and data efficiency of various state-of-the-art NR-IQA models.
- Score: 5.0789200970424035
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite substantial progress in no-reference image quality assessment
(NR-IQA), previous training models often suffer from over-fitting due to the
limited scale of used datasets, resulting in model performance bottlenecks. To
tackle this challenge, we explore the potential of leveraging data augmentation
to improve data efficiency and enhance model robustness. However, most existing
data augmentation methods suffer from a serious issue: they alter the image
quality, so the training images no longer match their original labels.
Additionally, although a few data augmentation methods are available for the
NR-IQA task, their ability to enrich dataset diversity is still insufficient.
To address these issues, we propose an effective and general data augmentation
method based on just noticeable difference (JND) noise mixing for the NR-IQA
task, named JNDMix. Specifically, we randomly inject JND noise, imperceptible
to the human visual system (HVS), into the training image without any
adjustment to its label. Extensive experiments demonstrate that JNDMix
significantly improves the performance and data efficiency of various
state-of-the-art NR-IQA models and commonly used baseline models, as well as
their generalization ability. More importantly, JNDMix enables MANIQA to
achieve state-of-the-art performance on LIVEC and KonIQ-10k.
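The abstract's core idea, injecting sub-visibility-threshold noise while keeping the original label, can be sketched in a few lines. Since the abstract does not specify the JND model used, the sketch below substitutes a classic background-luminance masking approximation; `luminance_jnd_map`, its constants, and `jndmix` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def box_filter(img, k=5):
    """Local mean over a k x k neighborhood (pure-NumPy box filter)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def luminance_jnd_map(img):
    """Per-pixel visibility threshold from background-luminance masking.
    A classic simplified JND model, not necessarily the paper's choice.
    img: grayscale float array with values in [0, 255]."""
    bg = box_filter(img, k=5)
    # Thresholds rise sharply in dark regions and grow slowly in bright ones.
    return np.where(
        bg <= 127,
        17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
        3.0 / 128.0 * (bg - 127.0) + 3.0,
    )

def jndmix(img, rng=None):
    """Randomly mix noise bounded by the JND map into a training image.
    The quality label is left untouched: the perturbation stays below
    the assumed visibility threshold of the human visual system."""
    rng = np.random.default_rng() if rng is None else rng
    jnd = luminance_jnd_map(img)
    noise = rng.uniform(-1.0, 1.0, size=img.shape) * jnd
    return np.clip(img + noise, 0.0, 255.0)
```

In a training loop, `jndmix` would be applied on the fly to each sampled image (or per channel of an RGB image) while its mean opinion score passes through unchanged.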
Related papers
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
We propose a novel IQA method called diffusion priors-based IQA (DP-IQA)
We use pre-trained stable diffusion as the backbone, extract multi-level features from the denoising U-Net, and decode them to estimate the image quality score.
We distill the knowledge in the above model into a CNN-based student model, significantly reducing the parameter to enhance applicability.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- Opinion-Unaware Blind Image Quality Assessment using Multi-Scale Deep Feature Statistics [54.08757792080732]
We propose integrating deep features from pre-trained visual models with a statistical analysis model to achieve opinion-unaware BIQA (OU-BIQA)
Our proposed model exhibits superior consistency with human visual perception compared to state-of-the-art BIQA models.
arXiv Detail & Related papers (2024-05-29T06:09:34Z)
- Diffusion Model Based Visual Compensation Guidance and Visual Difference Analysis for No-Reference Image Quality Assessment [82.13830107682232]
We propose a novel class of state-of-the-art (SOTA) generative models capable of modeling intricate relationships.
We devise a new diffusion restoration network that leverages the produced enhanced image and noise-containing images.
Two visual evaluation branches are designed to comprehensively analyze the obtained high-level feature information.
arXiv Detail & Related papers (2024-02-22T09:39:46Z)
- Steerable Conditional Diffusion for Out-of-Distribution Adaptation in Imaging Inverse Problems [78.76955228709241]
We introduce a novel sampling framework called Steerable Conditional Diffusion.
This framework adapts the denoising network specifically to the available measured data.
We achieve substantial enhancements in OOD performance across diverse imaging modalities.
arXiv Detail & Related papers (2023-08-28T08:47:06Z)
- Towards Bridging the Performance Gaps of Joint Energy-based Models [1.933681537640272]
Joint Energy-based Model (JEM) achieves high classification accuracy and image generation quality simultaneously.
We introduce a variety of training techniques to bridge the accuracy gap and the generation quality gap of JEM.
Our SADA-JEM achieves state-of-the-art performances and outperforms JEM in image classification, image generation, calibration, out-of-distribution detection and adversarial robustness by a notable margin.
arXiv Detail & Related papers (2022-09-16T14:19:48Z)
- MSTRIQ: No Reference Image Quality Assessment Based on Swin Transformer with Multi-Stage Fusion [8.338999282303755]
We propose a novel algorithm based on the Swin Transformer.
It aggregates information from both local and global features to better predict the quality.
It ranks 2nd in the no-reference track of NTIRE 2022 Perceptual Image Quality Assessment Challenge.
arXiv Detail & Related papers (2022-05-20T11:34:35Z)
- Learning Transformer Features for Image Quality Assessment [53.51379676690971]
We propose a unified IQA framework that utilizes CNN backbone and transformer encoder to extract features.
The proposed framework is compatible with both FR and NR modes and allows for a joint training scheme.
arXiv Detail & Related papers (2021-12-01T13:23:00Z)
- Continual Learning for Blind Image Quality Assessment [80.55119990128419]
Blind image quality assessment (BIQA) models fail to continually adapt to subpopulation shift.
Recent work suggests training BIQA methods on the combination of all available human-rated IQA datasets.
We formulate continual learning for BIQA, where a model learns continually from a stream of IQA datasets.
arXiv Detail & Related papers (2021-02-19T03:07:01Z)
- No-Reference Image Quality Assessment via Feature Fusion and Multi-Task Learning [29.19484863898778]
Blind or no-reference image quality assessment (NR-IQA) is a fundamental yet unsolved and challenging problem.
We propose a simple and yet effective general-purpose no-reference (NR) image quality assessment framework based on multi-task learning.
Our model employs distortion types as well as subjective human scores to predict image quality.
arXiv Detail & Related papers (2020-06-06T05:04:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this list (including all information) and is not responsible for any consequences of its use.