Perceptual Attacks of No-Reference Image Quality Models with
Human-in-the-Loop
- URL: http://arxiv.org/abs/2210.00933v1
- Date: Mon, 3 Oct 2022 13:47:16 GMT
- Title: Perceptual Attacks of No-Reference Image Quality Models with
Human-in-the-Loop
- Authors: Weixia Zhang and Dingquan Li and Xiongkuo Min and Guangtao Zhai and
Guodong Guo and Xiaokang Yang and Kede Ma
- Abstract summary: We make one of the first attempts to examine the perceptual robustness of NR-IQA models.
We test one knowledge-driven and three data-driven NR-IQA methods under four full-reference IQA models.
We find that all four NR-IQA models are vulnerable to the proposed perceptual attack.
- Score: 113.75573175709573
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: No-reference image quality assessment (NR-IQA) aims to quantify how humans
perceive visual distortions of digital images without access to their
undistorted references. NR-IQA models are extensively studied in computational
vision, and are widely used for performance evaluation and perceptual
optimization of man-made vision systems. Here we make one of the first attempts
to examine the perceptual robustness of NR-IQA models. Under a Lagrangian
formulation, we identify insightful connections of the proposed perceptual
attack to previous beautiful ideas in computer vision and machine learning. We
test one knowledge-driven and three data-driven NR-IQA methods under four
full-reference IQA models (as approximations to human perception of
just-noticeable differences). Through carefully designed psychophysical
experiments, we find that all four NR-IQA models are vulnerable to the proposed
perceptual attack. More interestingly, we observe that the generated
counterexamples are not transferable, manifesting themselves as distinct design
flaws of respective NR-IQA methods.
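The Lagrangian formulation above can be pictured with a short sketch. This is a minimal illustration under assumptions, not the authors' released code: `nr_iqa` stands for an assumed differentiable NR-IQA model under attack, and `fr_iqa` for any of the four full-reference metrics used as proxies for human just-noticeable differences.

```python
# Sketch of a perceptual attack on an NR-IQA model (assumed interfaces).
# nr_iqa(x)        -> scalar quality prediction (differentiable, hypothetical)
# fr_iqa(x, x_ref) -> perceptual distance to the reference (hypothetical)
import torch

def perceptual_attack(x_ref, nr_iqa, fr_iqa, lam=10.0, steps=200, lr=1e-3):
    """Find x close to x_ref (per fr_iqa) whose predicted quality differs most."""
    x = x_ref.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # Lagrangian: push the quality prediction away from the original
        # while paying a perceptual-distance penalty weighted by lam.
        score_change = (nr_iqa(x) - nr_iqa(x_ref).detach()) ** 2
        loss = -score_change + lam * fr_iqa(x, x_ref)
        loss.backward()
        opt.step()
        x.data.clamp_(0.0, 1.0)  # keep a valid image
    return x.detach()
```

Here `lam` plays the role of the Lagrange multiplier, trading off how far the predicted score is pushed against how perceptible the perturbation is under the FR-IQA proxy.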
Related papers
- Sliced Maximal Information Coefficient: A Training-Free Approach for Image Quality Assessment Enhancement [12.628718661568048]
We aim to explore a generalized human visual attention estimation strategy to mimic the process of human quality rating.
In particular, we model human attention generation by measuring the statistical dependency between the degraded image and the reference image.
Experimental results verify that the performance of existing IQA models can be consistently improved when our attention module is incorporated (a toy sketch of the dependency idea follows this entry).
arXiv Detail & Related papers (2024-08-19T11:55:32Z)
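As a rough illustration of the dependency-based attention idea in the entry above (not the paper's sliced maximal information coefficient estimator), a histogram-based mutual-information map between co-located patches can serve as a crude stand-in; everything below is our own simplification.

```python
# Hedged sketch: score the statistical dependency between co-located patches
# of a reference and a degraded grayscale image with histogram-based mutual
# information, yielding a crude attention map.
import numpy as np

def mutual_info(a, b, bins=16):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    joint = joint / (joint.sum() + 1e-12)
    pa, pb = joint.sum(axis=1), joint.sum(axis=0)
    outer = np.outer(pa, pb)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (outer[nz] + 1e-12))).sum())

def attention_map(ref, deg, patch=32):
    # one dependency score per non-overlapping patch; low dependency
    # suggests a heavily distorted, quality-relevant region
    h, w = ref.shape
    att = np.zeros((h // patch, w // patch))
    for i in range(att.shape[0]):
        for j in range(att.shape[1]):
            ys = slice(i * patch, (i + 1) * patch)
            xs = slice(j * patch, (j + 1) * patch)
            att[i, j] = mutual_info(ref[ys, xs], deg[ys, xs])
    return att
```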
- Understanding and Evaluating Human Preferences for AI Generated Images with Instruction Tuning [58.41087653543607]
We first establish a novel Image Quality Assessment (IQA) database for AIGIs, termed AIGCIQA2023+.
This paper presents a MINT-IQA model to evaluate and explain human preferences for AIGIs from Multi-perspectives with INstruction Tuning.
arXiv Detail & Related papers (2024-05-12T17:45:11Z)
- Comparison of No-Reference Image Quality Models via MAP Estimation in Diffusion Latents [99.19391983670569]
We show that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement.
Different NR-IQA models are likely to induce different enhanced images, which are ultimately subject to psychophysical testing.
This leads to a new computational method for comparing NR-IQA models within the analysis-by-synthesis framework.
arXiv Detail & Related papers (2024-03-11T03:35:41Z)
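A minimal sketch of the MAP reading in the entry above: treat the NR-IQA score as a log-likelihood and a generator over a latent space as the image prior, then ascend the posterior in latent space. The differentiable generator `g` and scorer `nr_iqa` are assumed placeholders; the paper's actual diffusion machinery is more involved.

```python
# Hedged sketch of NR-IQA-guided MAP estimation in a latent space.
# g(z)      -> decoded image (assumed differentiable generator)
# nr_iqa(x) -> scalar quality prediction (assumed differentiable)
import torch

def map_enhance(z0, g, nr_iqa, weight=1.0, steps=100, lr=1e-2):
    z = z0.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x = g(z)                           # decode latent to image
        log_prior = -0.5 * (z ** 2).sum()  # standard-normal latent prior
        loss = -(weight * nr_iqa(x) + log_prior)
        loss.backward()
        opt.step()
    return g(z).detach()
```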
- Black-box Adversarial Attacks Against Image Quality Assessment Models [16.11900427447442]
The goal of No-Reference Image Quality Assessment (NR-IQA) is to predict the perceptual quality of an image in line with its subjective evaluation.
This paper makes the first attempt to explore the black-box adversarial attacks on NR-IQA models.
arXiv Detail & Related papers (2024-02-27T14:16:39Z)
- Exploring Vulnerabilities of No-Reference Image Quality Assessment Models: A Query-Based Black-Box Method [15.266845355276317]
No-Reference Image Quality Assessment aims to predict image quality scores consistent with human perception.
Current attack methods of NR-IQA heavily rely on the gradient of the NR-IQA model.
We propose a pioneering query-based black-box attack against NR-IQA methods (a toy version is sketched after this entry).
arXiv Detail & Related papers (2024-01-10T15:30:19Z)
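A toy version of a query-based black-box attack in this spirit, assuming only score-returning access to the model; the paper's search strategy is more sophisticated than the random-direction search below.

```python
# Hedged sketch of a gradient-free, query-based attack: try random sign
# perturbations under an L-infinity budget and keep whichever query moves
# the predicted score the most. `score(x)` is an assumed black-box callable.
import numpy as np

def query_attack(x, score, eps=2/255, step=0.5/255, queries=1000, rng=None):
    rng = rng or np.random.default_rng(0)
    x_adv, base = x.copy(), score(x)
    best = 0.0
    for _ in range(queries):
        cand = x_adv + step * rng.choice([-1.0, 1.0], size=x.shape)
        cand = np.clip(cand, x - eps, x + eps)  # stay within the budget
        cand = np.clip(cand, 0.0, 1.0)          # keep a valid image
        gain = abs(score(cand) - base)
        if gain > best:
            best, x_adv = gain, cand
    return x_adv
```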
- Enhancing image quality prediction with self-supervised visual masking [20.190853812320395]
Full-reference image quality metrics (FR-IQMs) aim to measure the visual differences between a pair of reference and distorted images.
We propose to predict a visual masking model that modulates reference and distorted images in a way that penalizes the visual errors based on their visibility.
Our approach results in enhanced FR-IQM metrics that are more in line with human prediction both visually and quantitatively.
arXiv Detail & Related papers (2023-05-31T13:48:51Z)
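The visibility-masking idea in the entry above can be illustrated with a hand-crafted proxy: weight the per-pixel error of an FR metric by a visibility term so that errors hidden by strong local activity contribute less. The paper learns this mask self-supervised; the local-contrast heuristic below is purely our assumption.

```python
# Hedged sketch of visibility-weighted full-reference error pooling.
import numpy as np
from scipy.ndimage import uniform_filter

def masked_error(ref, dist, win=7, k=0.1):
    err = (ref - dist) ** 2
    # crude contrast-masking proxy: errors in high-variance regions
    # are less visible, so they are down-weighted
    local_mean = uniform_filter(ref, win)
    local_var = uniform_filter(ref ** 2, win) - local_mean ** 2
    visibility = 1.0 / (1.0 + k * np.sqrt(np.maximum(local_var, 0.0)))
    return float((visibility * err).mean())  # lower = better quality
```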
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
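A minimal sketch of the distillation idea in the entry above: on unlabeled (reference, distorted) pairs, the full-reference teacher's score becomes a regression target for a blind student that only sees the distorted image. `teacher` and `student` are assumed placeholder modules, not the paper's implementation.

```python
# Hedged sketch of one semi-supervised distillation step.
import torch
import torch.nn.functional as F

def distill_step(student, teacher, ref, dist, optimizer):
    with torch.no_grad():
        target = teacher(ref, dist)  # full-reference pseudo-label
    pred = student(dist)             # blind (no-reference) prediction
    loss = F.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```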
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach of training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
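The fidelity loss over image pairs mentioned above is commonly instantiated under a Thurstone-style pairwise model; a sketch follows, where the per-image quality mean and variance heads are assumptions consistent with common pairwise BIQA practice.

```python
# Hedged sketch of the fidelity loss for one image pair.
import torch

def fidelity_loss(mu_x, var_x, mu_y, var_y, p):
    """mu/var: predicted quality mean and variance for images x and y;
    p: ground-truth probability that x is perceived better than y."""
    normal = torch.distributions.Normal(0.0, 1.0)
    # Thurstone model: probability that x beats y
    p_hat = normal.cdf((mu_x - mu_y) / torch.sqrt(var_x + var_y + 1e-8))
    eps = 1e-8
    # fidelity between the two Bernoulli distributions (1 = disjoint, 0 = equal)
    return 1.0 - torch.sqrt(p * p_hat + eps) \
               - torch.sqrt((1.0 - p) * (1.0 - p_hat) + eps)
```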
- MetaIQA: Deep Meta-learning for No-Reference Image Quality Assessment [73.55944459902041]
This paper presents a no-reference IQA metric based on deep meta-learning.
We first collect a number of NR-IQA tasks for different distortions.
Then meta-learning is adopted to learn the prior knowledge shared by diversified distortions.
Extensive experiments demonstrate that the proposed metric outperforms state-of-the-art methods by a large margin (a first-order sketch of the meta-learning loop follows this entry).
arXiv Detail & Related papers (2020-04-11T23:36:36Z)
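A first-order sketch of meta-learning over NR-IQA tasks, one task per distortion type: adapt a copy of the shared weights on each task, then move the shared initialization toward the adapted weights. This is a Reptile-style approximation for illustration, not necessarily the paper's exact bi-level scheme; `model` and the task loaders are assumed.

```python
# Hedged sketch of one meta-update over distortion-specific NR-IQA tasks.
import torch
import torch.nn.functional as F

def meta_update(model, tasks, inner_lr=1e-4, inner_steps=5, meta_lr=0.1):
    base = {n: p.detach().clone() for n, p in model.named_parameters()}
    delta = {n: torch.zeros_like(p) for n, p in base.items()}
    for images, mos in tasks:  # each task: images + mean opinion scores
        # reset to the shared initialization, then adapt on this task
        with torch.no_grad():
            for n, p in model.named_parameters():
                p.copy_(base[n])
        opt = torch.optim.SGD(model.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            opt.zero_grad()
            F.mse_loss(model(images).squeeze(), mos).backward()
            opt.step()
        for n, p in model.named_parameters():
            delta[n] += (p.detach() - base[n]) / len(tasks)
    # move the shared prior toward the average task-adapted weights
    with torch.no_grad():
        for n, p in model.named_parameters():
            p.copy_(base[n] + meta_lr * delta[n])
```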