Black-box Adversarial Attacks Against Image Quality Assessment Models
- URL: http://arxiv.org/abs/2402.17533v2
- Date: Wed, 28 Feb 2024 13:44:48 GMT
- Title: Black-box Adversarial Attacks Against Image Quality Assessment Models
- Authors: Yu Ran, Ao-Xiang Zhang, Mingjie Li, Weixuan Tang, Yuan-Gen Wang
- Abstract summary: The goal of No-Reference Image Quality Assessment (NR-IQA) is to predict the perceptual quality of an image in line with its subjective evaluation.
This paper makes the first attempt to explore the black-box adversarial attacks on NR-IQA models.
- Score: 16.11900427447442
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of No-Reference Image Quality Assessment (NR-IQA) is to predict the
perceptual quality of an image in line with its subjective evaluation. To put
the NR-IQA models into practice, it is essential to study their potential
loopholes for model refinement. This paper makes the first attempt to explore
the black-box adversarial attacks on NR-IQA models. Specifically, we first
formulate the attack problem as maximizing the deviation between the estimated
quality scores of original and perturbed images, while restricting the
perturbed image distortions for visual quality preservation. Under such
formulation, we then design a Bi-directional loss function to mislead the
estimated quality scores of adversarial examples towards an opposite direction
with maximum deviation. On this basis, we finally develop an efficient and
effective black-box attack method against NR-IQA models. Extensive experiments
reveal that all the evaluated NR-IQA models are vulnerable to the proposed
attack method. Moreover, the generated perturbations are not transferable, which
makes them useful for investigating the distinctive behaviors of disparate IQA models.
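To make the attack formulation concrete, below is a minimal illustrative sketch of a query-based black-box attack driven by a bi-directional objective: maximize the deviation of the predicted quality score while keeping the perturbation within a distortion budget. It assumes a simple random-search optimizer, an L-infinity budget, and quality scores normalized to [0, 1]; the paper's actual loss and search procedure may differ, and the names `score_fn`, `eps`, and `queries` are hypothetical, introduced only for illustration.
```python
import numpy as np

def bidirectional_attack(score_fn, image, eps=4 / 255, queries=1000, rng=None):
    """Illustrative query-based black-box attack on an NR-IQA model.

    score_fn : callable returning the model's quality score (treated as a black box).
    image    : float array in [0, 1], shape (H, W, C).
    eps      : L-infinity distortion budget, meant to preserve visual quality.
    """
    rng = rng or np.random.default_rng(0)
    base_score = score_fn(image)
    # Bi-directional idea (sketch): push the score in the opposite direction,
    # i.e. lower it for high-quality inputs and raise it for low-quality ones.
    # The 0.5 threshold assumes scores normalized to [0, 1].
    direction = -1.0 if base_score >= 0.5 else 1.0

    delta = np.zeros_like(image)
    best_gain = 0.0
    for _ in range(queries):
        # Propose a small random modification to the current perturbation,
        # projected back into the L-infinity ball of radius eps.
        candidate = np.clip(delta + rng.uniform(-eps / 8, eps / 8, image.shape),
                            -eps, eps)
        perturbed = np.clip(image + candidate, 0.0, 1.0)
        gain = direction * (score_fn(perturbed) - base_score)
        if gain > best_gain:  # keep proposals that enlarge the score deviation
            delta, best_gain = candidate, gain
    return np.clip(image + delta, 0.0, 1.0), best_gain
```
Here the deviation direction is fixed once from the clean score, and only proposals that increase the score deviation while staying inside the distortion budget are retained; the number of model queries is the main attack cost.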
Related papers
- Backdoor Attacks against No-Reference Image Quality Assessment Models via a Scalable Trigger [76.36315347198195]
No-Reference Image Quality Assessment (NR-IQA) plays a critical role in evaluating and optimizing computer vision systems.
Recent research indicates that NR-IQA models are susceptible to adversarial attacks.
We present a novel poisoning-based backdoor attack against NR-IQA (BAIQA).
arXiv Detail & Related papers (2024-12-10T08:07:19Z)
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method, diffusion priors-based IQA.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization [18.95463890154886]
No-Reference Image Quality Assessment (NR-IQA) models play a crucial role in the media industry.
These models are found to be vulnerable to adversarial attacks, which introduce imperceptible perturbations to input images.
We propose a defense method that improves the stability of predicted scores under small adversarial perturbations.
arXiv Detail & Related papers (2024-03-18T01:11:53Z)
- When No-Reference Image Quality Models Meet MAP Estimation in Diffusion Latents [92.45867913876691]
No-reference image quality assessment (NR-IQA) models can effectively quantify perceived image quality.
We show that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement.
arXiv Detail & Related papers (2024-03-11T03:35:41Z)
- Exploring Vulnerabilities of No-Reference Image Quality Assessment Models: A Query-Based Black-Box Method [15.266845355276317]
No-Reference Image Quality Assessment aims to predict image quality scores consistent with human perception.
Current attack methods against NR-IQA rely heavily on the gradient of the NR-IQA model.
We propose a pioneering query-based black-box attack against NR-IQA methods.
arXiv Detail & Related papers (2024-01-10T15:30:19Z)
- Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop [113.75573175709573]
We make one of the first attempts to examine the perceptual robustness of NR-IQA models.
We test one knowledge-driven and three data-driven NR-IQA methods under four full-reference IQA models.
We find that all four NR-IQA models are vulnerable to the proposed perceptual attack.
arXiv Detail & Related papers (2022-10-03T13:47:16Z)
- Conformer and Blind Noisy Students for Improved Image Quality Assessment [80.57006406834466]
Learning-based approaches for perceptual image quality assessment (IQA) usually require both the distorted and reference image for measuring the perceptual quality accurately.
In this work, we explore the performance of transformer-based full-reference IQA models.
We also propose a method for IQA based on semi-supervised knowledge distillation from full-reference teacher models into blind student models.
arXiv Detail & Related papers (2022-04-27T10:21:08Z)
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z)