Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization
- URL: http://arxiv.org/abs/2403.11397v1
- Date: Mon, 18 Mar 2024 01:11:53 GMT
- Title: Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization
- Authors: Yujia Liu, Chenxi Yang, Dingquan Li, Jianhao Ding, Tingting Jiang
- Abstract summary: No-Reference Image Quality Assessment (NR-IQA) models play a crucial role in the media industry.
These models are found to be vulnerable to adversarial attacks, which introduce imperceptible perturbations to input images.
We propose a defense method to improve the stability in predicted scores when attacked by small perturbations.
- Score: 18.95463890154886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The task of No-Reference Image Quality Assessment (NR-IQA) is to estimate the quality score of an input image without additional information. NR-IQA models play a crucial role in the media industry, aiding in performance evaluation and optimization guidance. However, these models are found to be vulnerable to adversarial attacks, which introduce imperceptible perturbations to input images, resulting in significant changes in predicted scores. In this paper, we propose a defense method to improve the stability in predicted scores when attacked by small perturbations, thus enhancing the adversarial robustness of NR-IQA models. To be specific, we present theoretical evidence showing that the magnitude of score changes is related to the $\ell_1$ norm of the model's gradient with respect to the input image. Building upon this theoretical foundation, we propose a norm regularization training strategy aimed at reducing the $\ell_1$ norm of the gradient, thereby boosting the robustness of NR-IQA models. Experiments conducted on four NR-IQA baseline models demonstrate the effectiveness of our strategy in reducing score changes in the presence of adversarial attacks. To the best of our knowledge, this work marks the first attempt to defend against adversarial attacks on NR-IQA models. Our study offers valuable insights into the adversarial robustness of NR-IQA models and provides a foundation for future research in this area.
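The abstract's core observation, that the score change under a small perturbation is controlled by the $\ell_1$ norm of the input gradient, follows from Hölder's inequality: $|f(x+\delta) - f(x)| \lesssim \|\nabla_x f(x)\|_1 \cdot \|\delta\|_\infty$. A minimal numpy sketch, using a toy linear scorer rather than any of the paper's actual NR-IQA models, and a made-up regularization weight `lam`:

```python
import numpy as np

# Toy stand-in for an NR-IQA scorer: a linear model score(x) = w.x + b.
# Its input gradient is just w, so the quantity the paper regularizes,
# ||grad_x f(x)||_1, reduces to ||w||_1 here.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.5

def score(x):
    return float(w @ x + b)

x = rng.normal(size=8)
eps = 0.01                       # attacker's l_inf budget
delta = eps * np.sign(w)         # worst-case l_inf-bounded perturbation

change = abs(score(x + delta) - score(x))
bound = eps * np.abs(w).sum()    # Hoelder: |f(x+d)-f(x)| <= ||grad||_1 * ||d||_inf
assert change <= bound + 1e-9    # for a linear model the bound is tight

# A training objective in the spirit of the paper: fit the target score
# while shrinking the l1 gradient norm (lam is a hypothetical weight).
lam, target = 0.1, 3.0
loss = (score(x) - target) ** 2 + lam * np.abs(w).sum()
print(round(change, 6), round(bound, 6), round(loss, 4))
```

Driving `||w||_1` down directly tightens the worst-case score change for a fixed perturbation budget, which is the mechanism the regularizer exploits.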
Related papers
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method, diffusion priors-based IQA.
arXiv Detail & Related papers (2024-05-30T12:32:35Z) - Causal Perception Inspired Representation Learning for Trustworthy Image Quality Assessment [2.290956583394892]
We propose to build a trustworthy IQA model via Causal Perception inspired Representation Learning (CPRL).
The learned representation serves as the cause of the subjective quality label and is invariant to imperceptible adversarial perturbations.
Experiments on four benchmark databases show that the proposed CPRL method outperforms many state-of-the-art adversarial defense methods.
arXiv Detail & Related papers (2024-04-30T13:55:30Z) - Beyond Score Changes: Adversarial Attack on No-Reference Image Quality Assessment from Two Perspectives [15.575900555433863]
We introduce a new framework of correlation-error-based attacks that disturb both the ranking correlation within an image set and the predicted scores of individual images.
Our research focuses on ranking-related correlation metrics like Spearman's Rank-Order Correlation Coefficient (SROCC) and prediction-error-related metrics like Mean Squared Error (MSE).
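As a concrete illustration of the two metric families this summary names, here is a small sketch with hypothetical scores (not data from the paper); SROCC is the Pearson correlation of the ranks:

```python
import numpy as np

def srocc(a, b):
    # Spearman's rank-order correlation: Pearson correlation of the
    # ranks. argsort-of-argsort gives ranks, assuming no tied values.
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean(); rb -= rb.mean()
    return float((ra @ rb) / np.sqrt((ra @ ra) * (rb @ rb)))

mos   = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # subjective quality scores
clean = np.array([1.1, 2.2, 2.9, 4.3, 4.8])  # model scores on clean images
adv   = np.array([4.9, 1.2, 3.1, 1.8, 2.5])  # scores after a hypothetical attack

mse = float(np.mean((clean - adv) ** 2))     # per-image score error
print(srocc(mos, clean))   # 1.0: ranking preserved on clean images
print(srocc(mos, adv))     # degraded ranking correlation under attack
print(mse)
```

An attack can thus be judged on two axes: how badly it scrambles the ranking of a whole image set (SROCC) and how far it pushes individual scores (MSE).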
arXiv Detail & Related papers (2024-04-20T05:24:06Z) - Comparison of No-Reference Image Quality Models via MAP Estimation in Diffusion Latents [99.19391983670569]
We show that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement.
Different NR-IQA models are likely to induce different enhanced images, which are ultimately subject to psychophysical testing.
This leads to a new computational method for comparing NR-IQA models within the analysis-by-synthesis framework.
arXiv Detail & Related papers (2024-03-11T03:35:41Z) - Black-box Adversarial Attacks Against Image Quality Assessment Models [16.11900427447442]
The goal of No-Reference Image Quality Assessment (NR-IQA) is to predict the perceptual quality of an image in line with its subjective evaluation.
This paper makes the first attempt to explore the black-box adversarial attacks on NR-IQA models.
arXiv Detail & Related papers (2024-02-27T14:16:39Z) - Exploring Vulnerabilities of No-Reference Image Quality Assessment Models: A Query-Based Black-Box Method [15.266845355276317]
No-Reference Image Quality Assessment aims to predict image quality scores consistent with human perception.
Current attack methods on NR-IQA heavily rely on the gradient of the NR-IQA model.
We propose a pioneering query-based black-box attack against NR-IQA methods.
arXiv Detail & Related papers (2024-01-10T15:30:19Z) - Perceptual Attacks of No-Reference Image Quality Models with Human-in-the-Loop [113.75573175709573]
We make one of the first attempts to examine the perceptual robustness of NR-IQA models.
We test one knowledge-driven and three data-driven NR-IQA methods under four full-reference IQA models.
We find that all four NR-IQA models are vulnerable to the proposed perceptual attack.
arXiv Detail & Related papers (2022-10-03T13:47:16Z) - Policy Smoothing for Provably Robust Reinforcement Learning [109.90239627115336]
We study the provable robustness of reinforcement learning against norm-bounded adversarial perturbations of the inputs.
We generate certificates that guarantee that the total reward obtained by the smoothed policy will not fall below a certain threshold under any norm-bounded adversarial perturbation of the input.
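The smoothing idea underlying such certificates can be sketched in a few lines: averaging a function's output over Gaussian input noise damps its sensitivity to small input shifts. The scalar function below is a hypothetical stand-in for a brittle scorer or policy value, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return np.sin(3.0 * x)  # hypothetical brittle scalar function

def smoothed(x, sigma=0.5, n=50000):
    # Monte Carlo estimate of E[f(x + sigma * Z)], Z ~ N(0, 1).
    return float(np.mean(f(x + sigma * rng.normal(size=n))))

x = 0.7
raw_diff = abs(f(x + 0.1) - f(x))                 # raw sensitivity to a shift
sm_diff = abs(smoothed(x + 0.1) - smoothed(x))    # sensitivity after smoothing
print(round(raw_diff, 3), round(sm_diff, 3))
```

The smoothed output changes far less than the raw one for the same input shift, which is exactly the kind of bounded-sensitivity property the certificates formalize.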
arXiv Detail & Related papers (2021-06-21T21:42:08Z) - Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z) - MetaIQA: Deep Meta-learning for No-Reference Image Quality Assessment [73.55944459902041]
This paper presents a no-reference IQA metric based on deep meta-learning.
We first collect a number of NR-IQA tasks for different distortions.
Then meta-learning is adopted to learn the prior knowledge shared by diversified distortions.
Extensive experiments demonstrate that the proposed metric outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-04-11T23:36:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.