Backdoor Attacks against No-Reference Image Quality Assessment Models via a Scalable Trigger
- URL: http://arxiv.org/abs/2412.07277v2
- Date: Fri, 10 Jan 2025 12:17:00 GMT
- Title: Backdoor Attacks against No-Reference Image Quality Assessment Models via a Scalable Trigger
- Authors: Yi Yu, Song Xia, Xun Lin, Wenhan Yang, Shijian Lu, Yap-Peng Tan, Alex Kot
- Abstract summary: No-Reference Image Quality Assessment (NR-IQA) plays a critical role in evaluating and optimizing computer vision systems.
Recent research indicates that NR-IQA models are susceptible to adversarial attacks.
We present a novel poisoning-based backdoor attack against NR-IQA (BAIQA).
- Score: 76.36315347198195
- Abstract: No-Reference Image Quality Assessment (NR-IQA), responsible for assessing the quality of a single input image without using any reference, plays a critical role in evaluating and optimizing computer vision systems, e.g., low-light enhancement. Recent research indicates that NR-IQA models are susceptible to adversarial attacks, which can significantly alter predicted scores with visually imperceptible perturbations. Despite revealing vulnerabilities, these attack methods have limitations, including high computational demands, untargeted manipulation, limited practical utility in white-box scenarios, and reduced effectiveness in black-box scenarios. To address these challenges, we shift our focus to another significant threat and present a novel poisoning-based backdoor attack against NR-IQA (BAIQA), allowing the attacker to manipulate the IQA model's output to any desired target value by simply adjusting a scaling coefficient $\alpha$ for the trigger. We propose to inject the trigger in the discrete cosine transform (DCT) domain to improve the local invariance of the trigger for countering trigger diminishment in NR-IQA models due to widely adopted data augmentations. Furthermore, the universal adversarial perturbations (UAP) in the DCT space are designed as the trigger, to increase IQA model susceptibility to manipulation and improve attack effectiveness. In addition to the heuristic method for poison-label BAIQA (P-BAIQA), we explore the design of clean-label BAIQA (C-BAIQA), focusing on $\alpha$ sampling and image data refinement, driven by theoretical insights we reveal. Extensive experiments on diverse datasets and various NR-IQA models demonstrate the effectiveness of our attacks. Code can be found at https://github.com/yuyi-sd/BAIQA.
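The core mechanism described in the abstract, injecting a trigger in the DCT domain and scaling it by a coefficient $\alpha$ to steer the predicted score, can be sketched in a few lines. This is a minimal illustrative version: the function names, the 8x8 grayscale input, and the toy trigger are assumptions, and the paper's actual trigger is a universal adversarial perturbation learned in DCT space, not a fixed pattern.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix: C @ C.T == I,
    # so the inverse transform is simply C.T @ Y @ C.
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

def inject_trigger(image, trigger, alpha):
    # Forward 2D DCT of a square grayscale image, add the
    # alpha-scaled trigger in the coefficient domain, invert.
    n = image.shape[0]
    C = dct_matrix(n)
    coeffs = C @ image @ C.T
    coeffs += alpha * trigger
    return C.T @ coeffs @ C
```

Because the DCT is linear, the pixel-domain perturbation grows proportionally with $\alpha$, which is what lets the attacker dial the target quality score by adjusting a single scalar at test time.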
Related papers
- Cross-Modal Transferable Image-to-Video Attack on Video Quality Metrics [3.7855740990304736]
Modern image and video quality assessment (IQA/VQA) metrics are vulnerable to adversarial attacks.
Most of the attacks studied in the literature are white-box attacks, while black-box attacks in the context of VQA have received less attention.
We propose a cross-modal attack method, IC2VQA, aimed at exploring the vulnerabilities of modern VQA models.
arXiv Detail & Related papers (2025-01-14T20:12:09Z)
- DP-IQA: Utilizing Diffusion Prior for Blind Image Quality Assessment in the Wild [54.139923409101044]
Blind image quality assessment (IQA) in the wild presents significant challenges.
Given the difficulty in collecting large-scale training data, leveraging limited data to develop a model with strong generalization remains an open problem.
Motivated by the robust image perception capabilities of pre-trained text-to-image (T2I) diffusion models, we propose a novel IQA method, diffusion priors-based IQA.
arXiv Detail & Related papers (2024-05-30T12:32:35Z)
- Causal Perception Inspired Representation Learning for Trustworthy Image Quality Assessment [2.290956583394892]
We propose to build a trustworthy IQA model via Causal Perception inspired Representation Learning (CPRL).
CPRL serves as the causation of the subjective quality label and is invariant to imperceptible adversarial perturbations.
Experiments on four benchmark databases show that the proposed CPRL method outperforms many state-of-the-art adversarial defense methods.
arXiv Detail & Related papers (2024-04-30T13:55:30Z)
- Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization [18.95463890154886]
No-Reference Image Quality Assessment (NR-IQA) models play a crucial role in the media industry.
These models are found to be vulnerable to adversarial attacks, which introduce imperceptible perturbations to input images.
We propose a defense method to improve the stability in predicted scores when attacked by small perturbations.
arXiv Detail & Related papers (2024-03-18T01:11:53Z)
- When No-Reference Image Quality Models Meet MAP Estimation in Diffusion Latents [92.45867913876691]
No-reference image quality assessment (NR-IQA) models can effectively quantify perceived image quality.
We show that NR-IQA models can be plugged into the maximum a posteriori (MAP) estimation framework for image enhancement.
arXiv Detail & Related papers (2024-03-11T03:35:41Z)
- Black-box Adversarial Attacks Against Image Quality Assessment Models [16.11900427447442]
The goal of No-Reference Image Quality Assessment (NR-IQA) is to predict the perceptual quality of an image in line with its subjective evaluation.
This paper makes the first attempt to explore the black-box adversarial attacks on NR-IQA models.
arXiv Detail & Related papers (2024-02-27T14:16:39Z)
- Exploring Vulnerabilities of No-Reference Image Quality Assessment Models: A Query-Based Black-Box Method [15.266845355276317]
No-Reference Image Quality Assessment aims to predict image quality scores consistent with human perception.
Current attack methods on NR-IQA models rely heavily on the gradient of the NR-IQA model.
We propose a pioneering query-based black box attack against NR-IQA methods.
arXiv Detail & Related papers (2024-01-10T15:30:19Z)
- Uncertainty-Aware Blind Image Quality Assessment in the Laboratory and Wild [98.48284827503409]
We develop a unified BIQA model and an approach to training it for both synthetic and realistic distortions.
We employ the fidelity loss to optimize a deep neural network for BIQA over a large number of such image pairs.
Experiments on six IQA databases show the promise of the learned method in blindly assessing image quality in the laboratory and wild.
arXiv Detail & Related papers (2020-05-28T13:35:23Z)
- MetaIQA: Deep Meta-learning for No-Reference Image Quality Assessment [73.55944459902041]
This paper presents a no-reference IQA metric based on deep meta-learning.
We first collect a number of NR-IQA tasks for different distortions.
Then meta-learning is adopted to learn the prior knowledge shared by diversified distortions.
Extensive experiments demonstrate that the proposed metric outperforms state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-04-11T23:36:36Z)
- Adversarial Attack on Deep Product Quantization Network for Image Retrieval [74.85736968193879]
Deep product quantization network (DPQN) has recently received much attention in fast image retrieval tasks.
Recent studies show that deep neural networks (DNNs) are vulnerable to input with small and maliciously designed perturbations.
We propose product quantization adversarial generation (PQ-AG) to generate adversarial examples for product quantization based retrieval systems.
arXiv Detail & Related papers (2020-02-26T09:25:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.