Universal Perturbation Attack on Differentiable No-Reference Image- and
Video-Quality Metrics
- URL: http://arxiv.org/abs/2211.00366v1
- Date: Tue, 1 Nov 2022 10:28:13 GMT
- Title: Universal Perturbation Attack on Differentiable No-Reference Image- and
Video-Quality Metrics
- Authors: Ekaterina Shumitskaya, Anastasia Antsiferova, Dmitriy Vatolin
- Abstract summary: Some attacks can deceive image- and video-quality metrics.
We propose a new method to attack differentiable no-reference quality metrics through universal perturbation.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Universal adversarial perturbation attacks are widely used to analyze image
classifiers that employ convolutional neural networks. Nowadays, some attacks
can deceive image- and video-quality metrics, so analyzing the stability of
these metrics is important. Indeed, if an attack can confuse a metric, an
attacker can easily inflate quality scores. If developers of image- and
video-processing algorithms can boost their scores through such metric-targeted
processing, algorithm comparisons are no longer fair. Inspired by the idea of universal adversarial
perturbation for classifiers, we suggest a new method to attack differentiable
no-reference quality metrics through universal perturbation. We applied this
method to seven no-reference image- and video-quality metrics (PaQ-2-PiQ,
Linearity, VSFA, MDTVSFA, KonCept512, NIMA, and SPAQ). For each one, we trained
a universal perturbation that increases the respective scores. We also propose
a method for assessing metric stability and identify the metrics that are the
most vulnerable and the most resistant to our attack. The existence of
successful universal perturbations appears to diminish the metric's ability to
provide reliable scores. We therefore recommend our proposed method as an
additional verification of metric reliability to complement traditional
subjective tests and benchmarks.
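The attack the abstract describes can be sketched as a gradient-ascent loop: a single additive perturbation is trained to raise a differentiable no-reference metric's score across a whole image set. Below is a minimal, illustrative PyTorch sketch; the 224x224 RGB input size, the L-infinity bound `eps`, the optimizer, and the `metric` callable are all assumptions for the example, not the paper's exact setup.

```python
import torch

def train_universal_perturbation(metric, loader, eps=0.1, lr=1e-3, epochs=5):
    # One additive perturbation shared by every input (assumed 224x224 RGB).
    delta = torch.zeros(1, 3, 224, 224, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for images in loader:
            # Score of the perturbed batch under the differentiable metric.
            score = metric((images + delta).clamp(0.0, 1.0)).mean()
            opt.zero_grad()
            (-score).backward()  # ascend on the quality score
            opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # keep the perturbation small
    return delta.detach()
```

Comparing the metric's average score before and after adding the trained `delta` gives a rough stability measure in the spirit of the paper: the larger the achievable gain from one shared perturbation, the more vulnerable the metric.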
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defense mainly focuses on the known attacks, but the adversarial robustness to the unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID)
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- IOI: Invisible One-Iteration Adversarial Attack on No-Reference Image- and Video-Quality Metrics [4.135467749401761]
No-reference image- and video-quality metrics are widely used in video processing benchmarks.
This paper introduces an Invisible One-Iteration (IOI) adversarial attack on no-reference image- and video-quality metrics.
arXiv Detail & Related papers (2024-03-09T16:33:30Z)
- Cobra Effect in Reference-Free Image Captioning Metrics [58.438648377314436]
A proliferation of reference-free methods, leveraging visual-language pre-trained models (VLMs), has emerged.
In this paper, we study if there are any deficiencies in reference-free metrics.
We employ GPT-4V as an evaluative tool to assess generated sentences, and the results reveal that our approach achieves state-of-the-art (SOTA) performance.
arXiv Detail & Related papers (2024-02-18T12:36:23Z)
- Comparing the Robustness of Modern No-Reference Image- and Video-Quality Metrics to Adversarial Attacks [43.85564498709518]
This paper analyses modern metrics' robustness to different adversarial attacks.
Some metrics showed high resistance to adversarial attacks, making them safer to use in benchmarks than more vulnerable metrics.
arXiv Detail & Related papers (2023-10-10T19:21:41Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
The Adversarial Converging Time Score (ACTS) measures convergence time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Fast Adversarial CNN-based Perturbation Attack on No-Reference Image- and Video-Quality Metrics [0.0]
We propose a fast adversarial attack on no-reference quality metrics.
The proposed attack can be exploited as a preprocessing step in real-time video processing and compression algorithms.
This research can yield insights to further aid the design of stable neural-network-based no-reference quality metrics.
arXiv Detail & Related papers (2023-05-24T20:18:21Z)
- Attacking Perceptual Similarity Metrics [5.326626090397465]
We systematically examine the robustness of similarity metrics to imperceptible adversarial perturbations.
We first show that all metrics in our study are susceptible to perturbations generated via common adversarial attacks.
Next, we attack the widely adopted LPIPS metric using spatial-transformation-based adversarial perturbations.
arXiv Detail & Related papers (2023-05-15T17:55:04Z)
- Enhancing the Self-Universality for Transferable Targeted Attacks [88.6081640779354]
Our new attack method is proposed based on the observation that highly universal adversarial perturbations tend to be more transferable for targeted attacks.
Instead of optimizing the perturbation over different images, optimizing over different regions of a single image to achieve self-universality removes the need for extra data.
With the feature similarity loss, our method makes the features of adversarial perturbations more dominant than those of benign images.
arXiv Detail & Related papers (2022-09-08T11:21:26Z)
- Evaluation of Neural Networks Defenses and Attacks using NDCG and Reciprocal Rank Metrics [6.6389732792316]
We present two metrics which are specifically designed to measure the effect of attacks, or the recovery effect of defenses, on the output of neural networks in classification tasks.
Inspired by the normalized discounted cumulative gain and the reciprocal rank metrics used in information retrieval literature, we treat the neural network predictions as ranked lists of results.
Compared to the common classification metrics, our proposed metrics demonstrate superior informativeness and distinctiveness.
arXiv Detail & Related papers (2022-01-10T12:54:45Z)
- Universal Adversarial Training with Class-Wise Perturbations [78.05383266222285]
Adversarial training is the most widely used method for defending against adversarial attacks.
In this work, we find that a UAP does not attack all classes equally.
We improve the SOTA UAT by proposing to utilize class-wise UAPs during adversarial training.
arXiv Detail & Related papers (2021-04-07T09:05:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents (including all listed papers) and is not responsible for any consequences of its use.