Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image Quality Metrics
- URL: http://arxiv.org/abs/2408.01541v1
- Date: Fri, 2 Aug 2024 19:02:49 GMT
- Title: Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image Quality Metrics
- Authors: Alexander Gushchin, Khaled Abud, Georgii Bychkov, Ekaterina Shumitskaya, Anna Chistyakova, Sergey Lavrushkin, Bader Rasheed, Kirill Malyshev, Dmitriy Vatolin, Anastasia Antsiferova
- Abstract summary: This paper presents a comprehensive benchmarking study of various defense mechanisms in response to the rise in adversarial attacks on IQA.
We evaluate 25 defense strategies, including adversarial purification, adversarial training, and certified robustness methods.
We analyze the differences between defenses and their applicability to IQA tasks, considering that they should preserve IQA scores and image quality.
- Score: 35.87448891459325
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the field of Image Quality Assessment (IQA), the adversarial robustness of the metrics poses a critical concern. This paper presents a comprehensive benchmarking study of various defense mechanisms in response to the rise in adversarial attacks on IQA. We systematically evaluate 25 defense strategies, including adversarial purification, adversarial training, and certified robustness methods. We applied 14 adversarial attack algorithms of various types in both non-adaptive and adaptive settings and tested these defenses against them. We analyze the differences between defenses and their applicability to IQA tasks, considering that they should preserve IQA scores and image quality. The proposed benchmark aims to guide future developments and accepts submissions of new methods, with the latest results available online: https://videoprocessing.ai/benchmarks/iqa-defenses.html.
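The threat model behind the benchmark can be illustrated with a toy sketch: a one-step, FGSM-style attack that perturbs an image to inflate the score of a differentiable no-reference metric while keeping the perturbation inside an L∞ budget. The `toy_metric` below is an invented linear stand-in with an analytic gradient, not any metric or attack from the paper.

```python
import numpy as np

def toy_metric(img, w):
    # Toy differentiable "no-reference IQA metric": a weighted sum of pixels.
    return float(np.sum(w * img))

def metric_grad(img, w):
    # Analytic gradient of the toy metric w.r.t. the input image.
    return w

def fgsm_attack(img, w, eps=8 / 255):
    # One FGSM step in the direction that *increases* the predicted score,
    # bounded by eps in L-infinity norm and clamped to the valid pixel range.
    adv = img + eps * np.sign(metric_grad(img, w))
    return np.clip(adv, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.uniform(0.2, 0.8, size=(8, 8))   # toy "image" in [0, 1]
w = rng.standard_normal((8, 8))            # toy metric weights
adv = fgsm_attack(img, w)
```

Real attacks in the benchmark are iterative and operate on deep IQA models, but the structure (gradient ascent on the score under a perturbation budget) is the same.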
Related papers
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- Adversarial purification for no-reference image-quality metrics: applicability study and new methods [41.95502426953125]
In this paper, we apply several widespread attacks on IQA models and examine the success of the defences against them.
The purification methodologies covered different preprocessing techniques, including geometrical transformations, compression, denoising, and modern neural network-based methods.
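A minimal sketch of the purification idea, using a plain mean filter as the denoising step (the paper covers stronger preprocessing such as geometric transformations, compression, and neural denoisers; the filter here is only an assumption for illustration):

```python
import numpy as np

def box_blur(img, k=3):
    # k x k mean filter: a minimal stand-in for the denoising/compression
    # preprocessing that purification defenses apply before the metric
    # sees the input.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = padded[i:i + k, j:j + k].mean()
    return out

def purify(img):
    # Purification step: smooth away high-frequency perturbations,
    # then clamp back to the valid pixel range.
    return np.clip(box_blur(img), 0.0, 1.0)

clean = np.full((16, 16), 0.5)
# High-frequency (checkerboard) perturbation of magnitude eps, a common
# shape for adversarial noise that smoothing removes well.
eps = 8 / 255
checker = (np.indices(clean.shape).sum(axis=0) % 2) * 2 - 1
adv = np.clip(clean + eps * checker, 0.0, 1.0)
purified = purify(adv)
```

The key tension the benchmark measures is visible even here: the same blur that removes the perturbation also alters genuine image detail, so a purification defense must be judged on preserved IQA scores and image quality, not only on robustness.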
arXiv Detail & Related papers (2024-04-10T12:17:25Z)
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, while adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to the imperceptible adversarial perturbations in high-level image classification and attack-suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization [18.95463890154886]
No-Reference Image Quality Assessment (NR-IQA) models play a crucial role in the media industry.
These models are found to be vulnerable to adversarial attacks, which introduce imperceptible perturbations to input images.
We propose a defense method to improve the stability in predicted scores when attacked by small perturbations.
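The general idea named in the title, penalizing the norm of the metric's input gradient during training, can be sketched on a toy linear "metric", where the input gradient is just the weight vector and the penalty degenerates to weight shrinkage. This degenerate case is illustrative only and is not the authors' method:

```python
import numpy as np

def train(X, y, lam, steps=2000, lr=0.05):
    # Toy linear "metric" f(x) = w @ x trained by gradient descent on
    # MSE plus an input-gradient norm penalty. For a linear model,
    # d f / d x = w, so lam * ||d f/d x||^2 reduces to lam * ||w||^2.
    rng = np.random.default_rng(1)
    w = rng.standard_normal(X.shape[1]) * 0.1
    for _ in range(steps):
        pred = X @ w
        grad = 2 * X.T @ (pred - y) / len(y) + 2 * lam * w
        w -= lr * grad
    return w

# Invented data: "scores" are the mean pixel value of tiny flat images.
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(64, 16))
y = X.mean(axis=1)
w_plain = train(X, y, lam=0.0)  # undefended metric
w_reg = train(X, y, lam=2.0)    # gradient-norm-regularized metric
```

A smaller input-gradient norm means a bounded input perturbation can move the predicted score less, which is exactly the stability-under-small-perturbations property the abstract describes.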
arXiv Detail & Related papers (2024-03-18T01:11:53Z)
- Comparing the Robustness of Modern No-Reference Image- and Video-Quality Metrics to Adversarial Attacks [43.85564498709518]
This paper analyses modern metrics' robustness to different adversarial attacks.
Some metrics showed high resistance to adversarial attacks, which makes their usage in benchmarks safer than vulnerable metrics.
arXiv Detail & Related papers (2023-10-10T19:21:41Z)
- Measuring Equality in Machine Learning Security Defenses: A Case Study in Speech Recognition [56.69875958980474]
This work considers approaches to defending learned systems and how security defenses result in performance inequities across different sub-populations.
We find that many proposed methods can cause direct harm, such as false rejection and unequal benefits from robustness training.
We present a comparison of equality between two rejection-based defenses: randomized smoothing and neural rejection, finding randomized smoothing more equitable due to the sampling mechanism for minority groups.
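Randomized smoothing for a regression-style metric can be sketched as returning the median score over Gaussian-perturbed copies of the input. The `brittle_metric` below is an invented toy with a narrow adversarial "spike", not a real IQA model or the paper's exact construction:

```python
import numpy as np

def brittle_metric(img):
    # Invented toy metric: a tiny sliver of input space yields a wildly
    # inflated quality score, mimicking an adversarial failure mode.
    if 0.89 < img[0, 0] < 0.91:
        return 10.0
    return float(img.mean())

def smoothed_score(metric, img, sigma=0.1, n=101, seed=0):
    # Randomized-smoothing style defense: the robust score is the median
    # metric value over n Gaussian-perturbed copies of the input.
    rng = np.random.default_rng(seed)
    scores = [metric(img + rng.standard_normal(img.shape) * sigma)
              for _ in range(n)]
    return float(np.median(scores))

img = np.full((8, 8), 0.5)
adv = img.copy()
adv[0, 0] = 0.9  # adversarial pixel that triggers the spike
```

Because each evaluation draws fresh noise, few perturbed copies land in the spike, so the median stays near the honest score; that same per-input sampling mechanism is what the paper's equity analysis examines across sub-populations.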
arXiv Detail & Related papers (2023-02-17T16:19:26Z)
- Recent improvements of ASR models in the face of adversarial attacks [28.934863462633636]
Speech Recognition models are vulnerable to adversarial attacks.
We show that the relative strengths of different attack algorithms vary considerably when changing the model architecture.
We release our source code as a package that should help future researchers evaluate their attacks and defenses.
arXiv Detail & Related papers (2022-03-29T22:40:37Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack-agnostic, i.e., it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.