Comparing the Robustness of Modern No-Reference Image- and Video-Quality
Metrics to Adversarial Attacks
- URL: http://arxiv.org/abs/2310.06958v4
- Date: Tue, 27 Feb 2024 08:34:43 GMT
- Title: Comparing the Robustness of Modern No-Reference Image- and Video-Quality
Metrics to Adversarial Attacks
- Authors: Anastasia Antsiferova, Khaled Abud, Aleksandr Gushchin, Ekaterina
Shumitskaya, Sergey Lavrushkin, Dmitriy Vatolin
- Abstract summary: This paper analyses modern metrics' robustness to different adversarial attacks.
Some metrics showed high resistance to adversarial attacks, which makes their use in benchmarks safer than that of vulnerable metrics.
- Score: 43.85564498709518
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, neural-network-based image- and video-quality metrics perform
better than traditional methods. However, they have also become more vulnerable to
adversarial attacks that increase metrics' scores without improving visual
quality. The existing benchmarks of quality metrics compare their performance
in terms of correlation with subjective quality and calculation time.
Nonetheless, the adversarial robustness of image-quality metrics is also an
area worth researching. This paper analyses modern metrics' robustness to
different adversarial attacks. We adapted adversarial attacks from computer
vision tasks and compared attacks' efficiency against 15 no-reference image-
and video-quality metrics. Some metrics showed high resistance to adversarial
attacks, which makes their use in benchmarks safer than that of vulnerable metrics.
The benchmark accepts new-metric submissions from researchers who want to
make their metrics more robust to attacks or to find robust metrics for their
needs. The latest results can be found online:
https://videoprocessing.ai/benchmarks/metrics-robustness.html.
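The attacks the paper adapts from computer vision are gradient-based: they perturb an image to raise a differentiable metric's score while keeping the change visually small. As a rough illustration only (not the authors' code), a PGD-style sign-gradient ascent looks like the sketch below, with a toy linear function standing in for a neural no-reference metric:

```python
import numpy as np

# Toy stand-in for a differentiable no-reference quality metric.
# Real attacks backpropagate through a neural metric; a simple
# linear score keeps this sketch self-contained and runnable.
def metric(img, w):
    return float(np.sum(w * img))

def metric_grad(img, w):
    return w  # analytic gradient of the toy linear metric

def iterative_attack(img, w, eps=8 / 255, alpha=1 / 255, steps=10):
    """PGD-style sign-gradient ascent that raises the metric score
    while keeping the perturbation inside an L-inf ball of radius eps."""
    adv = img.copy()
    for _ in range(steps):
        adv = adv + alpha * np.sign(metric_grad(adv, w))
        adv = np.clip(adv, img - eps, img + eps)  # project to the eps-ball
        adv = np.clip(adv, 0.0, 1.0)              # stay a valid image
    return adv
```

A robust metric is one whose score changes little under such bounded perturbations; a vulnerable one can be driven up arbitrarily within the same budget.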
Related papers
- Guardians of Image Quality: Benchmarking Defenses Against Adversarial Attacks on Image Quality Metrics [35.87448891459325]
This paper presents a comprehensive benchmarking study of various defense mechanisms in response to the rise in adversarial attacks on IQA.
We evaluate 25 defense strategies, including adversarial purification, adversarial training, and certified robustness methods.
We analyze the differences between defenses and their applicability to IQA tasks, considering that they should preserve IQA scores and image quality.
arXiv Detail & Related papers (2024-08-02T19:02:49Z)
- Ti-Patch: Tiled Physical Adversarial Patch for no-reference video quality metrics [3.7855740990304736]
No-reference image- and video-quality metrics are crucial in many computer vision tasks.
The vulnerability of quality metrics imposes restrictions on using such metrics in quality control systems.
This paper proposes a new method for testing quality metrics vulnerability in the physical space.
arXiv Detail & Related papers (2024-04-15T17:38:47Z)
- IOI: Invisible One-Iteration Adversarial Attack on No-Reference Image- and Video-Quality Metrics [4.135467749401761]
No-reference image- and video-quality metrics are widely used in video processing benchmarks.
This paper introduces an Invisible One-Iteration (IOI) adversarial attack on no-reference image- and video-quality metrics.
arXiv Detail & Related papers (2024-03-09T16:33:30Z)
- A Geometrical Approach to Evaluate the Adversarial Robustness of Deep Neural Networks [52.09243852066406]
Adversarial Converging Time Score (ACTS) measures the converging time as an adversarial robustness metric.
We validate the effectiveness and generalization of the proposed ACTS metric against different adversarial attacks on the large-scale ImageNet dataset.
arXiv Detail & Related papers (2023-10-10T09:39:38Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
Their defense, MESAS, is the first robust against strong adaptive adversaries and effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- Fast Adversarial CNN-based Perturbation Attack on No-Reference Image- and Video-Quality Metrics [0.0]
We propose a fast adversarial attack on no-reference quality metrics.
The proposed attack can be exploited as a preprocessing step in real-time video processing and compression algorithms.
This research can yield insights that further aid the design of stable neural-network-based no-reference quality metrics.
arXiv Detail & Related papers (2023-05-24T20:18:21Z)
- Video compression dataset and benchmark of learning-based video-quality metrics [55.41644538483948]
We present a new benchmark for video-quality metrics that evaluates video compression.
It is based on a new dataset consisting of about 2,500 streams encoded using different standards.
Subjective scores were collected using crowdsourced pairwise comparisons.
arXiv Detail & Related papers (2022-11-22T09:22:28Z)
- Universal Perturbation Attack on Differentiable No-Reference Image- and Video-Quality Metrics [0.0]
Some attacks can deceive image- and video-quality metrics.
We propose a new method to attack differentiable no-reference quality metrics through universal perturbation.
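A universal perturbation differs from a per-image attack in that one additive pattern is optimized over a whole image set, then applied unchanged to unseen images. As a hedged illustration only (a toy smooth metric, not the method from the paper above), the optimization loop can be sketched as:

```python
import numpy as np

# Toy smooth stand-in for a differentiable NR quality metric:
# score(x) = sum(w*x - x^2), so its gradient is w - 2x. A real
# universal attack backpropagates through a neural metric instead.
def metric(x, w):
    return float(np.sum(w * x - x ** 2))

def metric_grad(x, w):
    return w - 2.0 * x

def universal_perturbation(images, w, eps=4 / 255, alpha=0.5 / 255, steps=30):
    """One additive perturbation optimized to raise the MEAN metric
    score over the whole image set, clipped to an L-inf ball."""
    delta = np.zeros_like(images[0])
    for _ in range(steps):
        # gradient of the mean score with respect to the shared delta
        grad = np.mean([metric_grad(x + delta, w) for x in images], axis=0)
        delta = np.clip(delta + alpha * np.sign(grad), -eps, eps)
    return delta
```

Because the perturbation is precomputed, applying it at inference time costs one addition per pixel, which is what makes such attacks practical against metrics deployed in benchmarks.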
arXiv Detail & Related papers (2022-11-01T10:28:13Z)
- The Glass Ceiling of Automatic Evaluation in Natural Language Generation [60.59732704936083]
We take a step back and analyze recent progress by comparing the body of existing automatic metrics and human metrics.
Our extensive statistical analysis reveals surprising findings: automatic metrics -- old and new -- are much more similar to each other than to humans.
arXiv Detail & Related papers (2022-08-31T01:13:46Z)
- Bidimensional Leaderboards: Generate and Evaluate Language Hand in Hand [117.62186420147563]
We propose a generalization of leaderboards, bidimensional leaderboards (Billboards)
Unlike conventional unidimensional leaderboards that sort submitted systems by predetermined metrics, a Billboard accepts both generators and evaluation metrics as competing entries.
We demonstrate that a linear ensemble of a few diverse metrics sometimes substantially outperforms existing metrics in isolation.
arXiv Detail & Related papers (2021-12-08T06:34:58Z)
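The Billboards finding above, that a linear ensemble of a few diverse metrics can outperform any single metric, reduces to fitting least-squares weights against human judgments. A minimal sketch with a hypothetical data layout (rows are evaluated systems, columns are metric scores; not the Billboards code):

```python
import numpy as np

def fit_linear_ensemble(metric_scores, human_scores):
    """Least-squares weights (plus an intercept) that combine several
    automatic metrics into a single predictor of human scores."""
    X = np.column_stack([metric_scores, np.ones(len(metric_scores))])
    coef, *_ = np.linalg.lstsq(X, human_scores, rcond=None)
    return coef

def ensemble_predict(metric_scores, coef):
    """Apply the fitted weights to new metric scores."""
    X = np.column_stack([metric_scores, np.ones(len(metric_scores))])
    return X @ coef
```

In practice the fit would be cross-validated on held-out systems; the point of the sketch is only that the ensemble itself is a one-line linear model.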
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.