Fast Adversarial CNN-based Perturbation Attack on No-Reference Image- and Video-Quality Metrics
- URL: http://arxiv.org/abs/2305.15544v1
- Date: Wed, 24 May 2023 20:18:21 GMT
- Title: Fast Adversarial CNN-based Perturbation Attack on No-Reference Image- and Video-Quality Metrics
- Authors: Ekaterina Shumitskaya, Anastasia Antsiferova, Dmitriy Vatolin
- Abstract summary: We propose a fast adversarial attack on no-reference quality metrics.
The proposed attack can be exploited as a preprocessing step in real-time video processing and compression algorithms.
This research can yield insights that aid the design of stable neural-network-based no-reference quality metrics.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern neural-network-based no-reference image- and video-quality metrics
exhibit performance as high as full-reference metrics. These metrics are widely
used to improve visual quality in computer vision methods and compare video
processing methods. However, these metrics are not robust to traditional
adversarial attacks, which can cause them to produce incorrect results. Our goal is to
investigate the boundaries of no-reference metrics' applicability, and in this
paper, we propose a fast adversarial perturbation attack on no-reference
quality metrics. The proposed attack (FACPA) can be exploited as a
preprocessing step in real-time video processing and compression algorithms.
This research can yield insights that further aid the design of stable
neural-network-based no-reference quality metrics.
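The abstract describes FACPA only at a high level: a CNN is trained once to emit an additive perturbation that inflates a target no-reference metric, so that applying the attack at run time costs a single forward pass. Below is a minimal PyTorch sketch of that idea; every name (PerturbationGenerator, nr_metric, the eps bound, the layer widths) is an illustrative assumption rather than the authors' implementation.

```python
# Hedged sketch of a CNN-based perturbation attack in the spirit of FACPA.
# Assumes nr_metric is a differentiable no-reference quality model that
# maps a batch of images in [0, 1] to scalar quality scores.
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    """Small fully convolutional net mapping an image to an additive
    perturbation of the same shape, bounded to [-1, 1] by Tanh."""
    def __init__(self, channels=3, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.body(x)

def train_step(gen, nr_metric, images, optimizer, eps=4 / 255):
    """One training step: raise the metric's predicted quality while the
    Tanh output scaled by eps keeps the perturbation in an L-inf ball."""
    attacked = (images + eps * gen(images)).clamp(0, 1)
    loss = -nr_metric(attacked).mean()  # maximize predicted quality
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Once trained, the attack is one forward pass per frame, which is what
# would let it run as a preprocessing step in a real-time video pipeline:
# attacked = (frame + eps * gen(frame)).clamp(0, 1)
```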
Related papers
- IOI: Invisible One-Iteration Adversarial Attack on No-Reference Image- and Video-Quality Metrics [4.135467749401761]
No-reference image- and video-quality metrics are widely used in video processing benchmarks.
This paper introduces an Invisible One-Iteration (IOI) adversarial attack on no-reference image- and video-quality metrics.
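The summary does not state the update rule, but a one-iteration attack on a differentiable metric generically reduces to a single sign-gradient step, as in this hedged sketch (the function name, the eps value, and the omission of IOI's invisibility constraint are all assumptions):

```python
# Generic single-step sign-gradient attack on a differentiable
# no-reference metric. Illustrative only: the actual IOI method also
# constrains the perturbation to remain invisible.
import torch

def one_iteration_attack(image, nr_metric, eps=2 / 255):
    image = image.clone().detach().requires_grad_(True)
    nr_metric(image).mean().backward()
    attacked = image + eps * image.grad.sign()  # one step up the score
    return attacked.clamp(0, 1).detach()
```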
arXiv Detail & Related papers (2024-03-09T16:33:30Z)
- Video Dynamics Prior: An Internal Learning Approach for Robust Video Enhancements [83.5820690348833]
We present a framework for low-level vision tasks that does not require any external training data corpus.
Our approach learns the weights of neural modules by optimizing over the corrupted sequence, leveraging the spatio-temporal coherence and internal statistics of videos.
arXiv Detail & Related papers (2023-12-13T01:57:11Z)
- Comparing the Robustness of Modern No-Reference Image- and Video-Quality Metrics to Adversarial Attacks [43.85564498709518]
This paper analyses modern metrics' robustness to different adversarial attacks.
Some metrics showed high resistance to adversarial attacks, which makes their usage in benchmarks safer than vulnerable metrics.
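One way to make "resistance" concrete is to measure how far an attack can move a metric's score. A toy harness under assumed interfaces (metric maps images to scores; attack takes images and a metric); the paper's actual evaluation protocol may differ:

```python
import torch

def robustness_gain(metric, attack, images):
    # Mean rise in predicted quality under attack: a simple proxy for
    # (non-)robustness. A metric whose score barely moves under a strong
    # attack is the kind the paper deems safer for benchmarks.
    with torch.no_grad():
        clean = metric(images)
    adversarial = attack(images, metric)  # e.g. the one-step attack above
    with torch.no_grad():
        attacked = metric(adversarial)
    return (attacked - clean).mean().item()
```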
arXiv Detail & Related papers (2023-10-10T19:21:41Z)
- Towards Robust Text-Prompted Semantic Criterion for In-the-Wild Video Quality Assessment [54.31355080688127]
We introduce a text-prompted Semantic Affinity Quality Index (SAQI) and its localized version (SAQI-Local) using Contrastive Language-Image Pre-training (CLIP).
BVQI-Local demonstrates unprecedented performance, surpassing existing zero-shot indices by at least 24% on all datasets.
We conduct comprehensive analyses to investigate different quality concerns of distinct indices, demonstrating the effectiveness and rationality of our design.
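The summary gives the ingredients (CLIP plus text prompts) but not the computation. In the zero-shot spirit of SAQI, a text-prompted quality index can be formed from the affinity between an image embedding and antonymic quality prompts. This sketch uses the openai/clip package; the prompt pair, model choice, and pooling are assumptions (the paper's SAQI-Local and BVQI-Local are more elaborate):

```python
# Toy text-prompted quality index via CLIP affinity
# (pip install git+https://github.com/openai/CLIP.git).
import clip
import torch

def clip_quality_index(image, device="cpu"):
    # image: CLIP-preprocessed tensor of shape (1, 3, 224, 224)
    model, _ = clip.load("ViT-B/32", device=device)
    prompts = clip.tokenize(
        ["a high quality photo", "a low quality photo"]).to(device)
    with torch.no_grad():
        img = model.encode_image(image.to(device))
        txt = model.encode_text(prompts)
        img = img / img.norm(dim=-1, keepdim=True)
        txt = txt / txt.norm(dim=-1, keepdim=True)
        affinity = (100.0 * img @ txt.T).softmax(dim=-1)
    return affinity[0, 0].item()  # mass on the "high quality" prompt
```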
arXiv Detail & Related papers (2023-04-28T08:06:05Z)
- Video compression dataset and benchmark of learning-based video-quality metrics [55.41644538483948]
We present a new benchmark for video-quality metrics that evaluates video compression.
It is based on a new dataset consisting of about 2,500 streams encoded using different standards.
Subjective scores were collected using crowdsourced pairwise comparisons.
arXiv Detail & Related papers (2022-11-22T09:22:28Z)
- Universal Perturbation Attack on Differentiable No-Reference Image- and Video-Quality Metrics [0.0]
Some attacks can deceive image- and video-quality metrics.
We propose a new method to attack differentiable no-reference quality metrics through universal perturbation.
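Unlike an image-specific attack, a universal perturbation is a single tensor optimized over a whole dataset so that adding it to any image raises the metric's score. A hedged sketch of that optimization; shapes, hyperparameters, and the projection step are assumptions:

```python
# Optimize one image-agnostic perturbation against a differentiable
# no-reference metric. loader yields batches of shape (B, 3, 256, 256)
# with values in [0, 1]; all hyperparameters are illustrative.
import torch

def universal_perturbation(loader, nr_metric, eps=4 / 255,
                           epochs=5, lr=1e-3):
    delta = torch.zeros(1, 3, 256, 256, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(epochs):
        for images in loader:
            attacked = (images + delta).clamp(0, 1)
            loss = -nr_metric(attacked).mean()  # push scores up
            opt.zero_grad()
            loss.backward()
            opt.step()
            with torch.no_grad():               # project back into the
                delta.clamp_(-eps, eps)         # L-infinity ball
    return delta.detach()
```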
arXiv Detail & Related papers (2022-11-01T10:28:13Z)
- A Perceptual Quality Metric for Video Frame Interpolation [6.743340926667941]
As video frame interpolation results often exhibit unique artifacts, existing quality metrics are sometimes inconsistent with human perception when measuring the interpolation results.
Some recent deep-learning-based quality metrics have been shown to be more consistent with human judgments, but their performance on videos is compromised since they do not consider temporal information.
Our method learns perceptual features directly from videos instead of individual frames.
arXiv Detail & Related papers (2022-10-04T19:56:10Z)
- NSNet: Non-saliency Suppression Sampler for Efficient Video Recognition [89.84188594758588]
A novel Non-saliency Suppression Network (NSNet) is proposed to suppress the responses of non-salient frames.
NSNet achieves the state-of-the-art accuracy-efficiency trade-off and presents a significantly faster (2.4-4.3x) practical inference speed than state-of-the-art methods.
arXiv Detail & Related papers (2022-07-21T09:41:22Z)
- A Coding Framework and Benchmark towards Low-Bitrate Video Understanding [63.05385140193666]
We propose a traditional-neural mixed coding framework that takes advantage of both traditional codecs and neural networks (NNs).
The framework is optimized by ensuring that a transportation-efficient semantic representation of the video is preserved.
We build a low-bitrate video understanding benchmark with three downstream tasks on eight datasets, demonstrating the notable superiority of our approach.
arXiv Detail & Related papers (2022-02-06T16:29:15Z)
- A Variational Auto-Encoder Approach for Image Transmission in Wireless Channel [4.82810058837951]
We investigate the performance of variational auto-encoders and compare the results with standard auto-encoders.
Our experiments demonstrate improved visual quality of the reconstructed images at the receiver, as measured by the SSIM metric.
arXiv Detail & Related papers (2020-10-08T13:35:38Z)