LogoStyleFool: Vitiating Video Recognition Systems via Logo Style Transfer
- URL: http://arxiv.org/abs/2312.09935v2
- Date: Mon, 1 Apr 2024 05:57:55 GMT
- Title: LogoStyleFool: Vitiating Video Recognition Systems via Logo Style Transfer
- Authors: Yuxin Cao, Ziyu Zhao, Xi Xiao, Derui Wang, Minhui Xue, Jin Lu
- Abstract summary: We propose a novel attack framework named LogoStyleFool, which adds a stylized logo to the clean video.
We separate the attack into three stages: style reference selection, reinforcement-learning-based logo style transfer, and perturbation optimization.
Experimental results substantiate the overall superiority of LogoStyleFool over three state-of-the-art patch-based attacks in terms of attack performance and semantic preservation.
- Score: 17.191978308873814
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Video recognition systems are vulnerable to adversarial examples. Recent studies show that style transfer-based and patch-based unrestricted perturbations can effectively improve attack efficiency. These attacks, however, face two main challenges: 1) Adding large stylized perturbations to all pixels reduces the naturalness of the video, and such perturbations can be easily detected. 2) Patch-based video attacks are not extensible to targeted attacks because of the limited search space of reinforcement learning, which has recently been widely used in video attacks. In this paper, we focus on the video black-box setting and propose a novel attack framework named LogoStyleFool, which adds a stylized logo to the clean video. We separate the attack into three stages: style reference selection, reinforcement-learning-based logo style transfer, and perturbation optimization. We solve the first challenge by scaling down the perturbation range to a regional logo, while the second challenge is addressed by complementing an optimization stage after reinforcement learning. Experimental results substantiate the overall superiority of LogoStyleFool over three state-of-the-art patch-based attacks in terms of attack performance and semantic preservation. Meanwhile, LogoStyleFool maintains its performance against two existing patch-based defense methods. We believe that our research can help draw the security community's attention to such subregional style transfer attacks.
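To make the three-stage pipeline concrete, below is a minimal, self-contained Python sketch of a LogoStyleFool-style targeted black-box attack. It is an illustration under stated assumptions, not the authors' implementation: the toy `query_model`, the blending-based `stylize`, and the random placement search (standing in for the paper's reinforcement-learning agent) are all hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
T, H, W, C = 8, 64, 64, 3              # toy video: frames, height, width, channels
NUM_CLASSES, TARGET = 10, 3            # hypothetical label space and target class

# Placeholder black-box classifier: a fixed random projection of spatially
# weighted pixel statistics. Real attacks query the victim model instead.
PROJ = rng.normal(size=(NUM_CLASSES, C))
W_SPATIAL = rng.random((H, W, 1))

def query_model(video: np.ndarray) -> np.ndarray:
    feats = (video * W_SPATIAL).mean(axis=(0, 1, 2))       # (C,)
    logits = PROJ @ feats
    e = np.exp(logits - logits.max())
    return e / e.sum()

def stylize(logo: np.ndarray, style: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """Toy 'style transfer': blend the logo with a same-size style image."""
    return (1 - alpha) * logo + alpha * style

def overlay_logo(video: np.ndarray, logo: np.ndarray, x: int, y: int) -> np.ndarray:
    """Paste the same logo onto every frame at position (x, y)."""
    out = video.copy()
    h, w = logo.shape[:2]
    out[:, y:y + h, x:x + w, :] = logo
    return out

video = rng.random((T, H, W, C))
logo = rng.random((16, 16, C))
styles = [rng.random((16, 16, C)) for _ in range(5)]       # candidate style images

# Stage 1: style reference selection -- keep the style whose stylized logo the
# model already scores highest for the target class.
style = max(styles, key=lambda s: query_model(
    overlay_logo(video, stylize(logo, s), 0, 0))[TARGET])

# Stage 2: logo placement. The paper trains an RL agent over placement/style
# parameters; plain random search stands in for that agent here.
best_pos, best_score = (0, 0), -1.0
for _ in range(50):
    x, y = int(rng.integers(0, W - 16)), int(rng.integers(0, H - 16))
    score = query_model(overlay_logo(video, stylize(logo, style), x, y))[TARGET]
    if score > best_score:
        best_pos, best_score = (x, y), score

# Stage 3: perturbation optimization restricted to the logo region, via
# simple two-sided random-direction queries (a SimBA-like stand-in).
adv = overlay_logo(video, stylize(logo, style), *best_pos)
x, y = best_pos
for _ in range(200):
    delta = np.zeros_like(adv)
    delta[:, y:y + 16, x:x + 16, :] = 0.05 * rng.standard_normal((T, 16, 16, C))
    for cand in (np.clip(adv + delta, 0, 1), np.clip(adv - delta, 0, 1)):
        if query_model(cand)[TARGET] > query_model(adv)[TARGET]:
            adv = cand
            break

print("target-class probability:", float(query_model(adv)[TARGET]))
```

The structural points the sketch preserves are that perturbations stay confined to the logo region (addressing challenge 1) and that a query-based optimization stage follows the placement search (addressing challenge 2).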
Related papers
- UniVST: A Unified Framework for Training-free Localized Video Style Transfer [66.69471376934034]
This paper presents UniVST, a unified framework for localized video style transfer.
It operates without the need for training, offering a distinct advantage over existing methods that transfer style across entire videos.
arXiv Detail & Related papers (2024-10-26T05:28:02Z)
- Query-Efficient Video Adversarial Attack with Stylized Logo [17.268709979991996]
Video classification systems based on Deep Neural Networks (DNNs) are highly vulnerable to adversarial examples.
We propose a novel black-box video attack framework called Stylized Logo Attack (SLA).
SLA is conducted in three steps. The first step builds a style reference set for logos, which not only makes the generated examples more natural but also carries more target-class features in targeted attacks; a minimal sketch of such a reference-set builder follows this entry.
arXiv Detail & Related papers (2024-08-22T03:19:09Z)
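A minimal sketch of the style-reference-set idea described above, assuming a pool of candidate style images and a toy stand-in for the attacked model; all names and set sizes are illustrative, not from the SLA paper.

```python
import numpy as np

rng = np.random.default_rng(1)
NUM_CLASSES, TARGET, SET_SIZE = 10, 3, 4       # hypothetical sizes
PROJ = rng.normal(size=(NUM_CLASSES, 3))       # stand-in black-box classifier

def query_model(image: np.ndarray) -> np.ndarray:
    """Placeholder model: class probabilities from mean color statistics."""
    logits = PROJ @ image.mean(axis=(0, 1))
    e = np.exp(logits - logits.max())
    return e / e.sum()

candidates = [rng.random((32, 32, 3)) for _ in range(20)]  # candidate style images

# Keep the styles the model associates most strongly with the target class,
# so that logos stylized with them already lean toward the target label.
reference_set = sorted(candidates,
                       key=lambda s: query_model(s)[TARGET],
                       reverse=True)[:SET_SIZE]
print([round(float(query_model(s)[TARGET]), 3) for s in reference_set])
```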
- LocalStyleFool: Regional Video Style Transfer Attack Using Segment Anything Model [19.37714374680383]
LocalStyleFool is an improved black-box video adversarial attack that superimposes regional style-transfer-based perturbations on videos.
We demonstrate that LocalStyleFool can improve both intra-frame and inter-frame naturalness through a human-assessed survey.
arXiv Detail & Related papers (2024-03-18T10:53:00Z)
- Generating Transferable and Stealthy Adversarial Patch via Attention-guided Adversarial Inpainting [12.974292128917222]
We propose an innovative two-stage adversarial patch attack called Adv-Inpainting.
In the first stage, we extract style features and identity features from the attacker and target faces, respectively.
The proposed fusion layer can adaptively fuse identity and style embeddings by fully exploiting prior contextual information; a toy fusion sketch follows this entry.
In the second stage, we design an Adversarial Patch Refinement Network (APR-Net) with a novel boundary variance loss.
arXiv Detail & Related papers (2023-08-10T03:44:10Z)
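A toy sketch of the stage-one style/identity fusion described above. The extractors, the sigmoid gate, and all dimensions are assumptions; the paper's actual networks are not specified by this summary.

```python
import numpy as np

rng = np.random.default_rng(5)
D_img, D_emb = 128, 32
attacker_face = rng.random(D_img)     # flattened toy face images
target_face = rng.random(D_img)

W_style = rng.normal(size=(D_emb, D_img)) / np.sqrt(D_img)
W_id = rng.normal(size=(D_emb, D_img)) / np.sqrt(D_img)

style_emb = np.tanh(W_style @ attacker_face)   # style from the attacker face
id_emb = np.tanh(W_id @ target_face)           # identity from the target face

# Adaptive fusion: a learned gate (random here) weighs the two embeddings
# element-wise, a simple stand-in for context-aware fusion.
gate = 1 / (1 + np.exp(-rng.normal(size=D_emb)))
fused = gate * id_emb + (1 - gate) * style_emb

# Stage two (APR-Net) would refine a patch decoded from `fused`; omitted here.
print(fused.shape, float(fused.mean()))
```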
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves state-of-the-art attack success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z)
- StyleFool: Fooling Video Classification Systems via Style Transfer [28.19682215735232]
StyleFool is a black-box video adversarial attack via style transfer to fool the video classification system.
StyleFool outperforms the state-of-the-art adversarial attacks in terms of the number of queries and the robustness against existing defenses.
arXiv Detail & Related papers (2022-03-30T02:18:16Z)
- Projective Ranking-based GNN Evasion Attacks [52.85890533994233]
Graph neural networks (GNNs) offer promising learning methods for graph-related tasks.
However, GNNs are at risk of adversarial attacks.
arXiv Detail & Related papers (2022-02-25T21:52:09Z)
- Attacking Video Recognition Models with Bullet-Screen Comments [79.53159486470858]
We introduce a novel adversarial attack on video recognition models based on bullet-screen comments (BSCs).
BSCs can be regarded as a kind of meaningful patch: adding them to a clean video neither affects people's understanding of the video content nor arouses suspicion. A minimal sketch of overlaying such a scrolling text patch follows this entry.
arXiv Detail & Related papers (2021-10-29T08:55:50Z)
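A minimal sketch of overlaying a scrolling bullet-screen comment on a clean video, assuming the BSC is a precomputed patch with an alpha mask; a real attack would render actual text and choose its content and trajectory (e.g., with reinforcement learning).

```python
import numpy as np

rng = np.random.default_rng(2)
T, H, W = 8, 64, 96
video = rng.random((T, H, W, 3))

# Stand-in "comment": an h x w RGB patch plus an alpha mask (text pixels opaque).
h, w = 10, 30
patch = np.ones((h, w, 3))                           # white text block
alpha = (rng.random((h, w, 1)) > 0.5).astype(float)  # toy glyph mask

def overlay_bsc(video, patch, alpha, row, speed=8):
    """Alpha-blend the patch onto each frame, scrolling right-to-left."""
    out = video.copy()
    T, H, W, _ = video.shape
    h, w = patch.shape[:2]
    for t in range(T):
        col = W - w - speed * t                      # bullet comments scroll left
        if 0 <= col <= W - w:
            region = out[t, row:row + h, col:col + w, :]
            out[t, row:row + h, col:col + w, :] = (
                alpha * patch + (1 - alpha) * region)
    return out

adv = overlay_bsc(video, patch, alpha, row=20)
print(adv.shape, float(np.abs(adv - video).mean()))
```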
- Overcomplete Representations Against Adversarial Videos [72.04912755926524]
We propose a novel Over-and-Under complete restoration network for Defending against adversarial videos (OUDefend).
OUDefend is designed to balance local and global features by learning those two representations; a toy sketch of such an over/under-complete encoder pair follows this entry.
Experimental results show that defenses focusing on images may be ineffective for videos, while OUDefend enhances robustness against different types of adversarial videos.
arXiv Detail & Related papers (2020-12-08T08:00:17Z)
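A toy sketch of pairing an undercomplete (global, compressed) encoder with an overcomplete (local, expanded) encoder, the balance the OUDefend summary describes. Weights are random stand-ins, not a trained restoration network.

```python
import numpy as np

rng = np.random.default_rng(3)
D = 64                                  # flattened input size (toy frame)
x = rng.random(D)

W_under = rng.normal(size=(16, D)) / np.sqrt(D)   # 64 -> 16: bottleneck
W_over = rng.normal(size=(256, D)) / np.sqrt(D)   # 64 -> 256: expansion

z_global = np.tanh(W_under @ x)   # undercomplete: forced to keep global structure
z_local = np.tanh(W_over @ x)     # overcomplete: capacity to keep local detail

# A restoration head would decode the fused code back to a clean frame.
W_dec = rng.normal(size=(D, 16 + 256)) / np.sqrt(16 + 256)
restored = W_dec @ np.concatenate([z_global, z_local])
print(z_global.shape, z_local.shape, restored.shape)
```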
- Towards Feature Space Adversarial Attack [18.874224858723494]
We propose a new adversarial attack on Deep Neural Networks for image classification.
Our attack focuses on perturbing abstract features, more specifically, features that denote styles; a toy style-statistics perturbation is sketched after this entry.
We show that our attack can generate adversarial samples that are more natural-looking than the state-of-the-art attacks.
arXiv Detail & Related papers (2020-04-26T13:56:31Z)
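A toy sketch of a feature-space "style" perturbation: shifting the channel-wise mean/std statistics of an intermediate feature map rather than individual pixels. The feature extractor is a random stand-in; the actual attack optimizes through the victim network.

```python
import numpy as np

rng = np.random.default_rng(4)
C_in, C_feat, H, W = 3, 8, 16, 16
image = rng.random((C_in, H, W))

W_conv = rng.normal(size=(C_feat, C_in)) / np.sqrt(C_in)  # toy 1x1 "conv"
feat = np.einsum('oc,chw->ohw', W_conv, image)

# Style statistics: per-channel mean and std (as in AdaIN-style transfer).
mu = feat.mean(axis=(1, 2), keepdims=True)
sigma = feat.std(axis=(1, 2), keepdims=True) + 1e-6

# Perturb the statistics, not individual pixels: small shifts in style space.
mu_adv = mu + 0.1 * rng.standard_normal(mu.shape)
sigma_adv = sigma * np.exp(0.1 * rng.standard_normal(sigma.shape))
feat_adv = (feat - mu) / sigma * sigma_adv + mu_adv

print(float(np.abs(feat_adv - feat).mean()))
```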
- Deflecting Adversarial Attacks [94.85315681223702]
We present a new approach towards ending this attack-defense cycle: we "deflect" adversarial attacks by causing the attacker to produce an input that resembles the attack's target class.
We first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance.
arXiv Detail & Related papers (2020-02-18T06:59:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.