DIMBA: Discretely Masked Black-Box Attack in Single Object Tracking
- URL: http://arxiv.org/abs/2207.08044v1
- Date: Sun, 17 Jul 2022 00:17:40 GMT
- Title: DIMBA: Discretely Masked Black-Box Attack in Single Object Tracking
- Authors: Xiangyu Yin, Wenjie Ruan, Jonathan Fieldsend
- Abstract summary: An adversarial attack can force a CNN-based model to produce an incorrect output by adding carefully crafted, human-imperceptible perturbations to its input.
We propose a novel adversarial attack method that generates noise for single object tracking under black-box settings.
Our method requires fewer queries on the frames of a video to achieve competitive or even better attack performance.
- Score: 5.672132510411465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An adversarial attack can force a CNN-based model to produce an incorrect
output by adding carefully crafted, human-imperceptible perturbations to its input.
Exploring such perturbations helps us gain a deeper understanding of the vulnerability
of neural networks and makes deep learning more robust against miscellaneous
adversaries. Despite extensive studies on the robustness of image, audio, and NLP
models, work on adversarial examples for visual object tracking -- especially in a
black-box manner -- is still scarce. In this paper, we propose a novel adversarial
attack method that generates noise for single object tracking under black-box
settings, where perturbations are added only to the initial frames of tracking
sequences, making them difficult to notice from the perspective of a whole video
clip. Specifically, we divide our algorithm into three components and exploit
reinforcement learning to localize important frame patches precisely while reducing
unnecessary query overhead. Compared to existing techniques, our method requires
fewer queries on the initial frames of a video to achieve competitive or even better
attack performance. We test our algorithm on both long-term and short-term datasets,
including OTB100, VOT2018, UAV123, and LaSOT. Extensive experiments demonstrate
the effectiveness of our method on three mainstream types of trackers:
discrimination-based, Siamese-based, and reinforcement learning-based trackers.
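The abstract only describes the attack at a high level, so the snippet below is a minimal sketch rather than the authors' method: it replaces the paper's reinforcement-learning patch localization with plain random search and assumes a hypothetical query_tracker black-box interface, but it illustrates the core loop of perturbing only the initial frame and scoring each candidate by the IoU drop over the whole sequence.

```python
import numpy as np

def query_tracker(frames, init_box):
    """Hypothetical black-box interface: given a list of H x W x 3 frames and an
    initial box (x, y, w, h), return the tracker's predicted box per frame.
    Wrap your own tracker here; DIMBA's actual pipeline is not reproduced."""
    raise NotImplementedError("wrap your black-box tracker here")

def iou(a, b):
    """IoU of two boxes given as (x, y, w, h)."""
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def attack_initial_frame(frames, init_box, ref_boxes,
                         patch_size=32, eps=8.0, n_queries=100, seed=0):
    """Random-search stand-in for the patch attack: perturb only frame 0 and keep
    the masked noise that most degrades mean IoU against reference boxes
    (ground truth, or the tracker's clean predictions used as pseudo ground truth)."""
    rng = np.random.default_rng(seed)
    h, w = frames[0].shape[:2]
    best_frames = frames
    best_score = np.mean(
        [iou(p, g) for p, g in zip(query_tracker(frames, init_box), ref_boxes)])
    for _ in range(n_queries):
        # Pick a candidate patch location and a bounded noise pattern.
        y = rng.integers(0, h - patch_size)
        x = rng.integers(0, w - patch_size)
        noise = rng.uniform(-eps, eps, (patch_size, patch_size, 3))
        cand = [f.copy() for f in frames]
        cand[0][y:y + patch_size, x:x + patch_size] = np.clip(
            cand[0][y:y + patch_size, x:x + patch_size] + noise, 0, 255)
        # One black-box query over the whole sequence per candidate.
        score = np.mean(
            [iou(p, g) for p, g in zip(query_tracker(cand, init_box), ref_boxes)])
        if score < best_score:  # lower mean IoU means a stronger attack
            best_frames, best_score = cand, score
    return best_frames, best_score
```

In the paper, the random patch proposals above are where the reinforcement-learning component comes in: a learned policy proposes which patches of the initial frame to perturb, which is what keeps the number of queries low.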
Related papers
- Beyond Pretrained Features: Noisy Image Modeling Provides Adversarial
Defense [52.66971714830943]
Masked image modeling (MIM) has become a prevailing framework for self-supervised visual representation learning.
In this paper, we investigate how this powerful self-supervised learning paradigm can provide adversarial robustness to downstream classifiers.
We propose an adversarial defense method, referred to as De3, by exploiting the pretrained decoder for denoising.
arXiv Detail & Related papers (2023-02-02T12:37:24Z) - RamBoAttack: A Robust Query Efficient Deep Neural Network Decision
Exploit [9.93052896330371]
We develop a robust query efficient attack capable of avoiding entrapment in a local minimum and misdirection from noisy gradients.
The RamBoAttack is more robust to the different sample inputs available to an adversary and the targeted class.
arXiv Detail & Related papers (2021-12-10T01:25:24Z) - PAT: Pseudo-Adversarial Training For Detecting Adversarial Videos [20.949656274807904]
We propose a novel yet simple algorithm called Pseudo-Adversarial Training (PAT) to detect the adversarial frames in a video without requiring knowledge of the attack.
Experimental results on UCF-101 and 20BN-Jester datasets show that PAT can detect the adversarial video frames and videos with a high detection rate.
arXiv Detail & Related papers (2021-09-13T04:05:46Z) - IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for
Visual Object Tracking [70.14487738649373]
Adversarial attacks arise from the vulnerability of deep neural networks to input samples injected with imperceptible perturbations.
We propose a decision-based black-box attack method for visual object tracking.
We validate the proposed IoU attack on state-of-the-art deep trackers.
arXiv Detail & Related papers (2021-03-27T16:20:32Z) - A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z) - Stylized Adversarial Defense [105.88250594033053]
Adversarial training creates perturbation patterns and includes them in the training set to robustify the model.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
arXiv Detail & Related papers (2020-07-29T08:38:10Z) - Robust Tracking against Adversarial Attacks [69.59717023941126]
We first attempt to generate adversarial examples on top of video sequences to improve the tracking robustness against adversarial attacks.
We apply the proposed adversarial attack and defense approaches to state-of-the-art deep tracking algorithms.
arXiv Detail & Related papers (2020-07-20T08:05:55Z) - Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises [87.53808756910452]
A cooling-shrinking attack method is proposed to deceive state-of-the-art SiameseRPN-based trackers.
Our method has good transferability and is able to deceive other top-performance trackers such as DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP.
arXiv Detail & Related papers (2020-03-21T07:13:40Z)