IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for
Visual Object Tracking
- URL: http://arxiv.org/abs/2103.14938v1
- Date: Sat, 27 Mar 2021 16:20:32 GMT
- Title: IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for
Visual Object Tracking
- Authors: Shuai Jia, Yibing Song, Chao Ma, Xiaokang Yang
- Abstract summary: Adversarial attacks arise from the vulnerability of deep neural networks to input samples injected with imperceptible perturbations.
We propose a decision-based black-box attack method for visual object tracking.
We validate the proposed IoU attack on state-of-the-art deep trackers.
- Score: 70.14487738649373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks arise from the vulnerability of deep neural
networks to input samples injected with imperceptible perturbations. Recently,
adversarial attack has been applied to visual object tracking to evaluate the
robustness of deep trackers. Assuming that the model structures of deep
trackers are known, a variety of white-box attack approaches to visual tracking
have demonstrated promising results. However, the model knowledge about deep
trackers is usually unavailable in real applications. In this paper, we propose
a decision-based black-box attack method for visual object tracking. In
contrast to existing black-box adversarial attack methods that deal with static
images for image classification, we propose IoU attack that sequentially
generates perturbations based on the predicted IoU scores from both current and
historical frames. By decreasing the IoU scores, the proposed attack method
degrades the accuracy of temporally coherent bounding boxes (i.e., object
motions) accordingly. In addition, we transfer the learned perturbations to the
next few frames to initialize temporal motion attack. We validate the proposed
IoU attack on state-of-the-art deep trackers (i.e., detection based,
correlation filter based, and long-term trackers). Extensive experiments on the
benchmark datasets indicate the effectiveness of the proposed IoU attack
method. The source code is available at
https://github.com/VISION-SJTU/IoUattack.
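A minimal sketch of the decision-based loop described above, assuming a black-box tracker that exposes only a per-frame bounding-box prediction. The `tracker.track` interface, the Gaussian noise schedule, and the weighting `lam` between current and historical IoU are illustrative assumptions, not the authors' implementation; the official code at the repository above is the reference.

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two boxes given as (x1, y1, x2, y2).
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def iou_attack_frame(tracker, frame, clean_box, hist_box, init_noise,
                     n_queries=20, sigma=8.0, lam=0.3):
    # Decision-based attack on one frame (hypothetical tracker interface:
    # tracker.track(image) -> predicted box). The score mixes the IoU
    # w.r.t. the clean prediction (current frame) and w.r.t. the previous
    # frame's box (historical); a candidate perturbation is kept only if
    # this weighted IoU decreases.
    def score(noise):
        adv = np.clip(frame + noise, 0, 255).astype(np.uint8)
        box = tracker.track(adv)          # black-box query: box only
        return (1 - lam) * iou(box, clean_box) + lam * iou(box, hist_box)

    best_noise, best_score = init_noise, score(init_noise)
    for _ in range(n_queries):
        cand = best_noise + np.random.normal(0.0, sigma, frame.shape)
        s = score(cand)
        if s < best_score:                # keep only IoU-decreasing noise
            best_noise, best_score = cand, s
    return best_noise, best_score
```

Per the abstract, the perturbation returned for one frame is transferred to the next few frames as `init_noise`, initializing the temporal motion attack so it stays coherent with the object's motion rather than restarting from scratch.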
Related papers
- Overload: Latency Attacks on Object Detection for Edge Devices [47.9744734181236]
This paper investigates latency attacks on deep learning applications.
Unlike common adversarial attacks for misclassification, the goal of latency attacks is to increase the inference time.
We use object detection to demonstrate how such attacks work.
arXiv Detail & Related papers (2023-04-11T17:24:31Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- Detect and Defense Against Adversarial Examples in Deep Learning using Natural Scene Statistics and Adaptive Denoising [12.378017309516965]
We propose a framework for defending DNNs against adversarial samples.
The detector aims to detect adversarial examples (AEs) by characterizing them through natural scene statistics (see the sketch below).
The proposed method outperforms the state-of-the-art defense techniques.
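As a sketch of the kind of feature such a detector can use: mean-subtracted contrast-normalized (MSCN) coefficients are a standard natural-scene-statistics feature (as in BRISQUE-style quality models). Whether the paper uses exactly this feature is an assumption, and the classifier on top is omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(gray, sigma=7.0 / 6.0, c=1.0):
    # Mean-subtracted contrast-normalized coefficients of a grayscale
    # float image. For natural images these follow a roughly generalized
    # Gaussian distribution; adversarial perturbations distort these
    # statistics, which is what an NSS-based detector exploits.
    mu = gaussian_filter(gray, sigma)                  # local mean
    var = gaussian_filter(gray * gray, sigma) - mu * mu
    sd = np.sqrt(np.clip(var, 0.0, None))              # local std
    return (gray - mu) / (sd + c)
```

A detector can then be an ordinary binary classifier (e.g., an SVM) trained on histogram or distribution-fit features of these coefficients from clean versus adversarial images.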
arXiv Detail & Related papers (2021-07-12T23:45:44Z)
- Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image at virtually no extra cost and still successfully fool the tracker; a sketch of this one-shot scheme follows.
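A minimal sketch of the one-shot scheme; the perturbation generator (trained offline from the template in that paper) is stubbed as a hypothetical `generator` callable, and the tracker interface is the same assumption as in the earlier sketch.

```python
import numpy as np

def attack_sequence(tracker, template, frames, generator):
    # One-shot: a single perturbation is derived from the template once,
    # then reused on every search frame with no per-frame queries or
    # optimization, in contrast to the per-frame loop sketched earlier.
    delta = generator(template)            # hypothetical offline generator
    boxes = []
    for frame in frames:
        adv = np.clip(frame + delta, 0, 255).astype(np.uint8)
        boxes.append(tracker.track(adv))   # tracker runs on perturbed frames
    return boxes
```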
arXiv Detail & Related papers (2020-12-30T15:05:53Z)
- Efficient Adversarial Attacks for Visual Object Tracking [73.43180372379594]
We present an end-to-end network, FAN (Fast Attack Network), that uses a novel drift loss combined with an embedded feature loss to attack Siamese network based trackers.
On a single GPU, FAN trains efficiently and achieves strong attack performance.
arXiv Detail & Related papers (2020-08-01T08:47:58Z)
- Robust Tracking against Adversarial Attacks [69.59717023941126]
We first attempt to generate adversarial examples on top of video sequences to improve the tracking robustness against adversarial attacks.
We apply the proposed adversarial attack and defense approaches to state-of-the-art deep tracking algorithms.
arXiv Detail & Related papers (2020-07-20T08:05:55Z)
- Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises [87.53808756910452]
A cooling-shrinking attack method is proposed to deceive state-of-the-art SiameseRPN-based trackers.
Our method has good transferability and is able to deceive other top-performance trackers such as DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP.
arXiv Detail & Related papers (2020-03-21T07:13:40Z)