Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises
- URL: http://arxiv.org/abs/2003.09595v1
- Date: Sat, 21 Mar 2020 07:13:40 GMT
- Title: Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises
- Authors: Bin Yan and Dong Wang and Huchuan Lu and Xiaoyun Yang
- Abstract summary: A cooling-shrinking attack method is proposed to deceive state-of-the-art SiameseRPN-based trackers.
Our method has good transferability and is able to deceive other top-performance trackers such as DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP.
- Score: 87.53808756910452
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial attacks on CNNs aim to deceive models into misbehaving by adding
imperceptible perturbations to images. Studying such attacks helps us understand
neural networks more deeply and improve the robustness of deep learning models.
Although several works have focused on attacking image classifiers and object
detectors, an effective and efficient method for attacking single object
trackers of any target in a model-free way remains lacking. In this paper, a
cooling-shrinking attack method is proposed to deceive state-of-the-art
SiameseRPN-based trackers. An effective and efficient perturbation generator is
trained with a carefully designed adversarial loss, which can simultaneously
cool hot regions where the target exists on the heatmaps and force the
predicted bounding box to shrink, making the tracked target invisible to
trackers. Numerous experiments on OTB100, VOT2018, and LaSOT datasets show that
our method can effectively fool the state-of-the-art SiameseRPN++ tracker by
adding small perturbations to the template or the search regions. Besides, our
method has good transferability and is able to deceive other top-performance
trackers such as DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP. The source codes are
available at https://github.com/MasterBin-IIAU/CSA.
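The abstract above describes a two-part adversarial objective: a cooling term that suppresses the classification heat-map at the positions covering the target, and a shrinking term that pushes the regression branch toward a collapsed box. The snippet below is a minimal PyTorch-style sketch of such a loss for a SiamRPN-style head; it is illustrative rather than the released implementation (see the repository linked above), and the tensor shapes, hinge margins, and the lambda_shrink weight are assumptions.

```python
import torch

def cooling_shrinking_loss(score_map, box_deltas, target_mask,
                           lambda_shrink=1.0, margin_cool=-5.0, margin_shrink=-1.0):
    """Illustrative cooling-shrinking adversarial loss (not the official code).

    score_map   : (N, 2, H, W) background/foreground logits from a SiamRPN-style
                  head, computed on the perturbed template or search region.
    box_deltas  : (N, 4, H, W) regression outputs (dx, dy, dw, dh).
    target_mask : (N, H, W) boolean mask of positions covering the true target.
    """
    bg, fg = score_map[:, 0], score_map[:, 1]

    # Cooling: drive the foreground logit below the background logit on the
    # "hot" positions; the hinge stops the gradient once margin_cool is reached.
    cool = torch.clamp(fg - bg, min=margin_cool)[target_mask].mean()

    # Shrinking: push the predicted width/height offsets negative so the
    # reported box collapses, again hinged at margin_shrink.
    dw, dh = box_deltas[:, 2], box_deltas[:, 3]
    shrink = (torch.clamp(dw, min=margin_shrink)[target_mask].mean() +
              torch.clamp(dh, min=margin_shrink)[target_mask].mean())

    # Minimizing this value cools the heat-map and shrinks the box at once.
    return cool + lambda_shrink * shrink
```

In the attack described above, this kind of loss would drive a perturbation generator; a norm penalty on the generated noise (e.g. an L2 term) would typically be added so the perturbation stays imperceptible.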
Related papers
- TEN-GUARD: Tensor Decomposition for Backdoor Attack Detection in Deep Neural Networks [3.489779105594534]
We introduce a novel approach to backdoor detection using two tensor decomposition methods applied to network activations.
This has a number of advantages relative to existing detection methods, including the ability to analyze multiple models at the same time.
Results show that our method detects backdoored networks more accurately and efficiently than current state-of-the-art methods.
arXiv Detail & Related papers (2024-01-06T03:08:28Z)
- Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z)
- IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking [70.14487738649373]
Adversarial attacks arise from the vulnerability of deep neural networks to input samples injected with imperceptible perturbations.
We propose a decision-based black-box attack method for visual object tracking; a minimal sketch of such a decision-based loop is given after this list.
We validate the proposed IoU attack on state-of-the-art deep trackers.
arXiv Detail & Related papers (2021-03-27T16:20:32Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image at virtually no cost and still successfully fools the tracker.
arXiv Detail & Related papers (2020-12-30T15:05:53Z)
- Efficient Adversarial Attacks for Visual Object Tracking [73.43180372379594]
We present an end-to-end network, FAN (Fast Attack Network), that uses a novel drift loss combined with an embedded feature loss to attack Siamese-network-based trackers.
On a single GPU, FAN trains efficiently and delivers strong attack performance.
arXiv Detail & Related papers (2020-08-01T08:47:58Z)
- Miss the Point: Targeted Adversarial Attack on Multiple Landmark Detection [29.83857022733448]
This paper is the first to study how fragile a CNN-based model for multiple landmark detection is to adversarial perturbations.
We propose a novel Adaptive Targeted Iterative FGSM attack against the state-of-the-art models in multiple landmark detection.
arXiv Detail & Related papers (2020-07-10T07:58:35Z)
- Luring of transferable adversarial perturbations in the black-box paradigm [0.0]
We present a new approach to improve the robustness of a model against black-box transfer attacks.
A removable additional neural network is included in the target model, and is designed to induce the luring effect.
Our deception-based method only needs to have access to the predictions of the target model and does not require a labeled data set.
arXiv Detail & Related papers (2020-04-10T06:48:36Z)
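As a companion to the IoU Attack entry above, the following is a minimal sketch of a generic decision-based black-box loop: it queries the tracker, proposes random noise within an L-infinity budget, and keeps only noise that lowers the IoU between the tracker's clean and perturbed predictions. The `track` callable, query budget, and noise schedule are assumptions, and the temporal-coherence aspect of the actual IoU attack (carrying noise across frames) is omitted for brevity.

```python
import numpy as np

def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: max(0.0, r[2] - r[0]) * max(0.0, r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def decision_based_attack(frame, track, eps=8.0, queries=50, step=2.0, seed=0):
    """Query-only attack sketch: accept random noise that reduces the IoU
    between the tracker's clean prediction and its prediction on the noisy frame.

    frame: HxWx3 uint8 search image; track: callable image -> box (x1, y1, x2, y2).
    eps: L_inf budget in pixel values; queries/step: assumed budget and step size.
    """
    rng = np.random.default_rng(seed)
    clean_box = track(frame)
    noise = np.zeros(frame.shape, dtype=np.float32)
    best_iou = 1.0

    for _ in range(queries):
        # Propose a random signed step, projected back into the L_inf ball.
        trial = np.clip(noise + rng.choice([-step, step], size=frame.shape), -eps, eps)
        adv = np.clip(frame.astype(np.float32) + trial, 0, 255).astype(np.uint8)
        score = iou(clean_box, track(adv))
        if score < best_iou:              # keep the noise only if the IoU drops
            best_iou, noise = score, trial

    adv = np.clip(frame.astype(np.float32) + noise, 0, 255).astype(np.uint8)
    return adv, best_iou
```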