Robust Tracking against Adversarial Attacks
- URL: http://arxiv.org/abs/2007.09919v2
- Date: Wed, 29 Jul 2020 08:03:25 GMT
- Title: Robust Tracking against Adversarial Attacks
- Authors: Shuai Jia, Chao Ma, Yibing Song, and Xiaokang Yang
- Abstract summary: We first attempt to generate adversarial examples on top of video sequences to improve the tracking robustness against adversarial attacks.
We apply the proposed adversarial attack and defense approaches to state-of-the-art deep tracking algorithms.
- Score: 69.59717023941126
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While deep convolutional neural networks (CNNs) are vulnerable to adversarial
attacks, little effort has been devoted to constructing robust deep
tracking algorithms against such attacks. Current studies on adversarial
attack and defense mainly focus on a single image. In this work, we first
attempt to generate adversarial examples on top of video sequences to improve
the tracking robustness against adversarial attacks. To this end, we take
temporal motion into consideration when generating lightweight perturbations
over the estimated tracking results frame-by-frame. On one hand, we add the
temporal perturbations into the original video sequences as adversarial
examples to greatly degrade the tracking performance. On the other hand, we
sequentially estimate the perturbations from input sequences and learn to
eliminate their effect for performance restoration. We apply the proposed
adversarial attack and defense approaches to state-of-the-art deep tracking
algorithms. Extensive evaluations on the benchmark datasets demonstrate that
our defense method not only eliminates the large performance drops caused by
adversarial attacks, but also achieves additional performance gains when deep
trackers are not under adversarial attacks.
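The abstract pairs a frame-by-frame attack that exploits temporal motion with a defense that estimates and removes the perturbations. The sketch below illustrates that loop in PyTorch under loudly stated assumptions: `tracker_loss`, `estimate_perturbation`, the FGSM-style update, and the warm-start heuristic are stand-ins, not the authors' exact formulation.

```python
import torch

def temporal_attack(frames, tracker_loss, alpha=2 / 255, eps=8 / 255):
    """Degrade tracking by adding lightweight perturbations frame-by-frame.

    frames: list of (C, H, W) tensors in [0, 1].
    tracker_loss: callable mapping a frame tensor to a scalar tracking loss.
    """
    delta = torch.zeros_like(frames[0])
    adv_frames = []
    for x in frames:
        # Warm-start from the previous frame's perturbation as a crude
        # stand-in for the paper's use of temporal motion.
        d = delta.clone().requires_grad_(True)
        tracker_loss(x + d).backward()
        # Ascend the tracking loss (FGSM-style step); keep the noise small.
        delta = (d + alpha * d.grad.sign()).clamp(-eps, eps).detach()
        adv_frames.append((x + delta).clamp(0, 1))
    return adv_frames

def temporal_defense(adv_frames, estimate_perturbation):
    """Defense sketch: sequentially estimate each frame's perturbation and
    subtract it before the frame reaches the tracker."""
    return [(x - estimate_perturbation(x)).clamp(0, 1) for x in adv_frames]
```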
Related papers
- Evaluating the Robustness of LiDAR Point Cloud Tracking Against Adversarial Attack [6.101494710781259]
We introduce a unified framework for conducting adversarial attacks within the context of 3D object tracking.
In addressing black-box attack scenarios, we introduce a novel transfer-based approach, the Target-aware Perturbation Generation (TAPG) algorithm.
Our experimental findings reveal a significant vulnerability in advanced tracking methods when subjected to both black-box and white-box attacks.
arXiv Detail & Related papers (2024-10-28T10:20:38Z)
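The TAPG entry above describes a transfer-based black-box attack. The generic transfer recipe looks roughly like the sketch below, where the differentiable `surrogate_loss` and the plain PGD loop are assumptions; TAPG's target-aware guidance is not reproduced here.

```python
import torch

def transfer_attack(points, surrogate_loss, alpha=0.05, steps=10, eps=0.5):
    """Black-box transfer sketch: craft a point-cloud perturbation against a
    differentiable white-box surrogate tracker, then reuse it on the unseen
    target tracker.

    points: (N, 3) tensor of LiDAR points.
    surrogate_loss: scalar tracking loss of the surrogate (an assumption).
    """
    delta = torch.zeros_like(points)
    for _ in range(steps):
        d = delta.clone().requires_grad_(True)
        surrogate_loss(points + d).backward()
        # PGD step: ascend the surrogate's loss, keep the offsets bounded.
        delta = (d + alpha * d.grad.sign()).clamp(-eps, eps).detach()
    return points + delta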
- Enhancing Tracking Robustness with Auxiliary Adversarial Defense Networks [1.7907721703063868]
Adversarial attacks in visual object tracking have significantly degraded the performance of advanced trackers.
We propose an effective auxiliary pre-processing defense network, AADN, which performs defensive transformations on the input images before feeding them into the tracker.
arXiv Detail & Related papers (2024-02-28T01:42:31Z)
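A defensive pre-processing network of the kind AADN describes can be pictured as a small module sitting in front of the tracker. The residual-denoiser architecture below is purely illustrative, not the paper's actual design.

```python
import torch.nn as nn

class PreprocessingDefense(nn.Module):
    """Hypothetical auxiliary pre-processing defense: a small residual
    denoiser applied to input images before they reach the tracker.
    All layer sizes are illustrative assumptions."""

    def __init__(self, channels=3, width=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, channels, 3, padding=1),
        )

    def forward(self, x):
        # Predict the adversarial residual and remove it from the input.
        return (x - self.body(x)).clamp(0, 1)
```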
- Efficient universal shuffle attack for visual object tracking [12.338273740874891]
We propose an offline universal adversarial attack called Efficient Universal Shuffle Attack (EUSA).
A single perturbation suffices to make the tracker malfunction on all videos.
Experimental results show that EUSA can significantly reduce the performance of state-of-the-art trackers.
arXiv Detail & Related papers (2022-03-14T07:48:06Z)
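A universal attack of the EUSA kind optimizes one perturbation offline and reuses it on every video. Below is a minimal sketch, with `tracker_loss` and the sign-gradient update as assumptions and only a crude nod to the paper's shuffle strategy.

```python
import random
import torch

def train_universal_perturbation(videos, tracker_loss, eps=8 / 255,
                                 alpha=1 / 255, epochs=5):
    """Offline universal-attack sketch: optimize one perturbation over many
    training videos, then add it unchanged to every frame of every test
    video. EUSA's actual optimization differs; this is an assumption-laden
    stand-in."""
    delta = torch.zeros_like(videos[0][0])
    for _ in range(epochs):
        for frames in videos:
            frames = list(frames)
            random.shuffle(frames)  # crude nod to the shuffle strategy
            for x in frames:
                d = delta.clone().requires_grad_(True)
                tracker_loss(x + d).backward()
                delta = (d + alpha * d.grad.sign()).clamp(-eps, eps).detach()
    return delta  # added unchanged to every frame at attack time
```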
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer used in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the generalization ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
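The MAMA entry describes learning the attack optimizer itself. One plausible minimal form, with the elementwise LSTM and all sizes as assumptions, is a recurrent module that replaces PGD's hand-crafted sign() step; at meta-training time it would be unrolled over attack iterations against a set of defenses and trained to maximize the adversarial loss.

```python
import torch
import torch.nn as nn

class LearnedAttackOptimizer(nn.Module):
    """Sketch of a learned attack optimizer: an RNN maps the current input
    gradient to an update direction. The elementwise LSTM and the sizes
    here are illustrative assumptions, not MAMA's exact architecture."""

    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, grad, state=None):
        g = grad.reshape(-1, 1)           # treat every pixel independently
        h, c = self.cell(g, state)
        step = self.out(h).reshape(grad.shape)
        return step, (h, c)               # perturbation update + RNN state
```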
- TREATED: Towards Universal Defense against Textual Adversarial Attacks [28.454310179377302]
We propose TREATED, a universal adversarial detection method that can defend against attacks of various perturbation levels without making any assumptions.
Extensive experiments on three competitive neural networks and two widely used datasets show that our method achieves better detection performance than baselines.
arXiv Detail & Related papers (2021-09-13T03:31:20Z)
- Improving the Adversarial Robustness for Speaker Verification by Self-Supervised Learning [95.60856995067083]
This work is among the first to perform adversarial defense for automatic speaker verification (ASV) without knowing the specific attack algorithms.
We propose to perform adversarial defense from two perspectives: 1) adversarial perturbation purification and 2) adversarial perturbation detection.
Experimental results show that our detection module effectively shields the ASV by detecting adversarial samples with an accuracy of around 80%.
arXiv Detail & Related papers (2021-06-01T07:10:54Z)
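One way to picture the detection module from the speaker-verification entry: compare the ASV embedding of an input before and after purification and flag large disagreements. Here `purify` (e.g. a self-supervised reconstruction model), `embed` (the speaker encoder), and the threshold `tau` are all assumptions.

```python
import torch
import torch.nn.functional as F

def detect_adversarial(utterance, purify, embed, tau=0.5):
    """Detection sketch: purify the input, embed both versions, and flag
    the sample when the two embeddings disagree too much."""
    e_raw = embed(utterance)
    e_pur = embed(purify(utterance))
    dist = 1 - F.cosine_similarity(e_raw, e_pur, dim=-1)
    return bool(dist > tau)  # True -> likely adversarial, reject the input
```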
- Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image at virtually no cost and still successfully fools the tracker.
arXiv Detail & Related papers (2020-12-30T15:05:53Z)
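The one-shot idea from the entry above: compute a single perturbation from the template, then add it to every search image. The pretrained `generator` network and the assumption that its output matches the search-image resolution are both illustrative.

```python
import torch

def attack_video(template, search_frames, generator, eps=8 / 255):
    """One-shot attack sketch: a generator produces a single perturbation
    from the object template, which is then added to every search image
    at essentially no extra cost per frame."""
    with torch.no_grad():
        delta = generator(template).clamp(-eps, eps)  # computed once
    return [(x + delta).clamp(0, 1) for x in search_frames]
```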
- Guided Adversarial Attack for Evaluating and Enhancing Adversarial Defenses [59.58128343334556]
We introduce a relaxation term to the standard loss that finds more suitable gradient directions, increases attack efficacy, and leads to more efficient adversarial training.
We propose Guided Adversarial Margin Attack (GAMA), which utilizes function mapping of the clean image to guide the generation of adversaries.
We also propose Guided Adversarial Training (GAT), which achieves state-of-the-art performance amongst single-step defenses.
arXiv Detail & Related papers (2020-11-30T16:39:39Z)
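A sketch of one guided attack step in the spirit of GAMA: a margin loss is relaxed with an L2 term tying the adversarial output to the clean image's output, which is the "function mapping of the clean image" the summary mentions. The margin formulation and the decay schedule for `lam` are simplified assumptions.

```python
import torch
import torch.nn.functional as F

def guided_attack_step(model, x, x_adv, y, lam):
    """One guided attack step (sketch): maximize a margin loss plus an L2
    relaxation toward the clean prediction; return the ascent direction."""
    x_adv = x_adv.clone().requires_grad_(True)
    p_clean = F.softmax(model(x), dim=1).detach()
    p_adv = F.softmax(model(x_adv), dim=1)
    true = p_adv.gather(1, y[:, None]).squeeze(1)
    mask = F.one_hot(y, p_adv.size(1)).bool()
    other = p_adv.masked_fill(mask, 0).max(dim=1).values
    loss = ((other - true) + lam * (p_adv - p_clean).pow(2).sum(dim=1)).mean()
    loss.backward()
    return x_adv.grad.sign()  # ascent direction for the attacker's update
```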
- Temporal Sparse Adversarial Attack on Sequence-based Gait Recognition [56.844587127848854]
We demonstrate that the state-of-the-art gait recognition model is vulnerable to such attacks.
We employ a generative adversarial network (GAN)-based architecture to semantically generate high-quality adversarial gait silhouettes or video frames.
The experimental results show that if only one-fortieth of the frames are attacked, the accuracy of the target model drops dramatically.
arXiv Detail & Related papers (2020-02-22T10:08:42Z)
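The temporally sparse attack from the gait entry perturbs only a small fraction of frames (the summary reports large accuracy drops at one-fortieth). Below is a minimal sketch, with `perturb` standing in for the paper's GAN-based silhouette generator and random frame selection as an assumption.

```python
import random

def temporally_sparse_attack(frames, perturb, ratio=1 / 40):
    """Sparse attack sketch: perturb only a small, randomly chosen subset
    of the gait sequence and leave the remaining frames untouched."""
    k = max(1, int(len(frames) * ratio))
    idx = set(random.sample(range(len(frames)), k))
    return [perturb(x) if i in idx else x for i, x in enumerate(frames)]
```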
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.