Few-Shot Backdoor Attacks on Visual Object Tracking
- URL: http://arxiv.org/abs/2201.13178v1
- Date: Mon, 31 Jan 2022 12:38:58 GMT
- Title: Few-Shot Backdoor Attacks on Visual Object Tracking
- Authors: Yiming Li, Haoxiang Zhong, Xingjun Ma, Yong Jiang, Shu-Tao Xia
- Abstract summary: Visual object tracking (VOT) has been widely adopted in mission-critical applications, such as autonomous driving and intelligent surveillance systems.
We show that an adversary can easily implant hidden backdoors into VOT models by tampering with the training process.
We show that our attack is resistant to potential defenses, highlighting the vulnerability of VOT models to potential backdoor attacks.
- Score: 80.13936562708426
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual object tracking (VOT) has been widely adopted in mission-critical
applications, such as autonomous driving and intelligent surveillance systems.
In current practice, third-party resources such as datasets, backbone networks,
and training platforms are frequently used to train high-performance VOT
models. Whilst these resources bring certain convenience, they also introduce
new security threats into VOT models. In this paper, we reveal such a threat
where an adversary can easily implant hidden backdoors into VOT models by
tampering with the training process. Specifically, we propose a simple yet
effective few-shot backdoor attack (FSBA) that optimizes two losses
alternately: 1) a \emph{feature loss} defined in the hidden feature space, and
2) the standard \emph{tracking loss}. We show that, once the backdoor is
embedded into the target model by our FSBA, it can trick the model to lose
track of specific objects even when the \emph{trigger} only appears in one or a
few frames. We examine our attack in both digital and physical-world settings
and show that it can significantly degrade the performance of state-of-the-art
VOT trackers. We also show that our attack is resistant to potential defenses,
highlighting the vulnerability of VOT models to potential backdoor attacks.
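The alternating optimization described in the abstract can be sketched in code. The following is a minimal, hypothetical illustration under assumed components: a toy feature network, a fixed corner-patch trigger, and placeholder losses standing in for the paper's actual feature loss and tracking loss (real VOT trackers such as SiamFC-style models are not reproduced here).

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy feature extractor standing in for a tracker backbone (assumption).
feature_net = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
trigger = torch.rand(1, 3, 8, 8)  # fixed trigger patch (assumption)

opt = torch.optim.SGD(feature_net.parameters(), lr=0.01)

def apply_trigger(frames):
    # Stamp the trigger patch into the top-left corner of each frame.
    out = frames.clone()
    out[:, :, :8, :8] = trigger
    return out

def feature_loss(frames):
    # Feature loss in the hidden feature space: push features of
    # triggered frames away from clean ones (maximize the distance,
    # i.e. minimize its negation).
    clean = feature_net(frames)
    poisoned = feature_net(apply_trigger(frames))
    return -torch.norm(clean - poisoned, dim=1).mean()

def tracking_loss(frames, targets):
    # Placeholder for the standard tracking loss; a simple regression
    # toy here, not the paper's actual objective.
    return ((feature_net(frames) - targets) ** 2).mean()

frames = torch.rand(4, 3, 32, 32)
targets = torch.rand(4, 8)

for step in range(10):
    # Alternate the two objectives: even steps optimize the feature
    # loss, odd steps the standard tracking loss.
    loss = feature_loss(frames) if step % 2 == 0 else tracking_loss(frames, targets)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The intent of the alternation is that the model keeps normal tracking performance on clean frames while its hidden features become sensitive to the trigger, so that stamping the trigger on even a few frames derails tracking.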
Related papers
- Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor [63.84477483795964]
Data-poisoning backdoor attacks are serious security threats to machine learning models.
In this paper, we focus on in-training backdoor defense, aiming to train a clean model even when the dataset may be potentially poisoned.
We propose a novel defense approach called PDB (Proactive Defensive Backdoor)
arXiv Detail & Related papers (2024-05-25T07:52:26Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals the threats in this practical scenario that backdoor attacks can remain effective even after defenses.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attack is an emerging and serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA)
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- SoK: A Systematic Evaluation of Backdoor Trigger Characteristics in Image Classification [21.424907311421197]
Deep learning is vulnerable to backdoor attacks that modify the training set to embed a secret functionality in the trained model.
This paper systematically analyzes the most relevant parameters for the backdoor attacks.
Our attacks cover the majority of backdoor settings in research, providing concrete directions for future work.
arXiv Detail & Related papers (2023-02-03T14:00:05Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into failing to detect any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defenses that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- WaNet -- Imperceptible Warping-based Backdoor Attack [20.289889150949836]
A third-party model can be poisoned in training to work well in normal conditions but behave maliciously when a trigger pattern appears.
In this paper, we propose using warping-based triggers to attack third-party models.
The proposed backdoor outperforms the previous methods in a human inspection test by a wide margin, proving its stealthiness.
arXiv Detail & Related papers (2021-02-20T15:25:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.