PapMOT: Exploring Adversarial Patch Attack against Multiple Object Tracking
- URL: http://arxiv.org/abs/2504.09361v1
- Date: Sat, 12 Apr 2025 22:45:52 GMT
- Title: PapMOT: Exploring Adversarial Patch Attack against Multiple Object Tracking
- Authors: Jiahuan Long, Tingsong Jiang, Wen Yao, Shuai Jia, Weijia Zhang, Weien Zhou, Chao Ma, Xiaoqian Chen,
- Abstract summary: PapMOT can generate physical adversarial patches against MOT for both digital and physical scenarios. We introduce a patch enhancement strategy to further degrade the temporal consistency of tracking results across video frames. We also validate the effectiveness of PapMOT for physical attacks by deploying printed adversarial patches in the real world.
- Score: 13.524551222453654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tracking multiple objects in a continuous video stream is crucial for many computer vision tasks. It involves detecting and associating objects with their respective identities across successive frames. Despite significant progress made in multiple object tracking (MOT), recent studies have revealed the vulnerability of existing MOT methods to adversarial attacks. Nevertheless, all of these attacks belong to digital attacks that inject pixel-level noise into input images, and are therefore ineffective in physical scenarios. To fill this gap, we propose PapMOT, which can generate physical adversarial patches against MOT for both digital and physical scenarios. Besides attacking the detection mechanism, PapMOT also optimizes a printable patch that can be detected as new targets to mislead the identity association process. Moreover, we introduce a patch enhancement strategy to further degrade the temporal consistency of tracking results across video frames, resulting in more aggressive attacks. We further develop new evaluation metrics to assess the robustness of MOT against such attacks. Extensive evaluations on multiple datasets demonstrate that our PapMOT can successfully attack various architectures of MOT trackers in digital scenarios. We also validate the effectiveness of PapMOT for physical attacks by deploying printed adversarial patches in the real world.
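As a rough illustration of what "optimizing a printable patch" means mechanically, the sketch below runs gradient descent on only the patch pixels of an image to suppress a detector's objectness score. This is a hypothetical toy, not the authors' method: the linear `detection_logit`, the weights, and the patch region are all invented for the example, standing in for a full MOT detector that a real attack would backpropagate through.

```python
import numpy as np

def detection_logit(image, w):
    """Toy stand-in for a detector's objectness logit: a linear score.
    A real attack would backpropagate through the full MOT detector."""
    return float(np.dot(w, image.ravel()))

def optimize_patch(image, w, patch_region, steps=100, lr=0.1):
    """Gradient descent on the patch pixels only, to suppress the logit.
    For the linear toy score, the gradient w.r.t. the image is just w."""
    adv = image.copy()
    grad = w.reshape(image.shape)
    for _ in range(steps):
        adv[patch_region] -= lr * grad[patch_region]  # update patch area only
        adv = np.clip(adv, 0.0, 1.0)  # keep pixel values valid/printable
    return adv

rng = np.random.default_rng(0)
img = rng.uniform(0.3, 0.7, size=(8, 8))   # dummy image crop
w = rng.normal(size=64)                    # dummy detector weights
patch = (slice(0, 4), slice(0, 4))         # patch covers the top-left 4x4 pixels

before = detection_logit(img, w)
after = detection_logit(optimize_patch(img, w, patch), w)
```

Note that this toy only captures the detection-suppression objective; PapMOT additionally optimizes the patch to be detected as a new target (misleading identity association) and applies a patch enhancement strategy against temporal consistency, which a sketch like this omits.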
Related papers
- PBCAT: Patch-based composite adversarial training against physically realizable attacks on object detection [27.75925749085402]
Adversarial Training has been recognized as the most effective defense against adversarial attacks. We propose PBCAT, a novel Patch-Based Composite Adversarial Training strategy.
arXiv Detail & Related papers (2025-06-30T07:36:21Z)
- BankTweak: Adversarial Attack against Multi-Object Trackers by Manipulating Feature Banks [2.8931452761678345]
We present BankTweak, a novel adversarial attack designed for multi-object tracking (MOT) trackers.
Our method substantially surpasses existing attacks, exposing the vulnerability of the tracking-by-detection framework.
arXiv Detail & Related papers (2024-08-22T20:35:46Z)
- Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv Detail & Related papers (2024-01-03T13:58:35Z)
- MVPatch: More Vivid Patch for Adversarial Camouflaged Attacks on Object Detectors in the Physical World [7.1343035828597685]
We introduce generalization theory into the context of Adversarial Patches (APs).
We propose a Dual-Perception-Based Framework (DPBF) to generate the More Vivid Patch (MVPatch), which enhances transferability, stealthiness, and practicality.
MVPatch achieves superior transferability and a natural appearance in both digital and physical domains, underscoring its effectiveness and stealthiness.
arXiv Detail & Related papers (2023-12-29T01:52:22Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals the threats in this practical scenario that backdoor attacks can remain effective even after defenses.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- DRSM: De-Randomized Smoothing on Malware Classifier Providing Certified Robustness [58.23214712926585]
We develop a certified defense, DRSM (De-Randomized Smoothed MalConv), by redesigning the de-randomized smoothing technique for the domain of malware detection.
Specifically, we propose a window ablation scheme to provably limit the impact of adversarial bytes while maximally preserving local structures of the executables.
We are the first to offer certified robustness in the realm of static detection of malware executables.
arXiv Detail & Related papers (2023-03-20T17:25:22Z)
- CBA: Contextual Background Attack against Optical Aerial Detection in the Physical World [8.826711009649133]
Patch-based physical attacks have raised increasing concern.
Most existing methods focus on obscuring targets captured on the ground, and some of these methods are simply extended to deceive aerial detectors.
We propose Contextual Background Attack (CBA), a novel physical attack framework against aerial detection that achieves strong attack efficacy and transferability in the physical world without smudging the objects of interest at all.
arXiv Detail & Related papers (2023-02-27T05:10:27Z)
- Effectiveness of Moving Target Defenses for Adversarial Attacks in ML-based Malware Detection [0.0]
Moving target defenses (MTDs) to counter adversarial ML attacks have been proposed in recent years.
We study for the first time the effectiveness of several recent MTDs for adversarial ML attacks applied to the malware detection domain.
We show that transferability and query attack strategies can achieve high levels of evasion against these defenses.
arXiv Detail & Related papers (2023-02-01T16:03:34Z)
- Benchmarking Adversarial Patch Against Aerial Detection [11.591143898488312]
A novel adaptive-patch-based physical attack (AP-PA) framework is proposed.
AP-PA generates adversarial patches that are adaptive in both physical dynamics and varying scales.
We establish one of the first comprehensive, coherent, and rigorous benchmarks to evaluate the attack efficacy of adversarial patches on aerial detection tasks.
arXiv Detail & Related papers (2022-10-30T07:55:59Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image at virtually no cost, and still successfully fools the tracker.
arXiv Detail & Related papers (2020-12-30T15:05:53Z)
- Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks [65.20660287833537]
In this paper we propose two extensions of the PGD-attack overcoming failures due to suboptimal step size and problems of the objective function.
We then combine our novel attacks with two complementary existing ones to form a parameter-free, computationally affordable and user-independent ensemble of attacks to test adversarial robustness.
arXiv Detail & Related papers (2020-03-03T18:15:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.