Physical ID-Transfer Attacks against Multi-Object Tracking via Adversarial Trajectory
- URL: http://arxiv.org/abs/2512.01934v1
- Date: Mon, 01 Dec 2025 17:47:19 GMT
- Title: Physical ID-Transfer Attacks against Multi-Object Tracking via Adversarial Trajectory
- Authors: Chenyi Wang, Yanmao Man, Raymond Muller, Ming Li, Z. Berkay Celik, Ryan Gerdes, Jonathan Petit,
- Abstract summary: In this paper, we present AdvTraj, the first online and physical ID-manipulation attack against tracking-by-detection MOT. We show that AdvTraj can fool ID assignments with a 100% success rate in various scenarios for white-box attacks against SORT. We also propose two universal adversarial maneuvers that can be performed by a human walker/driver in daily scenarios.
- Score: 17.339598337831834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-Object Tracking (MOT) is a critical task in computer vision, with applications ranging from surveillance systems to autonomous driving. However, threats to MOT algorithms have yet to be widely studied. In particular, incorrect association between the tracked objects and their assigned IDs can lead to severe consequences, such as wrong trajectory predictions. Previous attacks against MOT either focused on hijacking the trackers of individual objects, or manipulated tracker IDs by attacking the integrated object detection (OD) module in the digital domain; such attacks are model-specific, non-robust, and only able to affect specific samples in offline datasets. In this paper, we present AdvTraj, the first online and physical ID-manipulation attack against tracking-by-detection MOT, in which an attacker uses adversarial trajectories to transfer its ID to a targeted object to confuse the tracking system, without attacking OD. Our simulation results in CARLA show that AdvTraj can fool ID assignments with a 100% success rate in various scenarios for white-box attacks against SORT, and also achieves high attack transferability (up to 93% attack success rate) against state-of-the-art (SOTA) MOT algorithms due to their common design principles. We characterize the patterns of trajectories generated by AdvTraj and propose two universal adversarial maneuvers that can be performed by a human walker/driver in daily scenarios. Our work reveals under-explored weaknesses in the object association phase of SOTA MOT systems, and provides insights into enhancing the robustness of such systems.
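To make the attacked stage concrete, the sketch below shows a minimal IoU-based association step of the kind used in tracking-by-detection pipelines such as SORT. The box format and the greedy matcher are illustrative simplifications (not the paper's implementation, which builds on Kalman prediction and Hungarian matching): the point is that IDs follow whichever detection best overlaps a track's predicted box, which is exactly what an adversarial trajectory can exploit.

```python
# Simplified IoU-based track-to-detection association (SORT-style sketch).
# Boxes are (x1, y1, x2, y2) tuples; the greedy matcher stands in for the
# Hungarian assignment used in real trackers.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def associate(tracks, detections, iou_threshold=0.3):
    """Greedily match track IDs to detections by highest IoU.

    `tracks` maps track_id -> predicted box; `detections` is a list of
    boxes. Returns {track_id: detection_index}. If an attacker's motion
    drags its predicted box onto a victim's detection, the victim can
    inherit the attacker's ID -- the failure mode AdvTraj induces.
    """
    pairs = sorted(
        ((iou(box, det), tid, di)
         for tid, box in tracks.items()
         for di, det in enumerate(detections)),
        reverse=True,
    )
    matched, used = {}, set()
    for score, tid, di in pairs:
        if score < iou_threshold or tid in matched or di in used:
            continue
        matched[tid] = di
        used.add(di)
    return matched
```

Because association is driven purely by geometric overlap between predicted and detected boxes, no perturbation of the detector's pixels is required: the attacker only needs to move so that the association step resolves ambiguously in its favor.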
Related papers
- BlindGuard: Safeguarding LLM-based Multi-Agent Systems under Unknown Attacks [58.959622170433725]
BlindGuard is an unsupervised defense method that learns without requiring any attack-specific labels or prior knowledge of malicious behaviors. We show that BlindGuard effectively detects diverse attack types (i.e., prompt injection, memory poisoning, and tool attack) across multi-agent systems.
arXiv Detail & Related papers (2025-08-11T16:04:47Z) - Assessing the Resilience of Automotive Intrusion Detection Systems to Adversarial Manipulation [6.349764856675644]
Adversarial attacks, particularly evasion attacks, can manipulate inputs to bypass detection by IDSs. We consider three scenarios: white-box (attacker with full system knowledge), grey-box (partial system knowledge), and the more realistic black-box. We evaluate the effectiveness of the proposed attacks against state-of-the-art IDSs on two publicly available datasets.
arXiv Detail & Related papers (2025-06-12T12:06:05Z) - PapMOT: Exploring Adversarial Patch Attack against Multiple Object Tracking [13.524551222453654]
PapMOT can generate physical adversarial patches against MOT for both digital and physical scenarios. We introduce a patch enhancement strategy to further degrade the temporal consistency of tracking results across video frames. We also validate the effectiveness of PapMOT for physical attacks by deploying printed adversarial patches in the real world.
arXiv Detail & Related papers (2025-04-12T22:45:52Z) - Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving [65.61999354218628]
We take the first step toward designing black-box adversarial attacks specifically targeting vision-language models (VLMs) in autonomous driving systems. We propose Cascading Adversarial Disruption (CAD), which targets low-level reasoning breakdown by generating and injecting semantics. We present Risky Scene Induction, which addresses dynamic adaptation by leveraging a surrogate VLM to understand and construct high-level risky scenarios.
arXiv Detail & Related papers (2025-01-23T11:10:02Z) - BankTweak: Adversarial Attack against Multi-Object Trackers by Manipulating Feature Banks [2.8931452761678345]
We present BankTweak, a novel adversarial attack designed for multi-object tracking (MOT) trackers.
Our method substantially surpasses existing attacks, exposing the vulnerability of the tracking-by-detection framework.
arXiv Detail & Related papers (2024-08-22T20:35:46Z) - Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z) - Unified Transformer Tracker for Object Tracking [58.65901124158068]
We present the Unified Transformer Tracker (UTT) to address tracking problems in different scenarios with one paradigm.
A track transformer is developed in our UTT to track the target in both Single Object Tracking (SOT) and Multiple Object Tracking (MOT).
arXiv Detail & Related papers (2022-03-29T01:38:49Z) - Few-Shot Backdoor Attacks on Visual Object Tracking [80.13936562708426]
Visual object tracking (VOT) has been widely adopted in mission-critical applications, such as autonomous driving and intelligent surveillance systems.
We show that an adversary can easily implant hidden backdoors into VOT models by tampering with the training process.
We show that our attack is resistant to potential defenses, highlighting the vulnerability of VOT models to potential backdoor attacks.
arXiv Detail & Related papers (2022-01-31T12:38:58Z) - Tracklet-Switch Adversarial Attack against Pedestrian Multi-Object Tracking Trackers [14.135239008740173]
We propose a novel adversarial attack method called Tracklet-Switch (TraSw) against the complete tracking pipeline of Multi-Object Tracking (MOT).
Experiments show that TraSw can achieve an extraordinarily high success attack rate of over 95% by attacking only four frames on average.
arXiv Detail & Related papers (2021-11-17T07:53:45Z) - Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image, which comes at virtually no cost, and still, successfully fool the tracker.
arXiv Detail & Related papers (2020-12-30T15:05:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.