BankTweak: Adversarial Attack against Multi-Object Trackers by Manipulating Feature Banks
- URL: http://arxiv.org/abs/2408.12727v1
- Date: Thu, 22 Aug 2024 20:35:46 GMT
- Title: BankTweak: Adversarial Attack against Multi-Object Trackers by Manipulating Feature Banks
- Authors: Woojin Shin, Donghwa Kang, Daejin Choi, Brent Kang, Jinkyu Lee, Hyeongboo Baek
- Abstract summary: We present BankTweak, a novel adversarial attack designed for multi-object tracking (MOT) trackers.
Our method substantially surpasses existing attacks, exposing the vulnerability of the tracking-by-detection framework.
- Score: 2.8931452761678345
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-object tracking (MOT) aims to construct moving trajectories for objects, and modern multi-object trackers mainly utilize the tracking-by-detection methodology. Initial approaches to MOT attacks primarily aimed to degrade the detection quality of the frames under attack, thereby reducing accuracy only in those specific frames, highlighting a lack of efficiency. To improve efficiency, recent advancements manipulate object positions to cause persistent identity (ID) switches during the association phase, even after the attack ends within a few frames. However, these position-manipulating attacks have inherent limitations, as they can be easily counteracted by adjusting distance-related parameters in the association phase, revealing a lack of robustness. In this paper, we present BankTweak, a novel adversarial attack designed for MOT trackers, which features both efficiency and robustness. BankTweak focuses on the feature extractor in the association phase and reveals a vulnerability in the Hungarian matching method used by feature-based MOT systems. Exploiting this vulnerability, BankTweak induces persistent ID switches (addressing efficiency) even after the attack ends, by strategically injecting altered features into the feature banks without modifying object positions (addressing robustness). To demonstrate its applicability, we apply BankTweak to three multi-object trackers (DeepSORT, StrongSORT, and MOTDT) with one-stage, two-stage, anchor-free, and transformer detectors. Extensive experiments on the MOT17 and MOT20 datasets show that our method substantially surpasses existing attacks, exposing the vulnerability of the tracking-by-detection framework to BankTweak.
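To make the attack surface concrete, the sketch below (a simplified illustration under assumed details, not the authors' released code) shows how a DeepSORT-style tracker associates detections to existing tracks by running the Hungarian algorithm over an appearance-cost matrix computed against per-track feature banks. The FeatureBank class, the cosine-distance cost, and the bank size of one are illustrative assumptions; the point is only that features stored in the bank persist across frames, so a crafted feature injected during the attacked frames can keep flipping assignments, and hence IDs, after the attack ends, without touching object positions.

```python
# Illustrative sketch of feature-bank association with Hungarian matching.
# Not BankTweak itself: FeatureBank, the cosine cost, and max_size=1 are
# assumptions made for brevity (real trackers keep many features per track).
import numpy as np
from scipy.optimize import linear_sum_assignment


class FeatureBank:
    """Keeps the most recent appearance features for each track ID."""

    def __init__(self, max_size: int = 1):
        self.max_size = max_size
        self.bank = {}  # track_id -> list of L2-normalized features

    def update(self, track_id, feature):
        # Features stored here persist across frames, which is why a single
        # injected feature can keep influencing association after an attack.
        f = feature / np.linalg.norm(feature)
        self.bank.setdefault(track_id, []).append(f)
        self.bank[track_id] = self.bank[track_id][-self.max_size:]

    def distance(self, track_id, det_feature):
        # Smallest cosine distance between the detection and any banked feature.
        d = det_feature / np.linalg.norm(det_feature)
        return float(np.min(1.0 - np.stack(self.bank[track_id]) @ d))


def associate(bank, track_ids, det_features):
    """Hungarian matching on appearance cost alone (positions untouched)."""
    cost = np.array([[bank.distance(t, f) for f in det_features] for t in track_ids])
    rows, cols = linear_sum_assignment(cost)
    return [(track_ids[r], int(c)) for r, c in zip(rows, cols)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    f_a, f_b = rng.normal(size=128), rng.normal(size=128)

    bank = FeatureBank()
    bank.update(1, f_a)  # track 1 looks like object A
    bank.update(2, f_b)  # track 2 looks like object B

    # Clean frames: each detection is matched to its own track.
    print(associate(bank, [1, 2], [f_a, f_b]))  # [(1, 0), (2, 1)]

    # If crafted features displace the clean ones in the banks (here track 1's
    # bank now holds B's appearance, and vice versa), the minimum-cost
    # assignment swaps the IDs in every later frame, even though no
    # bounding-box position was ever modified.
    bank.update(1, f_b)
    bank.update(2, f_a)
    print(associate(bank, [1, 2], [f_a, f_b]))  # [(1, 1), (2, 0)]
```

Because the manipulation lives in the feature bank rather than in object positions, retuning distance-related association thresholds (the countermeasure that defeats position-manipulating attacks, per the abstract) does not remove the mismatch.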
Related papers
- YOLOv8-SMOT: An Efficient and Robust Framework for Real-Time Small Object Tracking via Slice-Assisted Training and Adaptive Association [5.07987775511372]
This paper details our championship-winning solution in the MVA 2025 "Finding Birds" Small Multi-Object Tracking Challenge (SMOT4SB). It adopts the tracking-by-detection paradigm with targeted innovations at both the detection and association levels. Our method achieves state-of-the-art performance on the SMOT4SB public test set, reaching an SO-HOTA score of 55.205.
arXiv Detail & Related papers (2025-07-16T09:51:19Z) - Robustifying 3D Perception via Least-Squares Graphs for Multi-Agent Object Tracking [43.11267507022928]
This paper proposes a novel mitigation framework for 3D LiDAR scenes against adversarial noise. We employ the least-squares graph tool to reduce the induced positional error of each detection's centroid. An extensive evaluation study on the real-world V2V4Real dataset demonstrates that the proposed method significantly outperforms both single- and multi-agent tracking frameworks.
arXiv Detail & Related papers (2025-07-07T08:41:08Z) - Robust Anti-Backdoor Instruction Tuning in LVLMs [53.766434746801366]
We introduce a lightweight, certified-agnostic defense framework for large visual language models (LVLMs). Our framework fine-tunes only adapter modules and text embedding layers under instruction tuning. Experiments against seven attacks on Flickr30k and MSCOCO demonstrate that our defense reduces the attack success rate to nearly zero.
arXiv Detail & Related papers (2025-06-04T01:23:35Z) - PapMOT: Exploring Adversarial Patch Attack against Multiple Object Tracking [13.524551222453654]
PapMOT can generate physical adversarial patches against MOT for both digital and physical scenarios.
We introduce a patch enhancement strategy to further degrade the temporal consistency of tracking results across video frames.
We also validate the effectiveness of PapMOT for physical attacks by deploying printed adversarial patches in the real world.
arXiv Detail & Related papers (2025-04-12T22:45:52Z) - AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection [9.539021752700823]
AnywhereDoor is a multi-target backdoor attack for object detection.
It allows adversaries to make objects disappear, fabricate new ones or mislabel them, either across all object classes or specific ones.
It improves attack success rates by 26% compared to adaptations of existing methods for such flexible control.
arXiv Detail & Related papers (2025-03-09T09:24:24Z) - Twin Trigger Generative Networks for Backdoor Attacks against Object Detection [14.578800906364414]
Object detectors, which are widely used in real-world applications, are vulnerable to backdoor attacks.
Most research on backdoor attacks has focused on image classification, with limited investigation into object detection.
We propose novel twin trigger generative networks to generate invisible triggers for implanting backdoors into models during training, and visible triggers for steady activation during inference.
arXiv Detail & Related papers (2024-11-23T03:46:45Z) - AnywhereDoor: Multi-Target Backdoor Attacks on Object Detection [9.539021752700823]
AnywhereDoor is a flexible backdoor attack tailored for object detection.
It provides attackers with a high degree of control, achieving an attack success rate improvement of nearly 80%.
arXiv Detail & Related papers (2024-11-21T15:50:59Z) - Detector Collapse: Physical-World Backdooring Object Detection to Catastrophic Overload or Blindness in Autonomous Driving [17.637155085620634]
Detector Collapse (DC) is a brand-new backdoor attack paradigm tailored for object detection.
DC is designed to instantly incapacitate detectors (i.e., severely impairing a detector's performance and culminating in a denial of service).
We introduce a novel poisoning strategy exploiting natural objects, enabling DC to act as a practical backdoor in real-world environments.
arXiv Detail & Related papers (2024-04-17T13:12:14Z) - Multi-granular Adversarial Attacks against Black-box Neural Ranking Models [111.58315434849047]
We create high-quality adversarial examples by incorporating multi-granular perturbations.
We transform the multi-granular attack into a sequential decision-making process.
Our attack method surpasses prevailing baselines in both attack effectiveness and imperceptibility.
arXiv Detail & Related papers (2024-04-02T02:08:29Z) - Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv Detail & Related papers (2024-01-03T13:58:35Z) - Distractor-Aware Fast Tracking via Dynamic Convolutions and MOT Philosophy [63.91005999481061]
A practical long-term tracker typically contains three key properties, i.e. an efficient model design, an effective global re-detection strategy and a robust distractor awareness mechanism.
We propose a two-task tracking framework (named DMTrack) to achieve distractor-aware fast tracking via dynamic convolutions (d-convs) and the multiple object tracking (MOT) philosophy.
Our tracker achieves state-of-the-art performance on the LaSOT, OxUvA, TLP, VOT2018LT and VOT2019LT benchmarks and runs in real time (3x faster).
arXiv Detail & Related papers (2021-04-25T00:59:53Z) - Online Multiple Object Tracking with Cross-Task Synergy [120.70085565030628]
We propose a novel unified model with synergy between position prediction and embedding association.
The two tasks are linked by temporal-aware target attention and distractor attention, as well as an identity-aware memory aggregation model.
arXiv Detail & Related papers (2021-04-01T10:19:40Z) - Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image at virtually no cost and still successfully fools the tracker.
arXiv Detail & Related papers (2020-12-30T15:05:53Z) - IA-MOT: Instance-Aware Multi-Object Tracking with Motion Consistency [40.354708148590696]
"instance-aware MOT" (IA-MOT) can track multiple objects in either static or moving cameras.
Our proposed method won the first place in Track 3 of the BMTT Challenge in CVPR 2020 workshops.
arXiv Detail & Related papers (2020-06-24T03:53:36Z) - A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.