Adversarial Bounding Boxes Generation (ABBG) Attack against Visual Object Trackers
- URL: http://arxiv.org/abs/2411.17468v1
- Date: Tue, 26 Nov 2024 14:30:36 GMT
- Title: Adversarial Bounding Boxes Generation (ABBG) Attack against Visual Object Trackers
- Authors: Fatemeh Nourilenjan Nokabadi, Jean-Francois Lalonde, Christian Gagné
- Abstract summary: Adversarial perturbations aim to deceive neural networks into predicting inaccurate results.
For visual object trackers, adversarial attacks have been developed to generate perturbations by manipulating the outputs.
We present a novel white-box approach to attack visual object trackers with transformer backbones using only one bounding box.
- Score: 6.6810237114686615
- Abstract: Adversarial perturbations aim to deceive neural networks into predicting inaccurate results. For visual object trackers, adversarial attacks have been developed to generate perturbations by manipulating the outputs. However, transformer trackers predict a specific bounding box instead of an object candidate list, which limits the applicability of many existing attack scenarios. To address this issue, we present a novel white-box approach to attack visual object trackers with transformer backbones using only one bounding box. From the tracker predicted bounding box, we generate a list of adversarial bounding boxes and compute the adversarial loss for those bounding boxes. Experimental results demonstrate that our simple yet effective attack outperforms existing attacks against several robust transformer trackers, including TransT-M, ROMTrack, and MixFormer, on popular benchmark tracking datasets such as GOT-10k, UAV123, and VOT2022STS.
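The two steps described in the abstract, generating a list of adversarial bounding boxes from the tracker's prediction and scoring them with an adversarial loss, can be sketched as below. The jitter scheme, the (cx, cy, w, h) box format, and the mean (1 - IoU) loss are illustrative assumptions, not the paper's exact formulation; in the white-box setting this loss would be differentiated with respect to the input frame to craft the perturbation.

```python
import numpy as np

def generate_adversarial_boxes(box, n=8, shift=0.2, scale=0.2, seed=0):
    """Jitter the predicted (cx, cy, w, h) box into n candidate adversarial
    boxes. The uniform shift/scale sampling is a hypothetical scheme, not
    the paper's exact generation procedure."""
    rng = np.random.default_rng(seed)
    cx, cy, w, h = box
    boxes = []
    for _ in range(n):
        dx, dy = rng.uniform(-shift, shift, 2) * (w, h)   # center offset
        sw, sh = 1.0 + rng.uniform(-scale, scale, 2)      # size rescaling
        boxes.append((cx + dx, cy + dy, w * sw, h * sh))
    return np.array(boxes)

def iou(a, b):
    """Intersection over union of two (cx, cy, w, h) boxes."""
    ax1, ay1, ax2, ay2 = a[0] - a[2] / 2, a[1] - a[3] / 2, a[0] + a[2] / 2, a[1] + a[3] / 2
    bx1, by1, bx2, by2 = b[0] - b[2] / 2, b[1] - b[3] / 2, b[0] + b[2] / 2, b[1] + b[3] / 2
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def adversarial_loss(pred_box, adv_boxes):
    """Mean (1 - IoU) between the prediction and each adversarial box;
    driving this down pulls the tracker toward the wrong boxes."""
    return float(np.mean([1.0 - iou(pred_box, b) for b in adv_boxes]))
```

Because only the predicted bounding box is needed, this sketch applies to transformer trackers that output a single box rather than a candidate list, which is the gap the paper targets.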
Related papers
- TrackPGD: A White-box Attack using Binary Masks against Robust Transformer Trackers [6.115755665318123]
Object trackers with transformer backbones have achieved robust performance on visual object tracking datasets.
Due to the backbone differences, the adversarial white-box attacks proposed for object tracking are not transferable to all types of trackers.
We are proposing a novel white-box attack named TrackPGD, which relies on the predicted object binary mask to attack the robust transformer trackers.
arXiv Detail & Related papers (2024-07-04T14:02:12Z) - Reproducibility Study on Adversarial Attacks Against Robust Transformer Trackers [18.615714086028632]
New transformer networks have been integrated into object tracking pipelines and have demonstrated strong performance on the latest benchmarks.
This paper focuses on understanding how transformer trackers behave under adversarial attacks and how different attacks perform on tracking datasets as their parameters change.
arXiv Detail & Related papers (2024-06-03T20:13:38Z) - Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection [89.08832589750003]
We propose a Parallel Rectangle Flip Attack (PRFA) via random search to avoid sub-optimal detection near the attacked region.
Our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.
arXiv Detail & Related papers (2022-01-22T06:00:17Z) - ByteTrack: Multi-Object Tracking by Associating Every Detection Box [51.93588012109943]
Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos.
Most methods obtain identities by associating detection boxes whose scores are higher than a threshold.
We present a simple, effective and generic association method, called BYTE, tracking BY associaTing every detection box instead of only the high score ones.
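The two-stage idea behind BYTE, associating tracks with high-score detections first and then giving leftover tracks a second chance against low-score ones, can be sketched as follows. The greedy nearest-IoU matching here is a simplified stand-in for the Hungarian assignment used in practice, and the thresholds are illustrative assumptions.

```python
def iou_xyxy(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def byte_associate(tracks, dets, scores, score_thresh=0.6, iou_min=0.3):
    """Two-stage association in the spirit of BYTE: match tracks to
    high-score detections first, then try the remaining tracks against
    the low-score detections instead of discarding them."""
    high = [i for i, s in enumerate(scores) if s >= score_thresh]
    low = [i for i, s in enumerate(scores) if s < score_thresh]
    matches, used = [], set()
    unmatched = list(range(len(tracks)))
    for pool in (high, low):
        remaining = []
        for t in unmatched:
            best, best_iou = None, iou_min
            for d in pool:
                if d in used:
                    continue
                v = iou_xyxy(tracks[t], dets[d])
                if v > best_iou:
                    best, best_iou = d, v
            if best is None:
                remaining.append(t)
            else:
                matches.append((t, best))
                used.add(best)
        unmatched = remaining
    return matches, unmatched
```

Keeping the low-score pool is the key design choice: occluded objects often produce low-confidence boxes, and the second pass recovers them instead of dropping the track.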
arXiv Detail & Related papers (2021-10-13T17:01:26Z) - IoU Attack: Towards Temporally Coherent Black-Box Adversarial Attack for Visual Object Tracking [70.14487738649373]
Adversarial attacks arise from the vulnerability of deep neural networks to input samples injected with imperceptible perturbations.
We propose a decision-based black-box attack method for visual object tracking.
We validate the proposed IoU attack on state-of-the-art deep trackers.
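A decision-based black-box search of this kind only needs the tracker's predicted box, not its gradients: candidate noises are kept when they drive the prediction's IoU with the clean prediction down. The sketch below is a minimal single-frame illustration under that assumption; `track_fn`, the Gaussian noise model, and the try budget are hypothetical stand-ins, not the paper's procedure.

```python
import numpy as np

def box_iou(a, b):
    """Intersection over union of two [x1, y1, x2, y2] boxes."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def iou_attack(frame, track_fn, clean_box, noise_scale=8.0, tries=20, seed=0):
    """Decision-based search: among random Gaussian perturbations of the
    frame, keep the one whose predicted box has the lowest IoU with the
    clean prediction. track_fn (frame -> box) is the black-box tracker."""
    rng = np.random.default_rng(seed)
    best_frame, best_iou = frame, 1.0
    for _ in range(tries):
        noisy = np.clip(frame + rng.normal(0.0, noise_scale, frame.shape), 0, 255)
        v = box_iou(track_fn(noisy), clean_box)
        if v < best_iou:
            best_frame, best_iou = noisy, v
    return best_frame, best_iou
```

In the actual attack the search is additionally constrained to keep perturbations temporally coherent across frames; this sketch shows only the IoU-driven acceptance rule.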
arXiv Detail & Related papers (2021-03-27T16:20:32Z) - Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image at virtually no cost and still successfully fools the tracker.
arXiv Detail & Related papers (2020-12-30T15:05:53Z) - Efficient Adversarial Attacks for Visual Object Tracking [73.43180372379594]
We present an end-to-end network FAN (Fast Attack Network) that uses a novel drift loss combined with the embedded feature loss to attack the Siamese network based trackers.
Under a single GPU, FAN is efficient in the training speed and has a strong attack performance.
arXiv Detail & Related papers (2020-08-01T08:47:58Z) - Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises [87.53808756910452]
A cooling-shrinking attack method is proposed to deceive state-of-the-art SiameseRPN-based trackers.
Our method has good transferability and is able to deceive other top-performing trackers such as DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP.
arXiv Detail & Related papers (2020-03-21T07:13:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.