Accurate Anchor Free Tracking
- URL: http://arxiv.org/abs/2006.07560v1
- Date: Sat, 13 Jun 2020 04:42:32 GMT
- Title: Accurate Anchor Free Tracking
- Authors: Shengyun Peng and Yunxuan Yu and Kun Wang and Lei He
- Abstract summary: This paper develops the first Anchor Free Siamese Network (AFSN).
A target object is defined by a bounding box center, tracking offset, and object size.
We compare AFSN to the best anchor-based trackers with source codes available for each benchmark.
- Score: 9.784386353369483
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual object tracking is an important application of computer vision.
Recently, Siamese-based trackers have achieved good accuracy. However, most
Siamese-based trackers are not efficient, as they exhaustively search potential
object locations to define anchors and then classify each anchor (i.e., a
bounding box). This paper develops the first Anchor Free Siamese Network
(AFSN). Specifically, a target object is defined by a bounding box center,
tracking offset, and object size. All three are regressed by the Siamese
network with no additional classification or region proposal, and are computed
once per frame. We also tune the stride and receptive field of the Siamese
network, and further perform ablation experiments to quantitatively illustrate
the effectiveness of our AFSN. We evaluate AFSN on the five most commonly used
benchmarks and compare it to the best anchor-based trackers with source code
available for each benchmark. AFSN is 3-425 times faster than these best
anchor-based trackers. AFSN is also 5.97% to 12.4% more accurate across all
metrics on OTB2015, VOT2015, VOT2016, VOT2018 and TrackingNet,
except that SiamRPN++ is 4% better than AFSN in terms of Expected Average
Overlap (EAO) on VOT2018 (but SiamRPN++ is 3.9 times slower).
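As a concrete illustration of the anchor-free formulation above, here is a minimal sketch in PyTorch; the channel counts, feature sizes, and depth-wise cross-correlation are illustrative assumptions, not the authors' exact configuration. The head regresses all three outputs (center heatmap, tracking offset, object size) in a single pass per frame, with no anchors, classification branch, or region proposals:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AnchorFreeSiameseHead(nn.Module):
    """Sketch: regress center heatmap, tracking offset, and size in one pass."""
    def __init__(self, channels=256):
        super().__init__()
        # One lightweight branch per output; no anchors or region proposals.
        self.center = nn.Conv2d(channels, 1, kernel_size=3, padding=1)  # center heatmap
        self.offset = nn.Conv2d(channels, 2, kernel_size=3, padding=1)  # sub-stride (dx, dy)
        self.size = nn.Conv2d(channels, 2, kernel_size=3, padding=1)    # (width, height)

    def forward(self, template_feat, search_feat):
        # Depth-wise cross-correlation: the template acts as a per-channel kernel.
        b, c, h, w = search_feat.shape
        kernel = template_feat.reshape(b * c, 1, *template_feat.shape[2:])
        corr = F.conv2d(search_feat.reshape(1, b * c, h, w), kernel, groups=b * c)
        corr = corr.reshape(b, c, *corr.shape[2:])
        return torch.sigmoid(self.center(corr)), self.offset(corr), self.size(corr)

# One forward pass per frame; the box is decoded at the heatmap peak.
head = AnchorFreeSiameseHead()
z = torch.randn(1, 256, 7, 7)    # template (exemplar) features
x = torch.randn(1, 256, 31, 31)  # search-region features
heat, off, size = head(z, x)
iy, ix = divmod(heat.flatten(1).argmax(1).item(), heat.shape[-1])
print(heat.shape, off[0, :, iy, ix], size[0, :, iy, ix])
```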
Related papers
- Predicting the Best of N Visual Trackers [34.93745058337489]
No single tracker remains the best performer across all tracking attributes and datasets.
To bridge this gap, we predict the "Best of the N Trackers", called the BofN meta-tracker.
We also introduce a frame-level BofN meta-tracker which re-predicts the best performer at regular temporal intervals.
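A minimal sketch of that frame-level selection loop, where `meta_predict` and the `track` interface are hypothetical stand-ins for the paper's learned meta-predictor and its tracker pool:

```python
def run_bofn(trackers, frames, meta_predict, interval=50):
    active = 0  # index of the currently trusted tracker
    boxes = []
    for t, frame in enumerate(frames):
        outputs = [trk.track(frame) for trk in trackers]  # all N trackers run
        if t % interval == 0:
            active = meta_predict(frame, outputs)  # re-predict the best performer
        boxes.append(outputs[active])
    return boxes

# Toy usage with stand-in trackers returning fixed boxes.
class Stub:
    def __init__(self, box): self.box = box
    def track(self, frame): return self.box

trackers = [Stub((0, 0, 10, 10)), Stub((5, 5, 20, 20))]
print(run_bofn(trackers, frames=range(4), meta_predict=lambda f, o: 1, interval=2))
```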
arXiv Detail & Related papers (2024-07-22T15:17:09Z)
- Visual Object Tracking with Discriminative Filters and Siamese Networks: A Survey and Outlook [97.27199633649991]
Discriminative Correlation Filters (DCFs) and deep Siamese Networks (SNs) have emerged as dominating tracking paradigms.
This survey presents a systematic and thorough review of more than 90 DCFs and Siamese trackers, based on results in nine tracking benchmarks.
arXiv Detail & Related papers (2021-12-06T07:57:10Z)
- SiamAPN++: Siamese Attentional Aggregation Network for Real-Time UAV Tracking [16.78336740951222]
A novel attentional Siamese tracker (SiamAPN++) is proposed for real-time UAV tracking.
SiamAPN++ achieves promising tracking results with real-time speed.
arXiv Detail & Related papers (2021-06-16T14:28:57Z)
- Two stages for visual object tracking [13.851408246039515]
Siamese-based trackers have achieved promising performance on visual object tracking tasks.
In this paper, we propose a novel two-stage tracker: detection and segmentation.
Our approach achieves state-of-the-art results, with an EAO of 52.6% on VOT2016, 51.3% on VOT2018, and 39.0% on VOT2019.
arXiv Detail & Related papers (2021-04-28T09:11:33Z)
- SiamCorners: Siamese Corner Networks for Visual Tracking [39.43480791427431]
We propose a simple yet effective anchor-free tracker (named Siamese corner networks, SiamCorners).
By tracking a target as a pair of corners, we avoid the need to design anchor boxes.
SiamCorners achieves a 53.7% AUC on NFS30 and a 61.4% AUC on UAV123, while still running at 42 frames per second (FPS).
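A minimal sketch (NumPy; an assumption based on the summary, not the SiamCorners decoder) of recovering a box from two corner heatmaps:

```python
import numpy as np

def decode_corners(tl_heat, br_heat):
    # The peak of each heatmap gives one corner; together they define the box,
    # so no anchor boxes need to be designed.
    ty, tx = np.unravel_index(np.argmax(tl_heat), tl_heat.shape)
    by, bx = np.unravel_index(np.argmax(br_heat), br_heat.shape)
    return int(tx), int(ty), int(bx), int(by)  # (x1, y1, x2, y2)

tl = np.zeros((25, 25)); tl[5, 4] = 1.0    # toy top-left peak
br = np.zeros((25, 25)); br[20, 18] = 1.0  # toy bottom-right peak
print(decode_corners(tl, br))  # (4, 5, 18, 20)
```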
arXiv Detail & Related papers (2021-04-15T08:23:30Z)
- CRACT: Cascaded Regression-Align-Classification for Robust Visual Tracking [97.84109669027225]
We introduce an improved proposal refinement module, Cascaded Regression-Align-Classification (CRAC).
CRAC yields new state-of-the-art performances on many benchmarks.
In experiments on seven benchmarks including OTB-2015, UAV123, NfS, VOT-2018, TrackingNet, GOT-10k and LaSOT, our CRACT exhibits very promising results in comparison with state-of-the-art competitors.
arXiv Detail & Related papers (2020-11-25T02:18:33Z)
- Graph Attention Tracking [76.19829750144564]
We propose a simple target-aware Siamese graph attention network for general object tracking.
Experiments on challenging benchmarks including GOT-10k, UAV123, OTB-100 and LaSOT demonstrate that the proposed SiamGAT outperforms many state-of-the-art trackers.
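A minimal sketch of similarity-weighted aggregation between template and search nodes, in the spirit of graph attention; the shapes and the plain dot-product weighting are assumptions, not SiamGAT's exact formulation:

```python
import torch
import torch.nn.functional as F

def graph_attention(template, search):
    # template: (Nt, C) node features; search: (Ns, C) node features.
    scores = search @ template.t()        # (Ns, Nt) pairwise similarity
    weights = F.softmax(scores, dim=-1)   # attend over template nodes
    return search + weights @ template    # aggregate template info into search nodes

fused = graph_attention(torch.randn(36, 256), torch.randn(625, 256))
print(fused.shape)  # torch.Size([625, 256])
```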
arXiv Detail & Related papers (2020-11-23T04:26:45Z)
- LaSOT: A High-quality Large-scale Single Object Tracking Benchmark [67.96196486540497]
We present LaSOT, a high-quality Large-scale Single Object Tracking benchmark.
LaSOT contains a diverse selection of 85 object classes, and offers 1,550 sequences totaling more than 3.87 million frames.
Each video frame is carefully and manually annotated with a bounding box. This makes LaSOT, to our knowledge, the largest densely annotated tracking benchmark.
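For scale, 3.87 million frames over 1,550 sequences averages roughly 2,500 frames per sequence, or about 83 seconds of video at 30 FPS.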
arXiv Detail & Related papers (2020-09-08T00:31:56Z)
- Ocean: Object-aware Anchor-free Tracking [75.29960101993379]
The regression network in anchor-based methods is only trained on the positive anchor boxes.
We propose a novel object-aware anchor-free network to address this issue.
Our anchor-free tracker achieves state-of-the-art performance on five benchmarks.
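A minimal sketch (NumPy; an assumption motivated by the summary, not Ocean's code) of the anchor-free regression target this contrasts with: every location inside the ground-truth box regresses distances to its four sides, so supervision is not limited to a few high-overlap ("positive") anchor boxes:

```python
import numpy as np

def regression_targets(h, w, box):
    # (left, top, right, bottom) distances from every feature-map location
    # to the ground-truth box sides; all locations inside the box are supervised.
    x1, y1, x2, y2 = box
    ys, xs = np.mgrid[0:h, 0:w]
    t = np.stack([xs - x1, ys - y1, x2 - xs, y2 - ys], axis=-1).astype(float)
    inside = t.min(axis=-1) > 0
    return t, inside

t, mask = regression_targets(8, 8, (2, 2, 6, 6))
print(int(mask.sum()), "supervised locations")  # 9: every pixel strictly inside
```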
arXiv Detail & Related papers (2020-06-18T17:51:39Z)
- Siamese Box Adaptive Network for Visual Tracking [100.46025199664642]
We propose a simple yet effective visual tracking framework (named Siamese Box Adaptive Network, SiamBAN).
SiamBAN directly classifies objects and regresses their bounding boxes in a unified fully convolutional network (FCN).
SiamBAN achieves state-of-the-art performance and runs at 40 FPS, confirming its effectiveness and efficiency.
arXiv Detail & Related papers (2020-03-15T05:58:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.