Tracking Small and Fast Moving Objects: A Benchmark
- URL: http://arxiv.org/abs/2209.04284v1
- Date: Fri, 9 Sep 2022 13:14:44 GMT
- Title: Tracking Small and Fast Moving Objects: A Benchmark
- Authors: Zhewen Zhang, Fuliang Wu, Yuming Qiu, Jingdong Liang, Shuiwang Li
- Abstract summary: We present TSFMO, a benchmark for Tracking Small and Fast Moving Objects.
To the best of our knowledge, TSFMO is the first benchmark dedicated to tracking small and fast moving objects, especially connected to sports.
To encourage future research, we propose a novel tracker, S-KeepTrack, which surpasses all 20 evaluated approaches.
- Score: 0.1679937788852769
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With more and more large-scale datasets available for training, visual
tracking has made great progress in recent years. However, current research in
the field mainly focuses on tracking generic objects. In this paper, we present
TSFMO, a benchmark for \textbf{T}racking \textbf{S}mall and \textbf{F}ast
\textbf{M}oving \textbf{O}bjects. This benchmark aims to encourage research into
developing novel and accurate methods for this challenging task in particular.
TSFMO consists of 250 sequences with about 50k frames in total. Each frame in
these sequences is carefully and manually annotated with a bounding box. To the
best of our knowledge, TSFMO is the first benchmark dedicated to tracking small
and fast moving objects, especially connected to sports. To understand how
existing methods perform and to provide comparison for future research on
TSFMO, we extensively evaluate 20 state-of-the-art trackers on the benchmark.
The evaluation results show that more effort is required to improve the
tracking of small and fast moving objects. Moreover, to encourage future research,
we propose a novel tracker, S-KeepTrack, which surpasses all 20 evaluated
approaches. By releasing TSFMO, we expect to facilitate future research and
applications of tracking small and fast moving objects. The TSFMO and
evaluation results as well as S-KeepTrack are available at
\url{https://github.com/CodeOfGithub/S-KeepTrack}.
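Single-object tracking benchmarks of this kind typically score a tracker by the per-frame bounding-box overlap (IoU) between its prediction and the manual annotation, then report the fraction of frames above an overlap threshold. The abstract does not specify TSFMO's exact protocol, so the following is a minimal sketch of an OTB-style success metric, assuming (x, y, w, h) box format and a one-pass evaluation:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    # Width/height of the intersection rectangle (clamped at zero).
    ix = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(preds, gts, threshold=0.5):
    """Fraction of frames whose predicted box overlaps ground truth above threshold."""
    overlaps = [iou(p, g) for p, g in zip(preds, gts)]
    return sum(o >= threshold for o in overlaps) / len(overlaps)
```

Sweeping `threshold` from 0 to 1 and plotting `success_rate` at each value yields the usual success plot, whose area under the curve ranks trackers.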
Related papers
- Tracking Reflected Objects: A Benchmark [12.770787846444406]
We introduce TRO, a benchmark specifically for Tracking Reflected Objects.
TRO includes 200 sequences with around 70,000 frames, each carefully annotated with bounding boxes.
To provide a stronger baseline, we propose a new tracker, HiP-HaTrack, which uses hierarchical features to improve performance.
arXiv Detail & Related papers (2024-07-07T02:22:45Z) - Tracking with Human-Intent Reasoning [64.69229729784008]
This work proposes a new tracking task -- Instruction Tracking.
It involves providing implicit tracking instructions that require the trackers to perform tracking automatically in video frames.
TrackGPT is capable of performing complex reasoning-based tracking.
arXiv Detail & Related papers (2023-12-29T03:22:18Z) - Dense Optical Tracking: Connecting the Dots [82.79642869586587]
DOT is a novel, simple and efficient method for solving the problem of point tracking in a video.
We show that DOT is significantly more accurate than current optical flow techniques, outperforms sophisticated "universal trackers" like OmniMotion, and is on par with, or better than, the best point tracking algorithms like CoTracker.
arXiv Detail & Related papers (2023-12-01T18:59:59Z) - Iterative Scale-Up ExpansionIoU and Deep Features Association for Multi-Object Tracking in Sports [26.33239898091364]
We propose a novel online and robust multi-object tracking approach named deep ExpansionIoU (Deep-EIoU) for sports scenarios.
Unlike conventional methods, we abandon the use of the Kalman filter and leverage the iterative scale-up ExpansionIoU and deep features for robust tracking in sports scenarios.
Our proposed method demonstrates remarkable effectiveness in tracking irregular motion objects, achieving a score of 77.2% on the SportsMOT dataset and 85.4% on the SoccerNet-Tracking dataset.
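As described in the abstract, ExpansionIoU enlarges bounding boxes before computing IoU, so that fast-moving objects whose boxes barely overlap between frames can still be associated. The paper's exact formulation is not given here, so this is a hedged sketch; the expansion factor `e` and the symmetric per-side enlargement are assumptions for illustration:

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x, y, w, h)."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    ix = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    iy = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def expand(box, e):
    """Enlarge a (x, y, w, h) box by a fraction e on each side, keeping its center."""
    x, y, w, h = box
    return (x - e * w, y - e * h, w * (1 + 2 * e), h * (1 + 2 * e))

def expansion_iou(a, b, e=0.5):
    """IoU computed on expanded boxes (hypothetical default e)."""
    return iou(expand(a, e), expand(b, e))
```

Two disjoint boxes with zero plain IoU can still produce a positive ExpansionIoU, which gives the association step a usable similarity signal under large inter-frame motion.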
arXiv Detail & Related papers (2023-06-22T17:47:08Z) - STMTrack: Template-free Visual Tracking with Space-time Memory Networks [42.06375415765325]
Existing trackers with template updating mechanisms rely on time-consuming numerical optimization and complex hand-designed strategies to achieve competitive performance.
We propose a novel tracking framework built on top of a space-time memory network that is competent to make full use of historical information related to the target.
Specifically, a novel memory mechanism is introduced, which stores the historical information of the target to guide the tracker to focus on the most informative regions in the current frame.
arXiv Detail & Related papers (2021-04-01T08:10:56Z) - Probabilistic Tracklet Scoring and Inpainting for Multiple Object Tracking [83.75789829291475]
We introduce a probabilistic autoregressive motion model to score tracklet proposals.
This is achieved by training our model to learn the underlying distribution of natural tracklets.
Our experiments demonstrate the superiority of our approach at tracking objects in challenging sequences.
arXiv Detail & Related papers (2020-12-03T23:59:27Z) - SFTrack++: A Fast Learnable Spectral Segmentation Approach for Space-Time Consistent Tracking [6.294759639481189]
We propose an object tracking method, SFTrack++, that learns to preserve the tracked object consistency over space and time dimensions.
We test our method, SFTrack++, on five tracking benchmarks: OTB, UAV, NFS, GOT-10k, and TrackingNet, using five top trackers as input.
arXiv Detail & Related papers (2020-11-27T17:15:20Z) - Transparent Object Tracking Benchmark [58.19532269423211]
Transparent Object Tracking Benchmark consists of 225 videos (86K frames) from 15 diverse transparent object categories.
To the best of our knowledge, TOTB is the first benchmark dedicated to transparent object tracking.
To encourage future research, we introduce a novel tracker, named TransATOM, which leverages transparency features for tracking.
arXiv Detail & Related papers (2020-11-21T21:39:43Z) - TAO: A Large-Scale Benchmark for Tracking Any Object [95.87310116010185]
Tracking Any Object dataset consists of 2,907 high resolution videos, captured in diverse environments, which are half a minute long on average.
We ask annotators to label objects that move at any point in the video, and give names to them post factum.
Our vocabulary is both significantly larger and qualitatively different from existing tracking datasets.
arXiv Detail & Related papers (2020-05-20T21:07:28Z) - Tracking Objects as Points [83.9217787335878]
We present a simultaneous detection and tracking algorithm that is simpler, faster, and more accurate than the state of the art.
Our tracker, CenterTrack, applies a detection model to a pair of images and detections from the prior frame.
CenterTrack is simple, online (no peeking into the future), and real-time.
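CenterTrack represents objects as center points and links detections across a pair of frames. The actual method learns to predict inter-frame offsets; as a simplified stand-in, the sketch below uses greedy nearest-center association, which conveys the point-based linking idea but is not the paper's learned mechanism (the `max_dist` gate is an assumption):

```python
import math

def greedy_center_match(prev, curr, max_dist=50.0):
    """Greedily associate current detections to previous ones by center distance.

    prev, curr: lists of (cx, cy) detection centers.
    Returns a list of (curr_index, prev_index) matches.
    """
    matches, used = [], set()
    for i, (cx, cy) in enumerate(curr):
        best, best_d = None, max_dist
        for j, (px, py) in enumerate(prev):
            if j in used:
                continue
            d = math.hypot(cx - px, cy - py)
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            matches.append((i, best))
            used.add(best)  # each previous detection is matched at most once
    return matches
```

Detections with no previous center within `max_dist` start new tracks, which keeps the scheme online with no look-ahead, matching the "no peeking into the future" property.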
arXiv Detail & Related papers (2020-04-02T17:58:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.