VastTrack: Vast Category Visual Object Tracking
- URL: http://arxiv.org/abs/2403.03493v1
- Date: Wed, 6 Mar 2024 06:39:43 GMT
- Title: VastTrack: Vast Category Visual Object Tracking
- Authors: Liang Peng, Junyuan Gao, Xinran Liu, Weihong Li, Shaohua Dong, Zhipeng
Zhang, Heng Fan, Libo Zhang
- Abstract summary: We introduce a novel benchmark, dubbed VastTrack, towards facilitating the development of more general visual tracking.
VastTrack covers target objects from 2,115 classes, largely surpassing object categories of existing popular benchmarks.
VastTrack offers 50,610 sequences with 4.2 million frames, making it to date the largest benchmark in terms of the number of videos.
- Score: 39.61339408722333
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce a novel benchmark, dubbed VastTrack, towards
facilitating the development of more general visual tracking via encompassing
abundant classes and videos. VastTrack possesses several attractive properties:
(1) Vast Object Category. In particular, it covers target objects from 2,115
classes, largely surpassing object categories of existing popular benchmarks
(e.g., GOT-10k with 563 classes and LaSOT with 70 categories). With such vast
object classes, we expect trackers to learn more general object tracking. (2)
Larger Scale. Compared with current benchmarks, VastTrack offers 50,610
sequences with 4.2 million frames, making it to date the largest benchmark in
terms of the number of videos; it can thus support the training of even more
powerful visual trackers in the deep learning era. (3) Rich Annotation. Besides
conventional
bounding box annotations, VastTrack also provides linguistic descriptions for
the videos. The rich annotations of VastTrack enable the development of both
vision-only and vision-language tracking. To ensure precise annotation, all
videos are manually labeled with multiple rounds of careful inspection and
refinement. To understand the performance of existing trackers and to provide
baselines for future comparison, we extensively assess 25 representative
trackers. The results, unsurprisingly, show significant performance drops
compared to those on current datasets, owing to the lack of abundant categories
and videos from diverse scenarios for training; more effort is required to
improve general tracking. Our VastTrack and all the evaluation results will be
made publicly available at https://github.com/HengLan/VastTrack.
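To make the evaluation protocol above concrete, below is a minimal sketch of the one-pass evaluation (OPE) success metric that single-object tracking benchmarks such as LaSOT and GOT-10k report, the kind of score on which the drops mentioned above would typically be measured. It assumes the comma-separated x,y,w,h per-frame ground-truth layout those benchmarks use; VastTrack's official evaluation toolkit and file format are not described in the abstract, so treat this as an illustration under those assumptions, not the paper's implementation.

```python
import numpy as np

def iou_xywh(pred, gt):
    """IoU between paired boxes in (x, y, w, h) format, each of shape (N, 4)."""
    px1, py1 = pred[:, 0], pred[:, 1]
    px2, py2 = px1 + pred[:, 2], py1 + pred[:, 3]
    gx1, gy1 = gt[:, 0], gt[:, 1]
    gx2, gy2 = gx1 + gt[:, 2], gy1 + gt[:, 3]
    # Intersection rectangle, clipped to zero when the boxes do not overlap.
    iw = np.clip(np.minimum(px2, gx2) - np.maximum(px1, gx1), 0, None)
    ih = np.clip(np.minimum(py2, gy2) - np.maximum(py1, gy1), 0, None)
    inter = iw * ih
    union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
    return inter / np.maximum(union, 1e-12)

def success_auc(pred, gt, n_thresholds=21):
    """Success curve and its AUC for one sequence under the OPE protocol."""
    overlaps = iou_xywh(pred, gt)
    thresholds = np.linspace(0.0, 1.0, n_thresholds)
    # Fraction of frames whose overlap exceeds each IoU threshold.
    curve = np.array([(overlaps > t).mean() for t in thresholds])
    return curve, curve.mean()

# Example: ground truth commonly ships as one "x,y,w,h" line per frame.
gt = np.array([[10, 20, 50, 80], [12, 22, 50, 80]], dtype=float)
pred = np.array([[11, 19, 48, 82], [30, 40, 50, 80]], dtype=float)
curve, auc = success_auc(pred, gt)
print(f"Success AUC: {auc:.3f}")
```

Averaging the per-sequence AUC values yields the single success score commonly reported in tracking papers; precision curves based on center-location error are computed analogously.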
Related papers
- Tracking Reflected Objects: A Benchmark [12.770787846444406]
We introduce TRO, a benchmark specifically for Tracking Reflected Objects.
TRO includes 200 sequences with around 70,000 frames, each carefully annotated with bounding boxes.
To provide a stronger baseline, we propose a new tracker, HiP-HaTrack, which uses hierarchical features to improve performance.
arXiv Detail & Related papers (2024-07-07T02:22:45Z)
- Tracking with Human-Intent Reasoning [64.69229729784008]
This work proposes a new tracking task -- Instruction Tracking.
It involves providing implicit tracking instructions that require the trackers to perform tracking automatically in video frames.
The proposed TrackGPT is capable of performing complex reasoning-based tracking.
arXiv Detail & Related papers (2023-12-29T03:22:18Z)
- OVTrack: Open-Vocabulary Multiple Object Tracking [64.73379741435255]
OVTrack is an open-vocabulary tracker capable of tracking arbitrary object classes.
It sets a new state-of-the-art on the large-scale, large-vocabulary TAO benchmark.
arXiv Detail & Related papers (2023-04-17T16:20:05Z)
- Cannot See the Forest for the Trees: Aggregating Multiple Viewpoints to Better Classify Objects in Videos [36.28269135795851]
We present a set classifier that improves the accuracy of classifying tracklets by aggregating information from the multiple viewpoints contained in a tracklet.
By simply attaching our method to QDTrack on top of ResNet-101, we achieve a new state of the art: 19.9% and 15.7% TrackAP_50 on the TAO validation and test sets.
arXiv Detail & Related papers (2022-06-05T07:51:58Z)
- DanceTrack: Multi-Object Tracking in Uniform Appearance and Diverse Motion [56.1428110894411]
We propose a large-scale dataset for multi-human tracking, where humans have similar appearance, diverse motion and extreme articulation.
As the dataset contains mostly group dancing videos, we name it "DanceTrack".
We benchmark several state-of-the-art trackers on our dataset and observe a significant performance drop on DanceTrack when compared against existing benchmarks.
arXiv Detail & Related papers (2021-11-29T16:49:06Z)
- LaSOT: A High-quality Large-scale Single Object Tracking Benchmark [67.96196486540497]
We present LaSOT, a high-quality Large-scale Single Object Tracking benchmark.
LaSOT contains a diverse selection of 85 object classes and offers 1,550 sequences totaling more than 3.87 million frames.
Each video frame is carefully and manually annotated with a bounding box. This makes LaSOT, to our knowledge, the largest densely annotated tracking benchmark.
arXiv Detail & Related papers (2020-09-08T00:31:56Z)
- TAO: A Large-Scale Benchmark for Tracking Any Object [95.87310116010185]
The Tracking Any Object (TAO) dataset consists of 2,907 high-resolution videos, captured in diverse environments, which are half a minute long on average.
We ask annotators to label objects that move at any point in the video, and give names to them post factum.
Our vocabulary is both significantly larger and qualitatively different from existing tracking datasets.
arXiv Detail & Related papers (2020-05-20T21:07:28Z)