Transparent Object Tracking Benchmark
- URL: http://arxiv.org/abs/2011.10875v2
- Date: Sun, 1 Aug 2021 21:14:37 GMT
- Title: Transparent Object Tracking Benchmark
- Authors: Heng Fan, Halady Akhilesha Miththanthaya, Harshit, Siranjiv Ramana
Rajan, Xiaoqiong Liu, Zhilin Zou, Yuewei Lin, Haibin Ling
- Abstract summary: Transparent Object Tracking Benchmark consists of 225 videos (86K frames) from 15 diverse transparent object categories.
To the best of our knowledge, TOTB is the first benchmark dedicated to transparent object tracking.
To encourage future research, we introduce a novel tracker, named TransATOM, which leverages transparency features for tracking.
- Score: 58.19532269423211
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual tracking has achieved considerable progress in recent years. However,
current research in the field mainly focuses on tracking of opaque objects,
while little attention is paid to transparent object tracking. In this paper,
we make the first attempt at exploring this problem by proposing a Transparent
Object Tracking Benchmark (TOTB). Specifically, TOTB consists of 225 videos
(86K frames) from 15 diverse transparent object categories. Each sequence is
manually labeled with axis-aligned bounding boxes. To the best of our
knowledge, TOTB is the first benchmark dedicated to transparent object
tracking. In order to understand how existing trackers perform and to provide
comparison for future research on TOTB, we extensively evaluate 25
state-of-the-art tracking algorithms. The evaluation results show that more
effort is needed to improve transparent object tracking. Besides, we observe
some nontrivial findings from the evaluation that are at odds with common
beliefs in opaque object tracking. For example, we find that deeper features
do not always yield improvements. Moreover, to encourage future
research, we introduce a novel tracker, named TransATOM, which leverages
transparency features for tracking and surpasses all 25 evaluated approaches by
a large margin. By releasing TOTB, we expect to facilitate future research and
application of transparent object tracking in both academia and industry.
The TOTB and evaluation results as well as TransATOM are available at
https://hengfan2010.github.io/projects/TOTB.
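Since every sequence in TOTB is annotated with axis-aligned bounding boxes, tracker evaluation of this kind is typically built on intersection-over-union (IoU) between predicted and ground-truth boxes. The sketch below is illustrative only, assuming `(x, y, w, h)` box coordinates; it is not taken from the paper's evaluation code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # Coordinates of the intersection rectangle.
    ix1 = max(ax, bx)
    iy1 = max(ay, by)
    ix2 = min(ax + aw, bx + bw)
    iy2 = min(ay + ah, by + bh)
    # Clamp to zero when the boxes do not overlap.
    iw = max(0.0, ix2 - ix1)
    ih = max(0.0, iy2 - iy1)
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```

Benchmarks in this family commonly sweep an IoU threshold from 0 to 1 and report the area under the resulting success curve, which is one way the 25 evaluated trackers could be compared on a common footing.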
Related papers
- Tracking Reflected Objects: A Benchmark [12.770787846444406]
We introduce TRO, a benchmark specifically for Tracking Reflected Objects.
TRO includes 200 sequences with around 70,000 frames, each carefully annotated with bounding boxes.
To provide a stronger baseline, we propose a new tracker, HiP-HaTrack, which uses hierarchical features to improve performance.
arXiv Detail & Related papers (2024-07-07T02:22:45Z)
- A New Dataset and a Distractor-Aware Architecture for Transparent Object Tracking [34.08943612955157]
Performance of modern trackers degrades substantially on transparent objects compared to opaque objects.
We propose the first transparent object tracking training dataset Trans2k that consists of over 2k sequences with 104,343 images overall.
We also present a new distractor-aware transparent object tracker (DiTra) that treats localization accuracy and target identification as separate tasks.
arXiv Detail & Related papers (2024-01-08T13:04:28Z) - Tracking with Human-Intent Reasoning [64.69229729784008]
This work proposes a new tracking task -- Instruction Tracking.
It involves providing implicit tracking instructions that require the trackers to perform tracking automatically in video frames.
TrackGPT is capable of performing complex reasoning-based tracking.
arXiv Detail & Related papers (2023-12-29T03:22:18Z) - Transparent Object Tracking with Enhanced Fusion Module [56.403878717170784]
We propose a new tracker architecture that uses our fusion techniques to achieve superior results for transparent object tracking.
Our results and code will be made publicly available at https://github.com/kalyan05TOTEM.
arXiv Detail & Related papers (2023-09-13T03:52:09Z) - OmniTracker: Unifying Object Tracking by Tracking-with-Detection [119.51012668709502]
OmniTracker is presented to resolve all the tracking tasks with a fully shared network architecture, model weights, and inference pipeline.
Experiments on 7 tracking datasets, including LaSOT, TrackingNet, DAVIS16-17, MOT17, MOTS20, and YTVIS19, demonstrate that OmniTracker achieves on-par or even better results than both task-specific and unified tracking models.
arXiv Detail & Related papers (2023-03-21T17:59:57Z) - Trans2k: Unlocking the Power of Deep Models for Transparent Object
Tracking [41.039837388154]
We propose the first transparent object tracking training dataset Trans2k that consists of over 2k sequences with 104,343 images overall.
We quantify domain-specific attributes and render the dataset containing visual attributes and tracking situations not covered in the existing object training datasets.
The dataset and the rendering engine will be publicly released to unlock the power of modern learning-based trackers and foster new designs in transparent object tracking.
arXiv Detail & Related papers (2022-10-07T10:08:13Z) - Tracking Small and Fast Moving Objects: A Benchmark [0.1679937788852769]
We present TSFMO, a benchmark for Tracking Small and Fast Moving Objects.
To the best of our knowledge, TSFMO is the first benchmark dedicated to tracking small and fast moving objects, especially connected to sports.
To encourage future research, we propose a novel tracker, S-KeepTrack, which surpasses all 20 evaluated approaches.
arXiv Detail & Related papers (2022-09-09T13:14:44Z) - MOTChallenge: A Benchmark for Single-Camera Multiple Target Tracking [72.76685780516371]
We present MOTChallenge, a benchmark for single-camera Multiple Object Tracking (MOT).
The benchmark is focused on multiple people tracking, since pedestrians are by far the most studied object in the tracking community.
We provide a categorization of state-of-the-art trackers and a broad error analysis.
arXiv Detail & Related papers (2020-10-15T06:52:16Z) - TAO: A Large-Scale Benchmark for Tracking Any Object [95.87310116010185]
The Tracking Any Object dataset consists of 2,907 high-resolution videos, captured in diverse environments, which are half a minute long on average.
We ask annotators to label objects that move at any point in the video, and give names to them post factum.
Our vocabulary is both significantly larger and qualitatively different from existing tracking datasets.
arXiv Detail & Related papers (2020-05-20T21:07:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.