Trans2k: Unlocking the Power of Deep Models for Transparent Object
Tracking
- URL: http://arxiv.org/abs/2210.03436v1
- Date: Fri, 7 Oct 2022 10:08:13 GMT
- Title: Trans2k: Unlocking the Power of Deep Models for Transparent Object
Tracking
- Authors: Alan Lukezic and Ziga Trojer and Jiri Matas and Matej Kristan
- Abstract summary: We propose the first transparent object tracking training dataset Trans2k that consists of over 2k sequences with 104,343 images overall.
We quantify domain-specific attributes and render the dataset containing visual attributes and tracking situations not covered in the existing object training datasets.
The dataset and the rendering engine will be publicly released to unlock the power of modern learning-based trackers and foster new designs in transparent object tracking.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Visual object tracking has focused predominantly on opaque objects,
while transparent object tracking has received very little attention. Motivated by the
uniqueness of transparent objects in that their appearance is directly affected
by the background, the first dedicated evaluation dataset has emerged recently.
We contribute to this effort by proposing the first transparent object tracking
training dataset Trans2k that consists of over 2k sequences with 104,343 images
overall, annotated by bounding boxes and segmentation masks. Noting that
transparent objects can be realistically rendered by modern renderers, we
quantify domain-specific attributes and render the dataset containing visual
attributes and tracking situations not covered in the existing object training
datasets. We observe a consistent performance boost (up to 16%) across a
diverse set of modern tracking architectures when trained using Trans2k, and
show insights not previously possible due to the lack of appropriate training
sets. The dataset and the rendering engine will be publicly released to unlock
the power of modern learning-based trackers and foster new designs in
transparent object tracking.
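The bounding-box annotations described above are typically scored with overlap-based metrics. As a generic illustration (not code from the paper), the intersection-over-union between a predicted and a ground-truth box in (x, y, width, height) format can be computed as:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x, y, w, h)."""
    ax1, ay1, aw, ah = box_a
    bx1, by1, bw, bh = box_b
    ax2, ay2 = ax1 + aw, ay1 + ah  # bottom-right corner of box_a
    bx2, by2 = bx1 + bw, by1 + bh  # bottom-right corner of box_b
    # Width/height of the intersection rectangle (zero if the boxes are disjoint).
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0
```

For example, two unit boxes offset diagonally by half their size overlap with IoU 1/7; identical boxes score 1.0 and disjoint boxes 0.0.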
Related papers
- A New Dataset and a Distractor-Aware Architecture for Transparent Object Tracking
Performance of modern trackers degrades substantially on transparent objects compared to opaque objects.
We propose the first transparent object tracking training dataset Trans2k that consists of over 2k sequences with 104,343 images overall.
We also present a new distractor-aware transparent object tracker (DiTra) that treats localization accuracy and target identification as separate tasks.
arXiv Detail & Related papers (2024-01-08T13:04:28Z)
- Transparent Object Tracking with Enhanced Fusion Module
We propose a new tracker architecture that uses our fusion techniques to achieve superior results for transparent object tracking.
Our results and code will be made publicly available at https://github.com/kalyan05TOTEM.
arXiv Detail & Related papers (2023-09-13T03:52:09Z)
- TransNet: Transparent Object Manipulation Through Category-Level Pose Estimation
We propose a two-stage pipeline that estimates category-level transparent object pose using localized depth completion and surface normal estimation.
Results show that TransNet achieves improved pose estimation accuracy on transparent objects.
We use TransNet to build an autonomous transparent object manipulation system for robotic pick-and-place and pouring tasks.
arXiv Detail & Related papers (2023-07-23T18:38:42Z)
- Once Detected, Never Lost: Surpassing Human Performance in Offline LiDAR-based 3D Object Detection
This paper aims for high-performance offline LiDAR-based 3D object detection.
We first observe that experienced human annotators annotate objects from a track-centric perspective.
We propose a high-performance offline detector in a track-centric perspective instead of the conventional object-centric perspective.
arXiv Detail & Related papers (2023-04-24T17:59:05Z)
- Seeing Glass: Joint Point Cloud and Depth Completion for Transparent Objects
TranspareNet is a joint point cloud and depth completion method.
It can complete the depth of transparent objects in cluttered and complex scenes.
TranspareNet outperforms existing state-of-the-art depth completion methods on multiple datasets.
arXiv Detail & Related papers (2021-09-30T21:09:09Z)
- Learning to Track with Object Permanence
We introduce an end-to-end trainable approach for joint object detection and tracking.
Our model, trained jointly on synthetic and real data, outperforms the state of the art on the KITTI and MOT17 datasets.
arXiv Detail & Related papers (2021-03-26T04:43:04Z)
- Transparent Object Tracking Benchmark
The Transparent Object Tracking Benchmark (TOTB) consists of 225 videos (86K frames) from 15 diverse transparent object categories.
To the best of our knowledge, TOTB is the first benchmark dedicated to transparent object tracking.
To encourage future research, we introduce a novel tracker, named TransATOM, which leverages transparency features for tracking.
arXiv Detail & Related papers (2020-11-21T21:39:43Z)
- Segmenting Transparent Objects in the Wild
This work proposes a large-scale dataset for transparent object segmentation, named Trans10K, consisting of 10,428 images of real scenarios with careful manual annotations.
To evaluate the effectiveness of Trans10K, we propose a novel boundary-aware segmentation method, termed TransLab, which exploits object boundaries as a cue to improve the segmentation of transparent objects.
arXiv Detail & Related papers (2020-03-31T04:44:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.