ARKitTrack: A New Diverse Dataset for Tracking Using Mobile RGB-D Data
- URL: http://arxiv.org/abs/2303.13885v1
- Date: Fri, 24 Mar 2023 09:51:13 GMT
- Title: ARKitTrack: A New Diverse Dataset for Tracking Using Mobile RGB-D Data
- Authors: Haojie Zhao and Junsong Chen and Lijun Wang and Huchuan Lu
- Abstract summary: We propose a new RGB-D tracking dataset covering both static and dynamic scenes, captured with the consumer-grade LiDAR scanners built into Apple's iPhone and iPad.
ARKitTrack contains 300 RGB-D sequences, 455 targets, and 229.7K video frames in total.
In-depth empirical analysis has verified that the ARKitTrack dataset can significantly facilitate RGB-D tracking and that the proposed baseline method compares favorably against state-of-the-art trackers.
- Score: 75.73063721067608
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Compared with traditional RGB-only visual tracking, few datasets have been
constructed for RGB-D tracking. In this paper, we propose ARKitTrack, a new
RGB-D tracking dataset for both static and dynamic scenes captured by the
consumer-grade LiDAR scanners built into Apple's iPhone and iPad. ARKitTrack
contains 300 RGB-D sequences, 455 targets, and 229.7K video frames in total.
Along with the bounding box annotations and frame-level attributes, we also
annotate this dataset with 123.9K pixel-level target masks. In addition, the
camera intrinsics and camera pose of each frame are provided for future
developments. To demonstrate the potential usefulness of this dataset, we
further present a unified baseline for both box-level and pixel-level tracking,
which integrates RGB features with bird's-eye-view representations to better
explore cross-modality 3D geometry. In-depth empirical analysis has verified
that the ARKitTrack dataset can significantly facilitate RGB-D tracking and
that the proposed baseline method compares favorably against state-of-the-art
trackers. The code and dataset are available at https://arkittrack.github.io.
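The per-frame camera intrinsics and poses mentioned in the abstract are what make cross-modality 3D geometry (e.g. bird's-eye-view representations) possible: each depth pixel can be lifted into a 3D world point. The sketch below illustrates the standard pinhole back-projection involved; the intrinsic matrix and pose values are illustrative placeholders, not taken from the ARKitTrack dataset, and the function name `unproject` is our own.

```python
import numpy as np

def unproject(u, v, depth, K, T_world_cam):
    """Lift pixel (u, v) with metric depth into world coordinates.

    K is the 3x3 camera intrinsic matrix; T_world_cam is the 4x4
    camera-to-world pose, as provided per frame by RGB-D datasets
    that export intrinsics and poses.
    """
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    # Back-project to camera coordinates using the pinhole model.
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    p_cam = np.array([x, y, depth, 1.0])  # homogeneous point
    # Transform into world coordinates with the camera pose.
    return (T_world_cam @ p_cam)[:3]

# Illustrative intrinsics (focal length 600 px, principal point at image center).
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
T = np.eye(4)  # identity pose: camera frame coincides with world frame

point = unproject(320.0, 240.0, 2.0, K, T)
print(point)  # a pixel at the principal point lies on the optical axis
```

Accumulating such points over an image and binning them by their ground-plane coordinates is one common way to form the kind of bird's-eye-view representation the baseline builds on.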
Related papers
- ViDSOD-100: A New Dataset and a Baseline Model for RGB-D Video Salient Object Detection [51.16181295385818]
We first collect an annotated RGB-D video salient object detection (ViDSOD-100) dataset, which contains 100 videos with a total of 9,362 frames.
All frames in each video are manually annotated with high-quality saliency annotations.
We propose a new baseline model, named attentive triple-fusion network (ATF-Net) for RGB-D salient object detection.
arXiv Detail & Related papers (2024-06-18T12:09:43Z) - RGB-Sonar Tracking Benchmark and Spatial Cross-Attention Transformer Tracker [4.235252053339947]
This paper introduces a new challenging RGB-Sonar (RGB-S) tracking task.
It investigates how to achieve efficient tracking of an underwater target through the interaction of RGB and sonar modalities.
arXiv Detail & Related papers (2024-06-11T12:01:11Z) - CRSOT: Cross-Resolution Object Tracking using Unaligned Frame and Event Cameras [43.699819213559515]
Existing datasets for RGB-DVS tracking are collected with the DVS346 camera, whose resolution ($346 \times 260$) is too low for practical applications.
We build the first unaligned frame-event dataset CRSOT collected with a specially built data acquisition system.
We propose a novel unaligned object tracking framework that can realize robust tracking even using the loosely aligned RGB-Event data.
arXiv Detail & Related papers (2024-01-05T14:20:22Z) - DIVOTrack: A Novel Dataset and Baseline Method for Cross-View Multi-Object Tracking in DIVerse Open Scenes [74.64897845999677]
We introduce a new cross-view multi-object tracking dataset for DIVerse Open scenes with densely tracked pedestrians.
Our DIVOTrack has fifteen distinct scenarios and 953 cross-view tracks, surpassing all cross-view multi-object tracking datasets currently available.
Furthermore, we provide a novel baseline cross-view tracking method with a unified joint detection and cross-view tracking framework named CrossMOT.
arXiv Detail & Related papers (2023-02-15T14:10:42Z) - Revisiting Color-Event based Tracking: A Unified Network, Dataset, and Metric [53.88188265943762]
We propose a single-stage backbone network for Color-Event Unified Tracking (CEUTrack), which achieves the above functions simultaneously.
Our proposed CEUTrack is simple, effective, and efficient, which achieves over 75 FPS and new SOTA performance.
arXiv Detail & Related papers (2022-11-20T16:01:31Z) - RGBD1K: A Large-scale Dataset and Benchmark for RGB-D Object Tracking [30.448658049744775]
Given a limited amount of annotated RGB-D tracking data, most state-of-the-art RGB-D trackers are simple extensions of high-performance RGB-only trackers.
To address the dataset deficiency issue, a new RGB-D dataset named RGBD1K is released in this paper.
arXiv Detail & Related papers (2022-08-21T03:07:36Z) - Visible-Thermal UAV Tracking: A Large-Scale Benchmark and New Baseline [80.13652104204691]
In this paper, we construct a large-scale benchmark with high diversity for visible-thermal UAV tracking (VTUAV).
We provide a coarse-to-fine attribute annotation, where frame-level attributes are provided to exploit the potential of challenge-specific trackers.
In addition, we design a new RGB-T baseline, named Hierarchical Multi-modal Fusion Tracker (HMFT), which fuses RGB-T data in various levels.
arXiv Detail & Related papers (2022-04-08T15:22:33Z) - RGBD Object Tracking: An In-depth Review [89.96221353160831]
We first review RGBD object trackers from different perspectives, including RGBD fusion, depth usage, and tracking framework.
We benchmark a representative set of RGBD trackers, and give detailed analyses based on their performances.
arXiv Detail & Related papers (2022-03-26T18:53:51Z) - Visual Object Tracking on Multi-modal RGB-D Videos: A Review [16.098468526632473]
The goal of this review is to summarize the relevant knowledge in the research field of RGB-D tracking.
To be specific, we will generalize the related RGB-D tracking benchmarking datasets as well as the corresponding performance measurements.
arXiv Detail & Related papers (2022-01-23T08:02:49Z) - DepthTrack: Unveiling the Power of RGBD Tracking [29.457114656913944]
This work introduces a new RGBD tracking dataset, DepthTrack.
It has twice as many sequences (200) and scene types (40) as the largest existing dataset.
The average sequence length (1473), the number of deformable objects (16), and the number of tracking attributes (15) have also been increased.
arXiv Detail & Related papers (2021-08-31T16:42:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.