Tracking Objects as Points
- URL: http://arxiv.org/abs/2004.01177v2
- Date: Fri, 21 Aug 2020 16:28:05 GMT
- Title: Tracking Objects as Points
- Authors: Xingyi Zhou, Vladlen Koltun, Philipp Krähenbühl
- Abstract summary: We present a simultaneous detection and tracking algorithm that is simpler, faster, and more accurate than the state of the art.
Our tracker, CenterTrack, applies a detection model to a pair of images and detections from the prior frame.
CenterTrack is simple, online (no peeking into the future), and real-time.
- Score: 83.9217787335878
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Tracking has traditionally been the art of following interest points through
space and time. This changed with the rise of powerful deep networks. Nowadays,
tracking is dominated by pipelines that perform object detection followed by
temporal association, also known as tracking-by-detection. In this paper, we
present a simultaneous detection and tracking algorithm that is simpler,
faster, and more accurate than the state of the art. Our tracker, CenterTrack,
applies a detection model to a pair of images and detections from the prior
frame. Given this minimal input, CenterTrack localizes objects and predicts
their associations with the previous frame. That's it. CenterTrack is simple,
online (no peeking into the future), and real-time. It achieves 67.3% MOTA on
the MOT17 challenge at 22 FPS and 89.4% MOTA on the KITTI tracking benchmark at
15 FPS, setting a new state of the art on both datasets. CenterTrack is easily
extended to monocular 3D tracking by regressing additional 3D attributes. Using
monocular video input, it achieves 28.3% AMOTA@0.2 on the newly released
nuScenes 3D tracking benchmark, substantially outperforming the monocular
baseline on this benchmark while running at 28 FPS.
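The association step described in the abstract lends itself to a compact illustration: each detection carries a predicted offset to its center in the previous frame, and detections are greedily matched to the closest unclaimed prior-frame object. The sketch below is a minimal interpretation of that idea; the dictionary fields, the function name, and the size-scaled match radius are illustrative assumptions rather than the released CenterTrack code.

```python
import math

def greedy_associate(dets, prev_dets, size_scale=1.0):
    """Greedy center-offset association in the spirit of CenterTrack (sketch).

    dets:      current-frame detections, each a dict with 'center' (x, y),
               'offset' (dx, dy) pointing back to the previous frame,
               'score', and 'wh' (width, height).
    prev_dets: previous-frame detections, each with 'center' and 'track_id'.
    Returns `dets` with a 'track_id' assigned (new ids for unmatched objects).
    """
    next_id = 1 + max((p['track_id'] for p in prev_dets), default=0)
    claimed = set()
    # Higher-confidence detections get to pick their match first.
    for det in sorted(dets, key=lambda d: d['score'], reverse=True):
        # Where this object's center is predicted to have been last frame.
        px = det['center'][0] + det['offset'][0]
        py = det['center'][1] + det['offset'][1]
        # Match radius proportional to object size (an assumed heuristic).
        best, best_dist = None, size_scale * max(det['wh'])
        for i, prev in enumerate(prev_dets):
            if i in claimed:
                continue
            dist = math.hypot(px - prev['center'][0], py - prev['center'][1])
            if dist < best_dist:
                best, best_dist = i, dist
        if best is not None:
            claimed.add(best)
            det['track_id'] = prev_dets[best]['track_id']
        else:
            # No prior object nearby: start a new track.
            det['track_id'] = next_id
            next_id += 1
    return dets
```

The actual model produces centers, sizes, and offsets as heatmap and regression outputs of a single network; only the association post-processing is sketched here.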
Related papers
- SeqTrack3D: Exploring Sequence Information for Robust 3D Point Cloud Tracking [26.405519771454102]
We introduce a Sequence-to-Sequence tracking paradigm and a tracker named SeqTrack3D to capture target motion across continuous frames.
This novel method ensures robust tracking by leveraging location priors from historical boxes, even in scenes with sparse points.
Experiments conducted on large-scale datasets show that SeqTrack3D achieves new state-of-the-art performances.
arXiv Detail & Related papers (2024-02-26T02:14:54Z)
- STTracker: Spatio-Temporal Tracker for 3D Single Object Tracking [11.901758708579642]
3D single object tracking with point clouds is a critical task in 3D computer vision.
Previous methods usually take the last two frames as input, using the template point cloud from the previous frame and the search-area point cloud from the current frame.
arXiv Detail & Related papers (2023-06-30T07:25:11Z)
- ByteTrackV2: 2D and 3D Multi-Object Tracking by Associating Every Detection Box [81.45219802386444]
Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects across video frames.
We propose a hierarchical data association strategy to mine the true objects in low-score detection boxes.
In 3D scenarios, it is much easier for the tracker to predict object velocities in world coordinates.
arXiv Detail & Related papers (2023-03-27T15:35:21Z)
- VariabilityTrack: Multi-Object Tracking with Variable Speed Object Movement [1.6385815610837167]
Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos.
We propose a variable speed Kalman filter algorithm based on environmental feedback and improve the matching process.
arXiv Detail & Related papers (2022-03-12T12:39:41Z)
- A Lightweight and Detector-free 3D Single Object Tracker on Point Clouds [50.54083964183614]
It is non-trivial to perform accurate target-specific detection since the point cloud of objects in raw LiDAR scans is usually sparse and incomplete.
We propose DMT, a Detector-free Motion prediction based 3D Tracking network that entirely removes the need for complicated 3D detectors.
arXiv Detail & Related papers (2022-03-08T17:49:07Z)
- ByteTrack: Multi-Object Tracking by Associating Every Detection Box [51.93588012109943]
Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos.
Most methods obtain identities by associating detection boxes whose scores are higher than a threshold.
We present a simple, effective and generic association method, called BYTE, tracking BY associaTing every detection box instead of only the high score ones; a schematic sketch of this two-stage association follows the list below.
arXiv Detail & Related papers (2021-10-13T17:01:26Z)
- Multi-object Tracking with Tracked Object Bounding Box Association [18.539658212171062]
The CenterTrack tracking algorithm achieves state-of-the-art tracking performance using a simple detection model and single-frame spatial offsets.
We propose to incorporate a simple tracked-object bounding-box and overlap prediction, based on the current frame, into the CenterTrack algorithm.
arXiv Detail & Related papers (2021-05-17T14:32:47Z)
- Monocular Quasi-Dense 3D Object Tracking [99.51683944057191]
A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving.
We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform.
arXiv Detail & Related papers (2021-03-12T15:30:02Z)
- Quasi-Dense Similarity Learning for Multiple Object Tracking [82.93471035675299]
We present Quasi-Dense Similarity Learning, which densely samples hundreds of region proposals on a pair of images for contrastive learning.
We can directly combine this similarity learning with existing detection methods to build Quasi-Dense Tracking (QDTrack).
arXiv Detail & Related papers (2020-06-11T17:57:12Z)
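As referenced in the ByteTrack entry above, BYTE's core idea is a two-stage matching procedure: high-score detections are associated with existing tracks first, and the remaining tracks then get a second chance against the low-score boxes that most trackers discard. The sketch below is a simplified rendition under assumed data structures; the greedy IoU matcher and the concrete thresholds stand in for the Kalman-predicted boxes and Hungarian assignment used in the paper.

```python
def iou(a, b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / (union + 1e-9)

def greedy_match(tracks, dets, thresh):
    """Greedily pair tracks and detections whose IoU exceeds `thresh`."""
    cand = sorted(((iou(t['box'], d['box']), ti, di)
                   for ti, t in enumerate(tracks)
                   for di, d in enumerate(dets)), reverse=True)
    used_t, used_d, pairs = set(), set(), []
    for score, ti, di in cand:
        if score < thresh or ti in used_t or di in used_d:
            continue
        used_t.add(ti)
        used_d.add(di)
        pairs.append((tracks[ti], dets[di]))
    return pairs, used_t

def byte_associate(tracks, dets, high=0.6, low=0.1):
    """Two-stage association in the spirit of BYTE (sketch)."""
    high_dets = [d for d in dets if d['score'] >= high]
    low_dets = [d for d in dets if low <= d['score'] < high]
    # Stage 1: match high-score detections to all existing tracks.
    matches, used_t = greedy_match(tracks, high_dets, thresh=0.3)
    # Stage 2: give leftover tracks a second chance against low-score
    # detections, which often correspond to occluded or blurred objects.
    leftover = [t for i, t in enumerate(tracks) if i not in used_t]
    recovered, _ = greedy_match(leftover, low_dets, thresh=0.5)
    return matches, recovered
```

Boxes are plain (x1, y1, x2, y2) tuples here; in practice the track boxes would be motion-compensated predictions rather than last-seen detections.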
This list is automatically generated from the titles and abstracts of the papers in this site.