Multi-object Tracking via End-to-end Tracklet Searching and Ranking
- URL: http://arxiv.org/abs/2003.02795v1
- Date: Wed, 4 Mar 2020 18:46:01 GMT
- Title: Multi-object Tracking via End-to-end Tracklet Searching and Ranking
- Authors: Tao Hu, Lichao Huang, Han Shen
- Abstract summary: We propose a novel method for optimizing tracklet consistency by introducing an online, end-to-end tracklet search training process.
With a sequence model as the tracklet appearance encoder, our tracker achieves a remarkable performance gain over the conventional tracklet association baseline.
Our method also achieves state-of-the-art results on the MOT15-17 challenge benchmarks using public detections in the online setting.
- Score: 11.46601533985954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works in multiple object tracking use a sequence model to
calculate the similarity score between detections and previous tracklets. However,
the forced exposure to ground-truth in the training stage leads to the
training-inference discrepancy problem, i.e., exposure bias, where association
error could accumulate in the inference and make the trajectories drift. In
this paper, we propose a novel method for optimizing tracklet consistency,
which directly takes the prediction errors into account by introducing an
online, end-to-end tracklet search training process. Notably, our method
directly optimizes the whole tracklet score instead of pairwise affinities. With
a sequence model as the tracklet appearance encoder, our tracker achieves a
remarkable performance gain over the conventional tracklet association baseline.
Our method also achieves state-of-the-art results on the MOT15-17 challenge
benchmarks using public detections in the online setting.
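The abstract's key distinction, scoring a whole tracklet rather than a pairwise detection-to-track affinity, can be sketched as follows. This is a minimal illustration, not the paper's method: the function names are hypothetical, and a simple exponential moving average stands in for the learned sequence encoder.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def pairwise_affinity(det_feat, last_track_feat):
    """Conventional association: compare a detection only against the
    most recent feature of a tracklet."""
    return cosine(det_feat, last_track_feat)

def tracklet_score(track_feats, det_feat, alpha=0.2):
    """Tracklet-level scoring: summarize the whole feature history (an
    exponential moving average stands in for the paper's learned sequence
    encoder) and score the candidate against that summary."""
    encoded = list(track_feats[0])
    for feat in track_feats[1:]:
        encoded = [(1 - alpha) * e + alpha * f for e, f in zip(encoded, feat)]
    return cosine(encoded, det_feat)
```

When the most recent frame is an outlier (e.g. occlusion), the history-based score stays close to the track's true appearance while the pairwise affinity collapses, which is one intuition for why whole-tracklet optimization resists drift.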
Related papers
- Unsupervised Learning of Accurate Siamese Tracking [68.58171095173056]
We present a novel unsupervised tracking framework, in which we can learn temporal correspondence both on the classification branch and regression branch.
Our tracker outperforms preceding unsupervised methods by a substantial margin, performing on par with supervised methods on large-scale datasets such as TrackingNet and LaSOT.
arXiv Detail & Related papers (2022-04-04T13:39:43Z) - Active Learning for Deep Visual Tracking [51.5063680734122]
Convolutional neural networks (CNNs) have been successfully applied to the single target tracking task in recent years.
In this paper, we propose an active learning method for deep visual tracking, which selects and annotates the unlabeled samples to train the deep CNNs model.
Under the guidance of active learning, the tracker based on the trained deep CNNs model can achieve competitive tracking performance while reducing the labeling cost.
arXiv Detail & Related papers (2021-10-17T11:47:56Z) - On the detection-to-track association for online multi-object tracking [30.883165972525347]
We propose a hybrid track association (HTA) algorithm that models the historical appearance distances of a track with an incremental Gaussian mixture model (IGMM).
Experimental results on three MOT benchmarks confirm that HTA effectively improves the target identification performance with a small compromise to the tracking speed.
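The idea of modeling a track's historical appearance distances incrementally can be illustrated with a single-Gaussian stand-in for the paper's IGMM, updated online with Welford's algorithm. The class and gating rule here are illustrative assumptions, not the HTA implementation.

```python
class IncrementalGaussian:
    """Running mean/variance (Welford's algorithm) of a track's historical
    appearance distances; a single-component stand-in for an IGMM."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0  # sum of squared deviations from the running mean

    def update(self, x):
        """Fold one new appearance distance into the running statistics."""
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def var(self):
        # Population variance; fall back to a wide prior before 2 samples.
        return self.m2 / self.n if self.n > 1 else 1.0

    def is_consistent(self, x, k=3.0):
        """Gate a new detection-to-track appearance distance at k sigmas."""
        return abs(x - self.mean) <= k * self.var ** 0.5
```

A distance far outside the track's historical distribution is rejected, which is the kind of per-track adaptive gating a mixture model generalizes.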
arXiv Detail & Related papers (2021-07-01T14:44:12Z) - Probabilistic Tracklet Scoring and Inpainting for Multiple Object Tracking [83.75789829291475]
We introduce a probabilistic autoregressive motion model to score tracklet proposals.
This is achieved by training our model to learn the underlying distribution of natural tracklets.
Our experiments demonstrate the superiority of our approach at tracking objects in challenging sequences.
arXiv Detail & Related papers (2020-12-03T23:59:27Z) - Tracklets Predicting Based Adaptive Graph Tracking [51.352829280902114]
We present an accurate, end-to-end learning framework for multi-object tracking, namely TPAGT.
It re-extracts the features of the tracklets in the current frame based on motion prediction, which is the key to solving the problem of inconsistent features.
arXiv Detail & Related papers (2020-10-18T16:16:49Z) - Accurate Bounding-box Regression with Distance-IoU Loss for Visual Tracking [42.81230953342163]
The proposed method achieves competitive tracking accuracy when compared to state-of-the-art trackers.
The target estimation part is trained to predict the DIoU score between the target ground-truth bounding-box and the estimated bounding-box.
We introduce a classification part that is trained online and optimized with a Conjugate-Gradient-based strategy to guarantee real-time tracking speed.
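The DIoU score this entry trains against has a closed form: IoU minus the squared center distance normalized by the squared diagonal of the smallest enclosing box. A minimal sketch, with `(x1, y1, x2, y2)` box coordinates assumed:

```python
import math

def diou(box_a, box_b):
    """Distance-IoU between two axis-aligned boxes (x1, y1, x2, y2).

    DIoU = IoU - d^2 / c^2, where d is the distance between box centers
    and c is the diagonal of the smallest box enclosing both.
    """
    # Intersection area
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter)
    # Squared distance between box centers
    cax, cay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cbx, cby = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    d2 = (cax - cbx) ** 2 + (cay - cby) ** 2
    # Squared diagonal of the smallest enclosing box
    ex1, ey1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    ex2, ey2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return iou - d2 / c2
```

Unlike plain IoU, DIoU stays informative for non-overlapping boxes (it goes negative as the centers move apart), which is what makes it useful as a regression target.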
arXiv Detail & Related papers (2020-07-03T11:57:54Z) - ArTIST: Autoregressive Trajectory Inpainting and Scoring for Tracking [80.02322563402758]
One of the core components in online multiple object tracking (MOT) frameworks is associating new detections with existing tracklets.
We introduce a probabilistic autoregressive generative model to score tracklet proposals by directly measuring the likelihood that a tracklet represents natural motion.
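Scoring a tracklet by the likelihood of its motion can be sketched with a toy stand-in for ArTIST's learned autoregressive model: a zero-mean Gaussian on velocity changes, over 1-D frame-to-frame displacements. The function and its parameterization are illustrative assumptions.

```python
import math

def tracklet_log_likelihood(deltas, sigma=1.0):
    """Log-likelihood of a tracklet's motion under a zero-mean Gaussian on
    acceleration (change of frame-to-frame displacement); a toy stand-in
    for a learned autoregressive motion model.

    deltas: list of 1-D frame-to-frame displacements of the tracklet.
    """
    ll = 0.0
    for t in range(1, len(deltas)):
        accel = deltas[t] - deltas[t - 1]  # change of velocity
        ll += -0.5 * (accel / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))
    return ll
```

Smooth, natural motion scores higher than jerky motion, so tracklet proposals whose associations imply implausible jumps can be ranked down.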
arXiv Detail & Related papers (2020-04-16T06:43:11Z) - RetinaTrack: Online Single Stage Joint Detection and Tracking [22.351109024452462]
We focus on the tracking-by-detection paradigm for autonomous driving where both tasks are mission critical.
We propose a conceptually simple and efficient joint model of detection and tracking, called RetinaTrack, which modifies the popular single stage RetinaNet approach.
arXiv Detail & Related papers (2020-03-30T23:46:29Z) - Tracking Road Users using Constraint Programming [79.32806233778511]
We present a constraint programming (CP) approach for the data association phase found in the tracking-by-detection paradigm of the multiple object tracking (MOT) problem.
Our proposed method was tested on a motorized vehicle tracking dataset and produces results that outperform the top methods on the UA-DETRAC benchmark.
arXiv Detail & Related papers (2020-03-10T00:04:32Z)
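The constraint-programming view of data association in the last entry treats the detection-to-track matching as a constrained combinatorial search. A toy sketch, assuming a square cost matrix and a simple distance-gating constraint (both assumptions; real CP solvers prune far more cleverly than this exhaustive loop):

```python
from itertools import permutations

def associate(cost, gate=5.0):
    """Toy data-association step solved by exhaustive search over feasible
    assignments, mimicking how a constraint solver enumerates candidates.

    cost: square matrix where cost[i][j] is the distance between track i
    and detection j; pairs with cost above `gate` are infeasible.
    Returns the best assignment as a tuple (detection index per track),
    or None if no assignment satisfies the gating constraint.
    """
    n = len(cost)
    best, best_assign = float("inf"), None
    for perm in permutations(range(n)):
        if any(cost[i][perm[i]] > gate for i in range(n)):
            continue  # constraint violated: skip this assignment
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best:
            best, best_assign = total, perm
    return best_assign
```

Note that the gating constraint can change the answer: the globally cheapest pairing may be infeasible, forcing a different assignment, which is exactly the kind of interaction constraint programming expresses naturally.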
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.