MotionTrack: Learning Robust Short-term and Long-term Motions for
Multi-Object Tracking
- URL: http://arxiv.org/abs/2303.10404v2
- Date: Mon, 17 Apr 2023 03:39:11 GMT
- Title: MotionTrack: Learning Robust Short-term and Long-term Motions for
Multi-Object Tracking
- Authors: Zheng Qin and Sanping Zhou and Le Wang and Jinghai Duan and Gang Hua
and Wei Tang
- Abstract summary: We propose MotionTrack, which learns robust short-term and long-term motions in a unified framework to associate trajectories from a short to long range.
For dense crowds, we design a novel Interaction Module to learn interaction-aware motions from short-term trajectories, which can estimate the complex movement of each target.
For extreme occlusions, we build a novel Refind Module to learn reliable long-term motions from the target's history trajectory, which can link the interrupted trajectory with its corresponding detection.
- Score: 56.92165669843006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The main challenge of Multi-Object Tracking (MOT) lies in maintaining a
continuous trajectory for each target. Existing methods often learn reliable
motion patterns to match the same target between adjacent frames and
discriminative appearance features to re-identify the lost targets after a long
period. However, the reliability of motion prediction and the discriminability
of appearances can be easily hurt by dense crowds and extreme occlusions in the
tracking process. In this paper, we propose a simple yet effective multi-object
tracker, i.e., MotionTrack, which learns robust short-term and long-term
motions in a unified framework to associate trajectories from a short to long
range. For dense crowds, we design a novel Interaction Module to learn
interaction-aware motions from short-term trajectories, which can estimate the
complex movement of each target. For extreme occlusions, we build a novel
Refind Module to learn reliable long-term motions from the target's history
trajectory, which can link the interrupted trajectory with its corresponding
detection. Our Interaction Module and Refind Module are embedded in the
well-known tracking-by-detection paradigm, which can work in tandem to maintain
superior performance. Extensive experimental results on MOT17 and MOT20
datasets demonstrate the superiority of our approach in challenging scenarios,
and it achieves state-of-the-art performance on various MOT metrics.
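To make the tracking-by-detection pipeline described above concrete, here is a minimal Python sketch of a two-stage association loop: a short-term, motion-based match between adjacent frames, followed by a long-term "re-find" pass that tries to link lost tracks to the detections left unmatched. Everything in it is illustrative and assumed rather than taken from the paper: the constant-velocity predictor and the IoU gates stand in for the learned Interaction and Refind modules, and names such as `Track`, `associate`, and `step` are hypothetical.

```python
# Hypothetical sketch of a two-stage tracking-by-detection association loop.
# Short-term: motion prediction + matching between adjacent frames.
# Long-term: "re-find" lost tracks among the detections left unmatched.
# Constant velocity and IoU gating are placeholders, not the paper's modules.

import numpy as np
from scipy.optimize import linear_sum_assignment


def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


class Track:
    def __init__(self, box, track_id):
        self.box = np.asarray(box, dtype=float)
        self.velocity = np.zeros(4)  # naive constant-velocity state
        self.id = track_id

    def predict(self):
        # Short-term motion prediction; a learned, interaction-aware
        # predictor would replace this naive step.
        return self.box + self.velocity

    def update(self, box):
        box = np.asarray(box, dtype=float)
        self.velocity = box - self.box
        self.box = box


def associate(tracks, detections, iou_thresh):
    """Hungarian matching between predicted track boxes and detections."""
    if not tracks or not detections:
        return [], list(range(len(detections)))
    cost = np.array([[1.0 - iou(t.predict(), d) for d in detections]
                     for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols)
               if cost[r, c] <= 1.0 - iou_thresh]
    matched = {d for _, d in matches}
    unmatched = [d for d in range(len(detections)) if d not in matched]
    return matches, unmatched


def step(active_tracks, lost_tracks, detections):
    """One frame: short-term association, then long-term re-find."""
    # Stage 1: short-term association of active tracks with detections.
    matches, unmatched = associate(active_tracks, detections, iou_thresh=0.3)
    for t, d in matches:
        active_tracks[t].update(detections[d])

    # Stage 2: long-term re-find, linking lost tracks to the leftovers
    # under a looser gate.
    leftovers = [detections[d] for d in unmatched]
    refound, _ = associate(lost_tracks, leftovers, iou_thresh=0.1)
    for t, d in refound:
        lost_tracks[t].update(leftovers[d])

    # Track lifecycle management (birth, loss, termination) is omitted.
    return matches, refound
```

In the actual method, stage 1 would use interaction-aware motion predictions for dense crowds and stage 2 would score candidate links using long-term motion learned from each target's history trajectory; the sketch only preserves the overall control flow.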
Related papers
- ETTrack: Enhanced Temporal Motion Predictor for Multi-Object Tracking [4.250337979548885]
We propose a motion-based MOT approach with an enhanced temporal motion predictor, ETTrack.
Specifically, the motion predictor integrates a transformer model and a Temporal Convolutional Network (TCN) to capture short-term and long-term motion patterns.
We show ETTrack achieves a competitive performance compared with state-of-the-art trackers on DanceTrack and SportsMOT.
arXiv Detail & Related papers (2024-05-24T17:51:33Z) - Single-Shot and Multi-Shot Feature Learning for Multi-Object Tracking [55.13878429987136]
We propose a simple yet effective two-stage feature learning paradigm to jointly learn single-shot and multi-shot features for different targets.
Our method has achieved significant improvements on MOT17 and MOT20 datasets while reaching state-of-the-art performance on DanceTrack dataset.
arXiv Detail & Related papers (2023-11-17T08:17:49Z) - TrajectoryFormer: 3D Object Tracking Transformer with Predictive
Trajectory Hypotheses [51.60422927416087]
3D multi-object tracking (MOT) is vital for many applications including autonomous driving vehicles and service robots.
We present TrajectoryFormer, a novel point-cloud-based 3D MOT framework.
arXiv Detail & Related papers (2023-06-09T13:31:50Z) - MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as DanceTrack and SportsMOT.
arXiv Detail & Related papers (2023-06-05T04:24:11Z) - MONCE Tracking Metrics: a comprehensive quantitative performance
evaluation methodology for object tracking [0.0]
We propose a suite of MONCE (Multi-Object Non-Contiguous Entities) image tracking metrics that provide both objective tracking model performance benchmarks as well as diagnostic insight for driving tracking model development.
arXiv Detail & Related papers (2022-04-11T17:32:03Z) - Distractor-Aware Fast Tracking via Dynamic Convolutions and MOT
Philosophy [63.91005999481061]
A practical long-term tracker typically contains three key properties, i.e., an efficient model design, an effective global re-detection strategy, and a robust distractor-awareness mechanism.
We propose a two-task tracking framework (named DMTrack) to achieve distractor-aware fast tracking via dynamic convolutions (d-convs) and the multiple object tracking (MOT) philosophy.
Our tracker achieves state-of-the-art performance on the LaSOT, OxUvA, TLP, VOT2018LT and VOT2019LT benchmarks and runs in real-time (3x faster).
arXiv Detail & Related papers (2021-04-25T00:59:53Z) - Probabilistic Tracklet Scoring and Inpainting for Multiple Object
Tracking [83.75789829291475]
We introduce a probabilistic autoregressive motion model to score tracklet proposals.
This is achieved by training our model to learn the underlying distribution of natural tracklets.
Our experiments demonstrate the superiority of our approach at tracking objects in challenging sequences.
arXiv Detail & Related papers (2020-12-03T23:59:27Z) - MAT: Motion-Aware Multi-Object Tracking [9.098793914779161]
In this paper, we propose Motion-Aware Tracker (MAT), focusing more on various motion patterns of different objects.
Experiments on the challenging MOT16 and MOT17 benchmarks demonstrate that our MAT approach achieves superior performance by a large margin.
arXiv Detail & Related papers (2020-09-10T11:51:33Z)