Simultaneous Detection and Tracking with Motion Modelling for Multiple
Object Tracking
- URL: http://arxiv.org/abs/2008.08826v1
- Date: Thu, 20 Aug 2020 08:05:33 GMT
- Title: Simultaneous Detection and Tracking with Motion Modelling for Multiple
Object Tracking
- Authors: ShiJie Sun, Naveed Akhtar, XiangYu Song, HuanSheng Song, Ajmal Mian,
Mubarak Shah
- Abstract summary: We introduce Deep Motion Modeling Network (DMM-Net) that can estimate multiple objects' motion parameters to perform joint detection and association.
DMM-Net achieves a PR-MOTA score of 12.80 at 120+ fps on the popular UA-DETRAC challenge, outperforming prior methods while running orders of magnitude faster.
We also contribute a synthetic large-scale public dataset Omni-MOT for vehicle tracking that provides precise ground-truth annotations.
- Score: 94.24393546459424
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning-based Multiple Object Tracking (MOT) currently relies on
off-the-shelf detectors for tracking-by-detection. This results in deep models
that are detector biased and evaluations that are detector influenced. To
resolve this issue, we introduce Deep Motion Modeling Network (DMM-Net) that
can estimate multiple objects' motion parameters to perform joint detection and
association in an end-to-end manner. DMM-Net models object features over
multiple frames and simultaneously infers object classes, visibility, and their
motion parameters. These outputs are readily used to update the tracklets for
efficient MOT. DMM-Net achieves a PR-MOTA score of 12.80 at 120+ fps on the
popular UA-DETRAC challenge, outperforming prior methods while running orders of
magnitude faster. We also contribute a synthetic large-scale public dataset
Omni-MOT for vehicle tracking that provides precise ground-truth annotations to
eliminate the detector influence in MOT evaluation. This 14M+ frame dataset is
extendable with our public scripts (Dataset
<https://github.com/shijieS/OmniMOTDataset>, Dataset Recorder
<https://github.com/shijieS/OMOTDRecorder>, Omni-MOT Source
<https://github.com/shijieS/DMMN>). We demonstrate the suitability of Omni-MOT
for deep learning with DMM-Net and also make the source code of our network
public.
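The core idea of the abstract, using estimated per-object motion parameters to perform association and tracklet updates rather than relying on a separate matcher, can be illustrated with a minimal sketch. This is not DMM-Net's actual network or its anchor-tube formulation; it is a simplified constant-velocity stand-in, and the box layout, greedy IoU matching, and thresholds are all assumptions for illustration only.

```python
import numpy as np

def predict_box(box, velocity, dt):
    """Propagate an axis-aligned box (x, y, w, h) with a constant-velocity model."""
    x, y, w, h = box
    vx, vy = velocity
    return np.array([x + vx * dt, y + vy * dt, w, h])

def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes; (x, y) is the top-left corner."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    ix1, iy1 = max(ax, bx), max(ay, by)
    ix2, iy2 = min(ax + aw, bx + bw), min(ay + ah, by + bh)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = aw * ah + bw * bh - inter
    return inter / union if union > 0 else 0.0

def update_tracklets(tracklets, detections, dt=1.0, min_iou=0.3):
    """Greedily match each tracklet's motion-predicted box to the best detection,
    update matched tracklets in place, and spawn new tracklets for the rest."""
    unmatched = list(range(len(detections)))
    for t in tracklets:
        pred = predict_box(t["box"], t["velocity"], dt)
        best_j, best_score = None, min_iou
        for j in unmatched:
            score = iou(pred, detections[j])
            if score > best_score:
                best_j, best_score = j, score
        if best_j is not None:
            t["box"] = np.asarray(detections[best_j], float)
            unmatched.remove(best_j)
    # Unmatched detections start new tracklets with zero initial velocity.
    for j in unmatched:
        tracklets.append({"box": np.asarray(detections[j], float),
                          "velocity": np.zeros(2)})
    return tracklets
```

In the paper the motion parameters are regressed by the network over a window of frames; here the velocity is simply assumed given, which is the part this sketch deliberately leaves out.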
Related papers
- MCTrack: A Unified 3D Multi-Object Tracking Framework for Autonomous Driving [10.399817864597347]
This paper introduces MCTrack, a new 3D multi-object tracking method that achieves state-of-the-art (SOTA) performance on the KITTI and nuScenes datasets, among others.
arXiv Detail & Related papers (2024-09-23T11:26:01Z) - MotionTrack: End-to-End Transformer-based Multi-Object Tracking with
LiDAR-Camera Fusion [13.125168307241765]
We propose an end-to-end transformer-based MOT algorithm (MotionTrack) with multi-modality sensor inputs to track objects with multiple classes.
The MotionTrack and its variations achieve better results (AMOTA score at 0.55) on the nuScenes dataset compared with other classical baseline models.
arXiv Detail & Related papers (2023-06-29T15:00:12Z) - TrajectoryFormer: 3D Object Tracking Transformer with Predictive
Trajectory Hypotheses [51.60422927416087]
3D multi-object tracking (MOT) is vital for many applications including autonomous driving vehicles and service robots.
We present TrajectoryFormer, a novel point-cloud-based 3D MOT framework.
arXiv Detail & Related papers (2023-06-09T13:31:50Z) - You Only Need Two Detectors to Achieve Multi-Modal 3D Multi-Object Tracking [9.20064374262956]
The proposed framework can achieve robust tracking by using only a 2D detector and a 3D detector.
It is shown to be more accurate than many state-of-the-art TBD-based multi-modal tracking methods.
arXiv Detail & Related papers (2023-04-18T02:45:18Z) - Minkowski Tracker: A Sparse Spatio-Temporal R-CNN for Joint Object
Detection and Tracking [53.64390261936975]
We present Minkowski Tracker, a sparse spatio-temporal R-CNN that jointly solves the object detection and tracking problems.
Inspired by region-based CNN (R-CNN), we propose to track motion as a second stage of the object detector R-CNN.
We show in large-scale experiments that the overall performance gain of our method is due to four factors.
arXiv Detail & Related papers (2022-08-22T04:47:40Z) - SOMPT22: A Surveillance Oriented Multi-Pedestrian Tracking Dataset [5.962184741057505]
We introduce the SOMPT22 dataset, a new set for multi-person tracking with annotated short videos captured from static cameras mounted on poles 6-8 meters high, positioned for city surveillance.
We analyze MOT trackers classified as one-shot and two-stage with respect to how they use detection and reID networks on this new dataset.
The experimental results on our new dataset indicate that SOTA trackers are still far from high efficiency, and that single-shot trackers are good candidates for unifying fast execution and accuracy with competitive performance.
arXiv Detail & Related papers (2022-08-04T11:09:19Z) - Track to Detect and Segment: An Online Multi-Object Tracker [81.15608245513208]
TraDeS is an online joint detection and tracking model, exploiting tracking clues to assist detection end-to-end.
TraDeS infers object tracking offset by a cost volume, which is used to propagate previous object features.
arXiv Detail & Related papers (2021-03-16T02:34:06Z) - SoDA: Multi-Object Tracking with Soft Data Association [75.39833486073597]
Multi-object tracking (MOT) is a prerequisite for a safe deployment of self-driving cars.
We propose a novel approach to MOT that uses attention to compute track embeddings that encode dependencies between observed objects.
arXiv Detail & Related papers (2020-08-18T03:40:25Z) - Joint Object Detection and Multi-Object Tracking with Graph Neural
Networks [32.1359455541169]
We propose a new instance of joint MOT approach based on Graph Neural Networks (GNNs)
We show the effectiveness of our GNN-based joint MOT approach and show state-of-the-art performance for both detection and MOT tasks.
arXiv Detail & Related papers (2020-06-23T17:07:00Z) - ArTIST: Autoregressive Trajectory Inpainting and Scoring for Tracking [80.02322563402758]
One of the core components in online multiple object tracking (MOT) frameworks is associating new detections with existing tracklets.
We introduce a probabilistic autoregressive generative model to score tracklet proposals by directly measuring the likelihood that a tracklet represents natural motion.
arXiv Detail & Related papers (2020-04-16T06:43:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.