Chained-Tracker: Chaining Paired Attentive Regression Results for
End-to-End Joint Multiple-Object Detection and Tracking
- URL: http://arxiv.org/abs/2007.14557v1
- Date: Wed, 29 Jul 2020 02:38:49 GMT
- Title: Chained-Tracker: Chaining Paired Attentive Regression Results for
End-to-End Joint Multiple-Object Detection and Tracking
- Authors: Jinlong Peng, Changan Wang, Fangbin Wan, Yang Wu, Yabiao Wang, Ying
Tai, Chengjie Wang, Jilin Li, Feiyue Huang, Yanwei Fu
- Abstract summary: We propose a simple online model named Chained-Tracker (CTracker), which naturally integrates all three subtasks into an end-to-end solution.
Its two major novelties, the chained structure and the paired attentive regression, make CTracker simple, fast and effective.
- Score: 102.31092931373232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing Multiple-Object Tracking (MOT) methods either follow the
tracking-by-detection paradigm to perform object detection, feature extraction
and data association separately, or integrate two of the three subtasks to
form a partially end-to-end solution. Going beyond these sub-optimal
frameworks, we propose a simple online model named Chained-Tracker (CTracker),
which naturally integrates all three subtasks into an end-to-end solution
(the first as far as we know). It chains paired bounding box regression
results estimated from overlapping nodes, where each node covers two
adjacent frames. The paired regression is made attentive by object-attention
(brought by a detection module) and identity-attention (ensured by an ID
verification module). The two major novelties, the chained structure and the
paired attentive regression, make CTracker simple, fast and effective,
setting new MOTA records on the MOT16 and MOT17 challenge datasets (67.6 and
66.6, respectively) without relying on any extra training data. The source
code of CTracker can be found at: github.com/pjl1995/CTracker.
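A minimal sketch of the chaining step described above, for illustration only: each node yields paired boxes (one per adjacent frame), and identities are propagated by matching the second box of the previous node against the first box of the next node, since both refer to the shared frame. The function names (`iou`, `chain_nodes`) and the plain greedy IoU matching are our own assumptions; the actual implementation in the linked repository may differ.

```python
# Illustrative sketch (not the authors' code): chaining paired box
# regressions from overlapping two-frame nodes via greedy IoU matching.
from itertools import count

_new_id = count()  # identity generator for newly appearing targets


def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)


def chain_nodes(prev_pairs, prev_ids, next_pairs, iou_thresh=0.5):
    """prev_pairs: [(box_t, box_t1)] from the node over frames (t, t+1).
    prev_ids:   one identity per pair in prev_pairs.
    next_pairs: [(box_t1, box_t2)] from the node over frames (t+1, t+2).
    Returns one identity per pair in next_pairs."""
    used, next_ids = set(), []
    for nb, _ in next_pairs:                      # nb lives in the shared frame t+1
        best_j, best_iou = -1, iou_thresh
        for j, (_, pb) in enumerate(prev_pairs):  # pb also lives in frame t+1
            o = iou(pb, nb)
            if j not in used and o > best_iou:
                best_j, best_iou = j, o
        if best_j >= 0:
            used.add(best_j)
            next_ids.append(prev_ids[best_j])     # identity flows along the chain
        else:
            next_ids.append(next(_new_id))        # unmatched pair starts a new track
    return next_ids
```

Applying `chain_nodes` node by node over a video links the per-node box pairs into full trajectories, which is what lets the chained structure replace a separate data-association stage.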
Related papers
- ADA-Track: End-to-End Multi-Camera 3D Multi-Object Tracking with Alternating Detection and Association [15.161640917854363]
We introduce ADA-Track, a novel end-to-end framework for 3D MOT from multi-view cameras.
We also introduce a learnable data association module based on edge-augmented cross-attention.
We integrate this association module into the decoder layer of a DETR-based 3D detector.
arXiv Detail & Related papers (2024-05-14T19:02:33Z)
- Unified Sequence-to-Sequence Learning for Single- and Multi-Modal Visual Object Tracking [64.28025685503376]
SeqTrack casts visual tracking as a sequence generation task, forecasting object bounding boxes in an autoregressive manner.
SeqTrackv2 integrates a unified interface for auxiliary modalities and a set of task-prompt tokens to specify the task.
This sequence learning paradigm not only simplifies the tracking framework, but also showcases superior performance across 14 challenging benchmarks.
arXiv Detail & Related papers (2023-04-27T17:56:29Z)
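As a rough illustration of the sequence-generation view described in the SeqTrack entry above: box coordinates are quantized into discrete tokens and decoded one at a time, conditioned on the visual features. The bin count and the `decoder` callable below are assumptions for the sketch, not SeqTrack's actual interface.

```python
# Illustrative sketch (not the SeqTrack code): a bounding box as a short
# token sequence generated autoregressively.
N_BINS = 1000  # assumed quantization resolution for normalized coordinates


def box_to_tokens(box):
    """Map a normalized (x, y, w, h) box in [0, 1] to four discrete tokens."""
    return [min(N_BINS - 1, int(v * N_BINS)) for v in box]


def tokens_to_box(tokens):
    """Inverse mapping: token indices back to normalized coordinates."""
    return [(t + 0.5) / N_BINS for t in tokens]


def generate_box(decoder, visual_features):
    """Decode the four coordinate tokens one by one.
    `decoder(visual_features, prefix)` stands in for a transformer decoder
    that returns the most likely next token given the prefix so far."""
    prefix = []
    for _ in range(4):  # x, y, w, h: one token each
        prefix.append(decoder(visual_features, prefix))
    return tokens_to_box(prefix)
```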
- You Only Need Two Detectors to Achieve Multi-Modal 3D Multi-Object Tracking [9.20064374262956]
The proposed framework can achieve robust tracking by using only a 2D detector and a 3D detector.
It is shown to be more accurate than many state-of-the-art tracking-by-detection (TBD) based multi-modal tracking methods.
arXiv Detail & Related papers (2023-04-18T02:45:18Z)
- ByteTrackV2: 2D and 3D Multi-Object Tracking by Associating Every Detection Box [81.45219802386444]
Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects across video frames.
We propose a hierarchical data association strategy to mine the true objects in low-score detection boxes.
In 3D scenarios, it is much easier for the tracker to predict object velocities in world coordinates.
arXiv Detail & Related papers (2023-03-27T15:35:21Z)
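A minimal sketch of the hierarchical association idea mentioned above, under the assumption that detections are split by a confidence threshold: tracks are first matched against high-score boxes, and only the tracks left unmatched are then matched against low-score boxes. The `match` helper (e.g. IoU-based Hungarian matching) and the threshold value are placeholders, not the authors' API.

```python
# Illustrative two-stage association (our own simplification; not the
# ByteTrackV2 code).


def associate(tracks, detections, match, high_thresh=0.6):
    """tracks:     active tracks with predicted boxes for the current frame.
    detections: list of (box, score) pairs from the detector.
    match(tracks, boxes) -> (pairs, unmatched_tracks, unmatched_boxes);
    an assumed helper, e.g. IoU-based Hungarian matching."""
    high = [b for b, s in detections if s >= high_thresh]
    low = [b for b, s in detections if s < high_thresh]

    # Stage 1: match every track against the high-score detections.
    pairs, leftover_tracks, unmatched_high = match(tracks, high)

    # Stage 2: mine true objects (e.g. occluded ones) from the low-score
    # boxes by matching them only against the tracks stage 1 left over.
    pairs_low, still_unmatched, _ = match(leftover_tracks, low)

    return pairs + pairs_low, still_unmatched, unmatched_high
```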
- 3DMODT: Attention-Guided Affinities for Joint Detection & Tracking in 3D Point Clouds [95.54285993019843]
We propose a method for joint detection and tracking of multiple objects in 3D point clouds.
Our model exploits temporal information, employing multiple frames to detect objects and track them within a single network.
arXiv Detail & Related papers (2022-11-01T20:59:38Z)
- Unified Transformer Tracker for Object Tracking [58.65901124158068]
We present the Unified Transformer Tracker (UTT) to address tracking problems in different scenarios with one paradigm.
A track transformer is developed in UTT to track the target in both Single Object Tracking (SOT) and Multiple Object Tracking (MOT).
arXiv Detail & Related papers (2022-03-29T01:38:49Z)
- Exploring Simple 3D Multi-Object Tracking for Autonomous Driving [10.921208239968827]
3D multi-object tracking in LiDAR point clouds is a key ingredient for self-driving vehicles.
Existing methods are predominantly based on the tracking-by-detection pipeline and inevitably require a matching step for the detection association.
We present SimTrack to simplify the hand-crafted tracking paradigm by proposing an end-to-end trainable model for joint detection and tracking from raw point clouds.
arXiv Detail & Related papers (2021-08-23T17:59:22Z)
- Global Correlation Network: End-to-End Joint Multi-Object Detection and Tracking [2.749204052800622]
We present a novel network, called Global Correlation Network (GCNet), that realizes joint multi-object detection and tracking in an end-to-end way.
GCNet introduces a global correlation layer to regress the absolute size and coordinates of bounding boxes instead of predicting offsets.
The detection and tracking pipeline of GCNet is conceptually simple: it needs no non-maximum suppression, data association, or other complicated tracking strategies.
arXiv Detail & Related papers (2021-03-23T13:16:42Z)
- Track to Detect and Segment: An Online Multi-Object Tracker [81.15608245513208]
TraDeS is an online joint detection and tracking model, exploiting tracking clues to assist detection end-to-end.
TraDeS infers object tracking offset by a cost volume, which is used to propagate previous object features.
arXiv Detail & Related papers (2021-03-16T02:34:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.