Track without Appearance: Learn Box and Tracklet Embedding with Local and Global Motion Patterns for Vehicle Tracking
- URL: http://arxiv.org/abs/2108.06029v1
- Date: Fri, 13 Aug 2021 02:27:09 GMT
- Title: Track without Appearance: Learn Box and Tracklet Embedding with Local and Global Motion Patterns for Vehicle Tracking
- Authors: Gaoang Wang, Renshu Gu, Zuozhu Liu, Weijie Hu, Mingli Song, Jenq-Neng Hwang
- Abstract summary: Vehicle tracking is an essential task in the multi-object tracking (MOT) field.
In this paper, we explore the significance of motion patterns for vehicle tracking without appearance information.
We propose a novel approach that tackles the association issue for long-term tracking by relying exclusively on fully exploited motion information.
- Score: 45.524183249765244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vehicle tracking is an essential task in the multi-object tracking (MOT) field. A distinct characteristic of vehicle tracking is that vehicle trajectories are fairly smooth in both world coordinates and image coordinates, so models that capture motion consistency are highly necessary. However, tracking with standalone motion-based trackers is quite challenging, because targets can easily get lost due to limited information, detection errors, and occlusion. Leveraging appearance information to assist object re-identification can resolve this challenge to some extent, but doing so requires extra computation, and appearance information is sensitive to occlusion as well. In this paper, we explore the significance of motion patterns for vehicle tracking without appearance information. We propose a novel approach that tackles the association issue for long-term tracking by relying exclusively on fully exploited motion information, and we address the tracklet embedding issue with a proposed reconstruct-to-embed strategy based on deep graph convolutional neural networks (GCNs). Comprehensive experiments on the KITTI-car tracking dataset and the UA-DETRAC dataset show that the proposed method, though without appearance information, achieves performance competitive with state-of-the-art (SOTA) trackers. The source code will be available at https://github.com/GaoangW/LGMTracker.
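To make the embedding idea concrete, the sketch below shows one way a GCN can map a tracklet's box sequence to a fixed-size embedding: each box becomes a graph node, temporally adjacent boxes are connected, and one normalized graph-convolution layer is mean-pooled into a tracklet vector. This is a minimal illustration under assumed conventions, not the paper's reconstruct-to-embed network; the function names, 4-d box features, and untrained weights are all hypothetical stand-ins.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution step: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))  # degree normalization
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(0.0, A_norm @ X @ W)         # ReLU activation

def tracklet_embedding(boxes, W):
    """Embed a tracklet (a list of T boxes) by running one GCN layer
    over a temporal chain graph and mean-pooling the node features."""
    T = len(boxes)
    A = np.zeros((T, T))
    for t in range(T - 1):                         # connect adjacent frames
        A[t, t + 1] = A[t + 1, t] = 1.0
    X = np.asarray(boxes, dtype=float)             # (T, 4): x, y, w, h
    return gcn_layer(X, A, W).mean(axis=0)         # one vector per tracklet

# Toy usage: compare two tracklets by cosine similarity of embeddings.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 16))                       # untrained, for illustration
e1 = tracklet_embedding([(10, 20, 50, 40), (12, 21, 50, 40)], W)
e2 = tracklet_embedding([(11, 20, 50, 40), (13, 22, 50, 40)], W)
cos = e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2) + 1e-9)
print(f"cosine similarity: {cos:.3f}")
```

With trained weights, tracklets whose motion patterns are consistent should map to nearby vectors, which is the kind of property motion-only association relies on.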
Related papers
- DenseTrack: Drone-based Crowd Tracking via Density-aware Motion-appearance Synergy [33.57923199717605]
Drone-based crowd tracking faces difficulties in accurately identifying and monitoring objects from an aerial perspective.
To address these challenges, we present the Density-aware Tracking (DenseTrack) framework.
DenseTrack capitalizes on crowd counting to precisely determine object locations, blending visual and motion cues to improve the tracking of small-scale objects.
arXiv Detail & Related papers (2024-07-24T13:39:07Z)
- MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as DanceTrack and SportsMOT.
arXiv Detail & Related papers (2023-06-05T04:24:11Z)
- OmniTracker: Unifying Object Tracking by Tracking-with-Detection [119.51012668709502]
OmniTracker is presented to resolve all tracking tasks with a fully shared network architecture, model weights, and inference pipeline.
Experiments on 7 tracking datasets, including LaSOT, TrackingNet, DAVIS16-17, MOT17, MOTS20, and YTVIS19, demonstrate that OmniTracker achieves on-par or even better results than both task-specific and unified tracking models.
arXiv Detail & Related papers (2023-03-21T17:59:57Z) - CXTrack: Improving 3D Point Cloud Tracking with Contextual Information [59.55870742072618]
3D single object tracking plays an essential role in many applications, such as autonomous driving.
We propose CXTrack, a novel transformer-based network for 3D object tracking.
We show that CXTrack achieves state-of-the-art tracking performance while running at 29 FPS.
arXiv Detail & Related papers (2022-11-12T11:29:01Z) - Vehicle Detection and Tracking From Surveillance Cameras in Urban Scenes [9.54261903220931]
We propose a multi-vehicle detection and tracking system following the tracking-by-detection paradigm.
Our method extends an Intersection-over-Union (IoU)-based tracker with vehicle re-identification features; a generic sketch of plain IoU association appears after this entry.
We outperform our baseline MOT method on the UA-DETRAC benchmark while maintaining a total processing speed suitable for online use cases.
arXiv Detail & Related papers (2021-09-25T18:21:44Z)
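For background on the IoU-based paradigm the entry above extends, below is a minimal, generic tracking-by-detection association step: greedily match existing track boxes to new detections by IoU overlap. This is textbook IoU tracking under an assumed (x1, y1, x2, y2) box convention, not the cited system (which additionally uses re-identification features); the threshold and all names are illustrative.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def associate(track_boxes, detections, iou_thresh=0.3):
    """Greedy IoU matching; returns (track_idx, det_idx) pairs."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(track_boxes)
                    for di, d in enumerate(detections)),
                   reverse=True)                    # best overlaps first
    used_t, used_d, matches = set(), set(), []
    for score, ti, di in pairs:
        if score >= iou_thresh and ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches

# Toy usage: two tracks, two detections (order shuffled).
tracks = [(0, 0, 10, 10), (50, 50, 60, 60)]
dets = [(49, 51, 59, 61), (1, 0, 11, 10)]
print(associate(tracks, dets))  # -> [(0, 1), (1, 0)]
```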
- Exploring Simple 3D Multi-Object Tracking for Autonomous Driving [10.921208239968827]
3D multi-object tracking in LiDAR point clouds is a key ingredient for self-driving vehicles.
Existing methods are predominantly based on the tracking-by-detection pipeline and inevitably require a matching step for the detection association.
We present SimTrack to simplify the hand-crafted tracking paradigm by proposing an end-to-end trainable model for joint detection and tracking from raw point clouds.
arXiv Detail & Related papers (2021-08-23T17:59:22Z)
- Track to Detect and Segment: An Online Multi-Object Tracker [81.15608245513208]
TraDeS is an online joint detection and tracking model, exploiting tracking clues to assist detection end-to-end.
TraDeS infers the object tracking offset from a cost volume, which is used to propagate previous object features.
arXiv Detail & Related papers (2021-03-16T02:34:06Z)
- DEFT: Detection Embeddings for Tracking [3.326320568999945]
We propose an efficient joint detection and tracking model named DEFT.
Our approach relies on an appearance-based object matching network jointly learned with an underlying object detection network.
DEFT has comparable accuracy and speed to the top methods on 2D online tracking leaderboards.
arXiv Detail & Related papers (2021-02-03T20:00:44Z)
- Probabilistic 3D Multi-Modal, Multi-Object Tracking for Autonomous Driving [22.693895321632507]
We propose a probabilistic, multi-modal, multi-object tracking system consisting of different trainable modules.
We show that our method outperforms the current state-of-the-art on the nuScenes tracking dataset.
arXiv Detail & Related papers (2020-12-26T15:00:54Z)
- Tracklets Predicting Based Adaptive Graph Tracking [51.352829280902114]
We present TPAGT, an accurate, end-to-end learning framework for multi-object tracking.
It re-extracts tracklet features in the current frame based on motion prediction, which is key to solving the problem of inconsistent features.
arXiv Detail & Related papers (2020-10-18T16:16:49Z)