Beyond 3D Siamese Tracking: A Motion-Centric Paradigm for 3D Single
Object Tracking in Point Clouds
- URL: http://arxiv.org/abs/2203.01730v1
- Date: Thu, 3 Mar 2022 14:20:10 GMT
- Title: Beyond 3D Siamese Tracking: A Motion-Centric Paradigm for 3D Single
Object Tracking in Point Clouds
- Authors: Chaoda Zheng, Xu Yan, Haiming Zhang, Baoyuan Wang, Shenghui Cheng,
Shuguang Cui, Zhen Li
- Abstract summary: 3D single object tracking in LiDAR point clouds plays a crucial role in autonomous driving.
Current approaches all follow the Siamese paradigm based on appearance matching.
We introduce a motion-centric paradigm to handle 3D SOT from a new perspective.
- Score: 39.41305358466479
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D single object tracking (3D SOT) in LiDAR point clouds plays a crucial role
in autonomous driving. Current approaches all follow the Siamese paradigm based
on appearance matching. However, LiDAR point clouds are usually textureless and
incomplete, which hinders effective appearance matching. Besides, previous
methods greatly overlook the critical motion clues among targets. In this work,
beyond 3D Siamese tracking, we introduce a motion-centric paradigm to handle 3D
SOT from a new perspective. Following this paradigm, we propose a matching-free
two-stage tracker M^2-Track. In the 1st stage, M^2-Track localizes the target
within successive frames via motion transformation. It then refines the target
box through motion-assisted shape completion in the 2nd stage. Extensive
experiments confirm that M^2-Track significantly outperforms previous
state-of-the-art methods on three large-scale datasets while running at 57 FPS
(~8%, ~17%, and ~22% precision gains on KITTI, NuScenes, and Waymo Open
Dataset, respectively). Further analysis verifies each component's effectiveness and
shows the motion-centric paradigm's promising potential when combined with
appearance matching.
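As a rough illustration of the first stage (a minimal sketch with a hypothetical box parameterization, not the authors' implementation): a network regresses the target's inter-frame rigid motion, and the previous box is transformed by it to localize the target in the current frame.

```python
import math

def apply_relative_motion(prev_box, motion):
    """Localize the target in the current frame by applying a predicted
    inter-frame rigid motion to the previous target box.

    prev_box: (x, y, z, yaw) -- box center and heading at frame t-1
    motion:   (dx, dy, dz, dyaw) -- relative motion, e.g. regressed by a
              network from two consecutive point clouds (assumption)
    """
    x, y, z, yaw = prev_box
    dx, dy, dz, dyaw = motion
    # Translate the center and rotate the heading by the relative motion;
    # a second stage would then refine this box using completed shape cues.
    return (x + dx, y + dy, z + dz, (yaw + dyaw) % (2 * math.pi))

# Hypothetical example: the target moved ~1.2 m and turned slightly.
box_t = apply_relative_motion((5.0, 2.0, 0.0, 0.0), (1.2, 0.1, 0.0, 0.05))
```

Because the box is obtained by transforming the previous one rather than by matching appearance templates, sparse or textureless point clouds are less of a problem.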
Related papers
- SeqTrack3D: Exploring Sequence Information for Robust 3D Point Cloud
Tracking [26.405519771454102]
We introduce Sequence-to-Sequence tracking paradigm and a tracker named SeqTrack3D to capture target motion across continuous frames.
This novel method ensures robust tracking by leveraging location priors from historical boxes, even in scenes with sparse points.
Experiments conducted on large-scale datasets show that SeqTrack3D achieves new state-of-the-art performance.
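A location prior from historical boxes can be sketched as a simple constant-velocity extrapolation (an illustrative stand-in for SeqTrack3D's learned sequence model; the function name and parameterization are hypothetical):

```python
def location_prior(centers):
    """Extrapolate the next box center from the two most recent
    historical centers under a constant-velocity assumption."""
    (x0, y0, z0), (x1, y1, z1) = centers[-2], centers[-1]
    return (2 * x1 - x0, 2 * y1 - y0, 2 * z1 - z0)

# Even in scenes with sparse points, such a prior narrows the search region.
prior = location_prior([(0.0, 0.0, 0.0), (1.0, 0.5, 0.0)])
```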
arXiv Detail & Related papers (2024-02-26T02:14:54Z)
- Motion-to-Matching: A Mixed Paradigm for 3D Single Object Tracking [27.805298263103495]
We propose MTM-Tracker, which combines motion modeling with feature matching into a single network.
In the first stage, we exploit the continuous historical boxes as motion prior and propose an encoder-decoder structure to locate target coarsely.
In the second stage, we introduce a feature interaction module to extract motion-aware features from consecutive point clouds and match them to refine target movement.
arXiv Detail & Related papers (2023-08-23T02:40:51Z)
- Delving into Motion-Aware Matching for Monocular 3D Object Tracking [81.68608983602581]
We find that the motion cue of objects along different time frames is critical in 3D multi-object tracking.
We propose MoMA-M3T, a framework that mainly consists of three motion-aware components.
We conduct extensive experiments on the nuScenes and KITTI datasets to demonstrate our MoMA-M3T achieves competitive performance against state-of-the-art methods.
arXiv Detail & Related papers (2023-08-22T17:53:58Z)
- ByteTrackV2: 2D and 3D Multi-Object Tracking by Associating Every Detection Box [81.45219802386444]
Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects across video frames.
We propose a hierarchical data association strategy to mine the true objects in low-score detection boxes.
In 3D scenarios, it is much easier for the tracker to predict object velocities in world coordinates.
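The hierarchical association strategy can be sketched as a two-pass matcher in the spirit of ByteTrack (simplified: greedy center-distance matching stands in for IoU association, and all names and thresholds are illustrative):

```python
def greedy_match(tracks, dets, max_dist=2.0):
    """Greedy center-distance matching (a stand-in for IoU association)."""
    matched, leftover, used = [], [], set()
    for t in tracks:
        best, best_d = None, max_dist
        for i, d in enumerate(dets):
            if i in used:
                continue
            dist = ((t["x"] - d["x"]) ** 2 + (t["y"] - d["y"]) ** 2) ** 0.5
            if dist < best_d:
                best, best_d = i, dist
        if best is None:
            leftover.append(t)
        else:
            used.add(best)
            matched.append((t["id"], best))
    return matched, leftover

def hierarchical_associate(tracks, detections, high_thresh=0.6):
    """Two-pass association: match tracks to high-score boxes first,
    then mine the remaining tracks against low-score boxes, which can
    recover occluded or partially observed objects."""
    high = [d for d in detections if d["score"] >= high_thresh]
    low = [d for d in detections if d["score"] < high_thresh]
    matched_hi, leftover = greedy_match(tracks, high)
    matched_lo, unmatched = greedy_match(leftover, low)
    return matched_hi, matched_lo, unmatched
```

Here `matched_lo` holds track-to-low-score-box pairs that a single-pass matcher would have discarded along with the low-confidence detections.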
arXiv Detail & Related papers (2023-03-27T15:35:21Z)
- An Effective Motion-Centric Paradigm for 3D Single Object Tracking in Point Clouds [50.19288542498838]
3D single object tracking in LiDAR point clouds (LiDAR SOT) plays a crucial role in autonomous driving.
Current approaches all follow the Siamese paradigm based on appearance matching.
We introduce a motion-centric paradigm to handle LiDAR SOT from a new perspective.
arXiv Detail & Related papers (2023-03-21T17:28:44Z)
- A Lightweight and Detector-free 3D Single Object Tracker on Point Clouds [50.54083964183614]
It is non-trivial to perform accurate target-specific detection since the point cloud of objects in raw LiDAR scans is usually sparse and incomplete.
We propose DMT, a Detector-free Motion prediction based 3D Tracking network that totally removes the usage of complicated 3D detectors.
arXiv Detail & Related papers (2022-03-08T17:49:07Z)
- Monocular Quasi-Dense 3D Object Tracking [99.51683944057191]
A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving.
We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform.
arXiv Detail & Related papers (2021-03-12T15:30:02Z)
- F-Siamese Tracker: A Frustum-based Double Siamese Network for 3D Single Object Tracking [12.644452175343059]
A main challenge in 3D single object tracking is how to reduce search space for generating appropriate 3D candidates.
Instead of relying on 3D proposals, we produce 2D region proposals which are then extruded into 3D viewing frustums.
We perform an online accuracy validation on the 3D frustum to generate refined point cloud searching space.
arXiv Detail & Related papers (2020-10-22T08:01:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.