SeqTrack3D: Exploring Sequence Information for Robust 3D Point Cloud
Tracking
- URL: http://arxiv.org/abs/2402.16249v1
- Date: Mon, 26 Feb 2024 02:14:54 GMT
- Title: SeqTrack3D: Exploring Sequence Information for Robust 3D Point Cloud
Tracking
- Authors: Yu Lin, Zhiheng Li, Yubo Cui, Zheng Fang
- Abstract summary: We introduce Sequence-to-Sequence tracking paradigm and a tracker named SeqTrack3D to capture target motion across continuous frames.
This novel method ensures robust tracking by leveraging location priors from historical boxes, even in scenes with sparse points.
Experiments conducted on large-scale datasets show that SeqTrack3D achieves new state-of-the-art performance.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D single object tracking (SOT) is an important and challenging task for
autonomous driving and mobile robotics. Most existing methods perform tracking
between two consecutive frames while ignoring the target's motion patterns
over a series of frames, which causes performance degradation in
scenes with sparse points. To break through this limitation, we introduce a
Sequence-to-Sequence tracking paradigm and a tracker named SeqTrack3D to
capture target motion across continuous frames. Previous methods primarily
adopt one of three strategies: matching two consecutive point clouds,
predicting relative motion, or utilizing sequential point clouds to address
feature degradation. In contrast, our SeqTrack3D combines historical point
clouds with bounding box sequences, ensuring robust tracking by leveraging
location priors from historical boxes even in scenes with sparse points.
Extensive experiments conducted on large-scale datasets show that SeqTrack3D
achieves new state-of-the-art performance, improving by 6.00% on NuScenes and
14.13% on the Waymo dataset. The code will be made public at
https://github.com/aron-lin/seqtrack3d.
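The core idea of the abstract, feeding the network both recent point clouds and the historical bounding boxes as explicit location priors, can be sketched as follows. This is a hypothetical illustration, not the paper's actual implementation: the function name `build_seq2seq_input`, the frame-index tagging, and the 7-parameter box layout `(x, y, z, w, l, h, yaw)` are assumptions for the sake of a minimal, runnable example.

```python
import numpy as np

def build_seq2seq_input(point_clouds, boxes, seq_len=3):
    """Hypothetical sketch: assemble a sequence-to-sequence tracking input
    from the last `seq_len` frames of point clouds and historical boxes.

    point_clouds: list of (N_i, 3) arrays, ordered oldest to newest
    boxes: list of 7-vectors (x, y, z, w, l, h, yaw), one per past frame
    """
    pts, prior = [], []
    for t in range(-seq_len, 0):
        cloud = point_clouds[t]
        # Tag each point with its relative frame index so temporal order
        # survives concatenation into a single point set.
        stamp = np.full((cloud.shape[0], 1), float(t))
        pts.append(np.hstack([cloud, stamp]))
        prior.append(np.asarray(boxes[t], dtype=np.float64))
    # The stacked historical boxes act as explicit location priors even
    # when the current frame has very few points on the target.
    return np.vstack(pts), np.stack(prior)
```

A tracker consuming this input could fall back on the box-sequence prior when the point component is too sparse, which is the robustness mechanism the abstract describes.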
Related papers
- 3D Single-object Tracking in Point Clouds with High Temporal Variation [79.5863632942935]
High temporal variation of point clouds is the key challenge of 3D single-object tracking (3D SOT)
Existing approaches rely on the assumption that the shape variation of the point clouds and the motion of the objects across neighboring frames are smooth.
We present a novel framework for 3D SOT in point clouds with high temporal variation, called HVTrack.
arXiv Detail & Related papers (2024-08-04T14:57:28Z)
- EasyTrack: Efficient and Compact One-stream 3D Point Clouds Tracker [35.74677036815288]
We propose a neat and compact one-stream transformer 3D SOT paradigm, termed EasyTrack.
A 3D point clouds tracking feature pre-training module is developed to exploit the masked autoencoding for learning 3D point clouds tracking representations.
A target location network in the dense bird's eye view (BEV) feature space is constructed for target classification and regression.
arXiv Detail & Related papers (2024-04-09T02:47:52Z)
- Motion-to-Matching: A Mixed Paradigm for 3D Single Object Tracking [27.805298263103495]
We propose MTM-Tracker, which combines motion modeling with feature matching into a single network.
In the first stage, we exploit the continuous historical boxes as a motion prior and propose an encoder-decoder structure to coarsely locate the target.
In the second stage, we introduce a feature interaction module to extract motion-aware features from consecutive point clouds and match them to refine target movement.
arXiv Detail & Related papers (2023-08-23T02:40:51Z)
- ByteTrackV2: 2D and 3D Multi-Object Tracking by Associating Every Detection Box [81.45219802386444]
Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects across video frames.
We propose a hierarchical data association strategy to mine the true objects in low-score detection boxes.
In 3D scenarios, it is much easier for the tracker to predict object velocities in world coordinates.
arXiv Detail & Related papers (2023-03-27T15:35:21Z)
- An Effective Motion-Centric Paradigm for 3D Single Object Tracking in Point Clouds [50.19288542498838]
3D single object tracking in LiDAR point clouds (LiDAR SOT) plays a crucial role in autonomous driving.
Current approaches all follow the Siamese paradigm based on appearance matching.
We introduce a motion-centric paradigm to handle LiDAR SOT from a new perspective.
arXiv Detail & Related papers (2023-03-21T17:28:44Z)
- Modeling Continuous Motion for 3D Point Cloud Object Tracking [54.48716096286417]
This paper presents a novel approach that views each tracklet as a continuous stream.
At each timestamp, only the current frame is fed into the network to interact with multi-frame historical features stored in a memory bank.
To enhance the utilization of multi-frame features for robust tracking, a contrastive sequence enhancement strategy is proposed.
arXiv Detail & Related papers (2023-03-14T02:58:27Z)
- CXTrack: Improving 3D Point Cloud Tracking with Contextual Information [59.55870742072618]
3D single object tracking plays an essential role in many applications, such as autonomous driving.
We propose CXTrack, a novel transformer-based network for 3D object tracking.
We show that CXTrack achieves state-of-the-art tracking performance while running at 29 FPS.
arXiv Detail & Related papers (2022-11-12T11:29:01Z)
- Beyond 3D Siamese Tracking: A Motion-Centric Paradigm for 3D Single Object Tracking in Point Clouds [39.41305358466479]
3D single object tracking in LiDAR point clouds plays a crucial role in autonomous driving.
Current approaches all follow the Siamese paradigm based on appearance matching.
We introduce a motion-centric paradigm to handle 3D SOT from a new perspective.
arXiv Detail & Related papers (2022-03-03T14:20:10Z)
- Monocular Quasi-Dense 3D Object Tracking [99.51683944057191]
A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving.
We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform.
arXiv Detail & Related papers (2021-03-12T15:30:02Z)
- PointTrackNet: An End-to-End Network For 3-D Object Detection and Tracking From Point Clouds [13.174385375232161]
We propose PointTrackNet, an end-to-end 3-D object detection and tracking network.
It generates foreground masks, 3-D bounding boxes, and point-wise tracking association displacements for each detected object.
arXiv Detail & Related papers (2020-02-26T15:19:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.